Updates from: 03/31/2022 01:11:53
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Userjourneys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/userjourneys.md
Preconditions can check multiple preconditions. The following example checks whe
Identity provider selection lets users select an action from a list of options. The identity provider selection consists of a pair of orchestration steps:

1. **Buttons** - It starts with a step of type `ClaimsProviderSelection` or `CombinedSignInAndSignUp` that contains a list of options a user can choose from. The order of the options inside the `ClaimsProviderSelections` element controls the order of the buttons presented to the user.
-2. **Actions** - Followed by type of `ClaimsExchange`. The ClaimsExchange contains list of actions. The action is a reference to a technical profile, such as [OAuth2](oauth2-technical-profile.md), [OpenID Connect](openid-connect-technical-profile.md), [claims transformation](claims-transformation-technical-profile.md), or [self-asserted](self-asserted-technical-profile.md). The When a user clicks on one of the buttons, the corresponding action is executed.
+2. **Actions** - Followed by a step of type `ClaimsExchange`. The ClaimsExchange contains a list of actions. The action is a reference to a technical profile, such as [OAuth2](oauth2-technical-profile.md), [OpenID Connect](openid-connect-technical-profile.md), [claims transformation](claims-transformation-technical-profile.md), or [self-asserted](self-asserted-technical-profile.md). When a user clicks on one of the buttons, the corresponding action is executed.
The **ClaimsProviderSelections** element contains the following element:
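As a sketch of how this button/action pair fits together in a user journey (element names follow the Azure AD B2C custom policy schema; the exchange ID and technical profile name here are illustrative, not from this article):

```xml
<!-- Step 1: Buttons - the order of ClaimsProviderSelection elements
     controls the order of the buttons shown to the user. -->
<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
  <ClaimsProviderSelections>
    <ClaimsProviderSelection TargetClaimsExchangeId="FacebookExchange" />
  </ClaimsProviderSelections>
</OrchestrationStep>

<!-- Step 2: Actions - the ClaimsExchange whose Id matches the selected
     button's TargetClaimsExchangeId runs its technical profile. -->
<OrchestrationStep Order="2" Type="ClaimsExchange">
  <ClaimsExchanges>
    <ClaimsExchange Id="FacebookExchange" TechnicalProfileReferenceId="Facebook-OAUTH" />
  </ClaimsExchanges>
</OrchestrationStep>
```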
active-directory Howto Password Ban Bad On Premises Agent Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-ban-bad-on-premises-agent-versions.md
# Azure AD Password Protection agent version history
+## 1.2.177.1
+
+Release date: March 28, 2022
+
+* Fixed software version being incorrect
+
+## 1.2.177.0
+
+Release date: March 14, 2022
+
+* Minor bugfixes
+* Fixed issue with Azure AD Connect Agent Updater not being updated
+## 1.2.176.0
+
+Release date: June 4, 2021
active-directory Consent Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/consent-framework.md
Title: Azure AD consent framework
+ Title: Microsoft identity platform consent framework
-description: Learn about the consent framework in Azure Active Directory and how it makes it easy to develop multi-tenant web and native client applications.
+description: Learn about the consent framework in the Microsoft identity platform and how it applies to multi-tenant applications.
Previously updated : 03/14/2022 Last updated : 03/29/2022
-# Azure Active Directory consent framework
+# Microsoft identity platform consent framework
-The Azure Active Directory (Azure AD) consent framework makes it easy to develop multi-tenant web and native client applications. These applications allow sign-in by user accounts from an Azure AD tenant that's different from the one where the application is registered. They may also need to access web APIs such as the Microsoft Graph API (to access Azure AD, Intune, and services in Microsoft 365) and other Microsoft services' APIs, in addition to your own web APIs.
+Multi-tenant applications allow sign-ins by user accounts from Azure AD tenants other than the tenant in which the app was initially registered. The Microsoft identity platform consent framework enables a tenant administrator or user in these other tenants to consent to (or deny) an application's request for permission to access their resources.
-The framework is based on a user or an administrator giving consent to an application that asks to be registered in their directory, which may involve accessing directory data. For example, if a web client application needs to read calendar information about the user from Microsoft 365, that user is required to consent to the client application first. After consent is given, the client application will be able to call the Microsoft Graph API on behalf of the user, and use the calendar information as needed. The [Microsoft Graph API](https://developer.microsoft.com/graph) provides access to data in Microsoft 365 (like calendars and messages from Exchange, sites and lists from SharePoint, documents from OneDrive, notebooks from OneNote, tasks from Planner, and workbooks from Excel), as well as users and groups from Azure AD and other data objects from more Microsoft cloud services.
-
-The consent framework is built on OAuth 2.0 and its various flows, such as authorization code grant and client credentials grant, using public or confidential clients. By using OAuth 2.0, Azure AD makes it possible to build many different types of client applications--such as on a phone, tablet, server, or a web application--and gain access to the required resources.
-
-For more info about using the consent framework with OAuth2.0 authorization grants, see [Authorize access to web applications using OAuth 2.0 and Azure AD](v2-oauth2-auth-code-flow.md) and [Authentication scenarios for Azure AD](./authentication-vs-authorization.md). For info about getting authorized access to Microsoft 365 through Microsoft Graph, see [App authentication with Microsoft Graph](/graph/).
+For example, perhaps a web application requires read-only access to a user's calendar in Microsoft 365. It's the identity platform's consent framework that enables the prompt asking the user to consent to the app's request for permission to read their calendar. If the user consents, the application is able to call the Microsoft Graph API on their behalf and get their calendar data.
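As an illustrative sketch (not from the article), the consent prompt in this calendar example is triggered when the app sends the user's browser to the Microsoft identity platform authorize endpoint requesting the `Calendars.Read` Microsoft Graph scope; the client ID and redirect URI below are placeholders:

```python
from urllib.parse import urlencode

# Placeholder app registration values, for illustration only.
params = {
    "client_id": "00000000-0000-0000-0000-000000000000",
    "response_type": "code",
    "redirect_uri": "https://localhost/callback",
    "scope": "https://graph.microsoft.com/Calendars.Read",
}

# The "common" endpoint lets users from any Azure AD tenant sign in,
# which is what makes the app multi-tenant.
authorize_url = (
    "https://login.microsoftonline.com/common/oauth2/v2.0/authorize?"
    + urlencode(params)
)
print(authorize_url)
```

If the user grants the requested permission, the app receives an authorization code at the redirect URI and can redeem it for tokens.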
## Consent experience - an example
The following steps show you how the consent experience works for both the appli
## Next steps
-See [how to convert an app to be multi-tenant](howto-convert-app-to-be-multi-tenant.md)
+See [how to convert an app to multi-tenant](howto-convert-app-to-be-multi-tenant.md)
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sample-v2-code.md
Previously updated : 08/27/2021 Last updated : 03/29/2022
# Microsoft identity platform code samples
-These code samples, built and maintained by Microsoft, demonstrate authentication and authorization by using Azure AD and the Microsoft identity platform in several [application types](v2-app-types.md), development languages, and frameworks.
+These code samples are built and maintained by Microsoft to demonstrate usage of our authentication libraries with the Microsoft identity platform. Common authentication and authorization scenarios are implemented in several [application types](v2-app-types.md), development languages, and frameworks.
- Sign in users to web applications and provide authorized access to protected web APIs.
- Protect a web API by requiring an access token to perform API operations.
-Each code sample includes a _README.md_ file that describes how to build the project (if applicable) and run the sample application. Comments in the code help you understand critical sections that implementing authentication and authorization using authentication libraries and the identity platform.
+Each code sample includes a _README.md_ file describing how to build the project (if applicable) and run the sample application. Comments in the code help you understand how these libraries are used in the application to perform authentication and authorization by using the identity platform.
## Single-page applications
active-directory V2 Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-admin-consent.md
Previously updated : 12/18/2020 Last updated : 3/29/2022
# Admin consent on the Microsoft identity platform
-Some permissions require consent from an administrator before they can be granted within a tenant. You can also use the admin consent endpoint to grant permissions to an entire tenant.
+Some permissions require consent from an administrator before they can be granted within a tenant. You can also use the admin consent endpoint to grant permissions to an entire tenant.
## Recommended: Sign the user into your app

Typically, when you build an application that uses the admin consent endpoint, the app needs a page or view in which the admin can approve the app's permissions. This page can be part of the app's sign-up flow, part of the app's settings, or it can be a dedicated "connect" flow. In many cases, it makes sense for the app to show this "connect" view only after a user has signed in with a work or school Microsoft account.
-When you sign the user into your app, you can identify the organization to which the admin belongs before asking them to approve the necessary permissions. Although not strictly necessary, it can help you create a more intuitive experience for your organizational users. To sign the user in, follow the [Microsoft identity platform protocol tutorials](active-directory-v2-protocols.md).
+When you sign the user into your app, you can identify the organization to which the admin belongs before asking them to approve the necessary permissions. Although not strictly necessary, it can help you create a more intuitive experience for your organizational users.
## Request the permissions from a directory admin
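As a minimal sketch (values are placeholders, not from this article), the request to the v2.0 admin consent endpoint can be built like this; redirecting the admin's browser to the resulting URL prompts for tenant-wide consent:

```python
from urllib.parse import urlencode

# Placeholder values -- substitute your tenant ID (or "organizations"),
# your app's client ID, and a redirect URI registered on the app.
tenant = "organizations"
params = {
    "client_id": "00000000-0000-0000-0000-000000000000",
    "redirect_uri": "https://localhost/callback",
    "scope": "https://graph.microsoft.com/.default",
}

admin_consent_url = (
    f"https://login.microsoftonline.com/{tenant}/v2.0/adminconsent?"
    + urlencode(params)
)
print(admin_consent_url)
```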
active-directory Direct Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation.md
Previously updated : 02/18/2022 Last updated : 03/29/2022
# Federation with SAML/WS-Fed identity providers for guest users (preview)

> [!NOTE]
+>
>- *Direct federation* in Azure Active Directory is now referred to as *SAML/WS-Fed identity provider (IdP) federation*.
>- SAML/WS-Fed IdP federation is a public preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).

This article describes how to set up federation with any organization whose identity provider (IdP) supports the SAML 2.0 or WS-Fed protocol. When you set up federation with a partner's IdP, new guest users from that domain can use their own IdP-managed organizational account to sign in to your Azure AD tenant and start collaborating with you. There's no need for the guest user to create a separate Azure AD account.

> [!IMPORTANT]
-> - In the SAML request sent by Azure AD for external federations, the Issuer URL is a tenanted endpoint. For any new federations, we recommend that all our partners set the audience of the SAML or WS-Fed based IdP to a tenanted endpoint. Refer to the [SAML 2.0](#required-saml-20-attributes-and-claims) and [WS-Fed](#required-ws-fed-attributes-and-claims) required attributes and claims sections below. Any existing federations configured with the global endpoint will continue to work, but new federations will stop working if your external IdP is expecting a global issuer URL in the SAML request.
+>
+>- We no longer support an allowlist of IdPs for new SAML/WS-Fed IdP federations. When you're setting up a new external federation, refer to [Step 1: Determine if the partner needs to update their DNS text records](#step-1-determine-if-the-partner-needs-to-update-their-dns-text-records).
+>- In the SAML request sent by Azure AD for external federations, the Issuer URL is a tenanted endpoint. For any new federations, we recommend that all our partners set the audience of the SAML or WS-Fed based IdP to a tenanted endpoint. Refer to the [SAML 2.0](#required-saml-20-attributes-and-claims) and [WS-Fed](#required-ws-fed-attributes-and-claims) required attributes and claims sections below. Any existing federations configured with the global endpoint will continue to work, but new federations will stop working if your external IdP is expecting a global issuer URL in the SAML request.
> - We've removed the limitation that required the authentication URL domain to match the target domain or be from an allowed IdP. For details, see [Step 1: Determine if the partner needs to update their DNS text records](#step-1-determine-if-the-partner-needs-to-update-their-dns-text-records).

## When is a guest user authenticated with SAML/WS-Fed IdP federation?
In any of these scenarios, you can update a guest user's authentication method
SAML/WS-Fed IdP federation is tied to domain namespaces, such as contoso.com and fabrikam.com. When establishing federation with AD FS or a third-party IdP, organizations associate one or more domain namespaces to these IdPs.
-## End-user experience
+## End-user experience
With SAML/WS-Fed IdP federation, guest users sign in to your Azure AD tenant using their own organizational account. When they access shared resources and are prompted for sign-in, users are redirected to their IdP. After successful sign-in, users are returned to Azure AD to access resources. Their refresh tokens are valid for 12 hours, the [default length for passthrough refresh token](../develop/active-directory-configurable-token-lifetimes.md#configurable-token-lifetime-properties) in Azure AD. If the federated IdP has SSO enabled, the user will experience SSO and will not see any sign-in prompt after initial authentication.
Currently, the Azure AD SAML/WS-Fed federation feature doesn't support sending a
Depending on the partner's IdP, the partner might need to update their DNS records to enable federation with you. Use the following steps to determine if DNS updates are needed.
-1. If the partner's IdP is one of these allowed IdPs, no DNS changes are needed (this list is subject to change):
-
- - accounts.google.com
- - pingidentity.com
- - login.pingone.com
- - okta.com
- - oktapreview.com
- - okta-emea.com
- - my.salesforce.com
- - federation.exostar.com
- - federation.exostartest.com
- - idaptive.app
- - idaptive.qa
+> [!NOTE]
+> We no longer support an allowlist of IdPs for new SAML/WS-Fed IdP federations.
-2. If the IdP is not one of the allowed providers listed in the previous step, check the partner's IdP passive authentication URL to see if the domain matches the target domain or a host within the target domain. In other words, when setting up federation for `fabrikam.com`:
+1. Check the partner's IdP passive authentication URL to see if the domain matches the target domain or a host within the target domain. In other words, when setting up federation for `fabrikam.com`:
- - If the authentication URL is `https://fabrikam.com` or `https://sts.fabrikam.com/adfs` (a host in the same domain), no DNS changes are needed.
- - If the authentication URL is `https://fabrikamconglomerate.com/adfs` or `https://fabrikam.com.uk/adfs`, the domain doesn't match the fabrikam.com domain, so the partner will need to add a text record for the authentication URL to their DNS configuration.
+ - If the passive authentication endpoint is `https://fabrikam.com` or `https://sts.fabrikam.com/adfs` (a host in the same domain), no DNS changes are needed.
+ - If the passive authentication endpoint is `https://fabrikamconglomerate.com/adfs` or `https://fabrikam.com.uk/adfs`, the domain doesn't match the fabrikam.com domain, so the partner will need to add a text record for the authentication URL to their DNS configuration.
-3. If DNS changes are needed based on the previous step, ask the partner to add a TXT record to their domain's DNS records, like the following example:
+1. If DNS changes are needed based on the previous step, ask the partner to add a TXT record to their domain's DNS records, like the following example:
`fabrikam.com.  IN   TXT   DirectFedAuthUrl=https://fabrikamconglomerate.com/adfs`
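The step-1 decision above can be sketched as a small check (an illustration only, not part of the product; `dns_txt_record_needed` is a hypothetical helper):

```python
from urllib.parse import urlparse

def dns_txt_record_needed(passive_auth_url: str, target_domain: str) -> bool:
    """A DNS TXT record is only needed when the passive authentication
    endpoint's host is neither the target domain nor a host within it."""
    host = urlparse(passive_auth_url).hostname or ""
    return not (host == target_domain or host.endswith("." + target_domain))

print(dns_txt_record_needed("https://sts.fabrikam.com/adfs", "fabrikam.com"))          # -> False
print(dns_txt_record_needed("https://fabrikamconglomerate.com/adfs", "fabrikam.com"))  # -> True
```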
active-directory Custom Security Attributes Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-overview.md
Here are some of the limits and constraints for custom security attributes.
> | Predefined values per attribute definition | 100 | |
> | Attribute value length | 64 | Unicode characters |
> | Attribute values assigned per object | 50 | Values can be distributed across single and multi-valued attributes.<br/>Example: 5 attributes with 10 values each or 50 attributes with 1 value each |
-> | Characters not allowed for:<br/>Attribute set name<br/>Attribute name | ``<space> ` ~ ! @ # $ % ^ & * ( ) _ - + = { [ } ] \| \ : ; " ' < , > . ? /`` | |
+> | Characters not allowed for:<br/>Attribute set name<br/>Attribute name | ``<space> ` ~ ! @ # $ % ^ & * ( ) _ - + = { [ } ] \| \ : ; " ' < , > . ? /`` | Attribute set name and attribute name cannot start with a number |
> | Characters not allowed for:<br/>Attribute values | `# % & * + \ : " / < > ?` | |

## Custom security attribute roles
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
To test sign-in with *seamless SSO*:
1. Ensure that the sign-in successfully appears in the [Azure AD sign-in activity report](../reports-monitoring/concept-sign-ins.md) by filtering with the UserPrincipalName.

To track user sign-ins that still occur on Active Directory Federation Services (AD FS) for selected staged rollout users, follow the instructions at [AD FS troubleshooting: Events and logging](/windows-server/identity/ad-fs/troubleshooting/ad-fs-tshoot-logging#types-of-events). Check vendor documentation about how to check this on third-party federation providers.
+
+ >[!NOTE]
+ >While users are in staged rollout with PHS, password changes might take up to 2 minutes to take effect due to sync time. Make sure to set expectations with your users to avoid helpdesk calls after they change their password.
## Monitoring

You can monitor the users and groups added to or removed from staged rollout, and user sign-ins while in staged rollout, using the new Hybrid Auth workbooks in the Azure portal.
active-directory Concept Identity Protection Risks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-risks.md
These risks are calculated offline using Microsoft's internal and external threa
| | |
| Leaked credentials | This risk detection type indicates that the user's valid credentials have been leaked. When cybercriminals compromise valid passwords of legitimate users, they often share those credentials. This sharing is typically done by posting publicly on the dark web, paste sites, or by trading and selling the credentials on the black market. When the Microsoft leaked credentials service acquires user credentials from the dark web, paste sites, or other sources, they are checked against Azure AD users' current valid credentials to find valid matches. For more information about leaked credentials, see [Common questions](#common-questions). |
| Azure AD threat intelligence | This risk detection type indicates user activity that is unusual for the given user or is consistent with known attack patterns based on Microsoft's internal and external threat intelligence sources. |
+| Possible attempt to access Primary Refresh Token (PRT)| This risk detection type is detected by Microsoft Defender for Endpoint (MDE). A Primary Refresh Token (PRT) is a key artifact of Azure AD authentication on Windows 10, Windows Server 2016, and later versions, iOS, and Android devices. It is a JSON Web Token (JWT) that's specially issued to Microsoft first-party token brokers to enable single sign-on (SSO) across the applications used on those devices. Attackers can attempt to access this resource to move laterally into an organization or perform credential theft. This detection will move users to high risk and will only fire in organizations that have deployed MDE. This is a low-volume detection that will be infrequently seen by most organizations. However, when it does occur it is high risk and users should be remediated.|
### Sign-in risk
active-directory Overview Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-reports.md
The [audit logs report](concept-audit-logs.md) provides you with records of syst
#### What Azure AD license do you need to access the audit logs report?
-The audit logs report is available for features for which you have licenses. If you have a license for a specific feature, you also have access to the audit log information for it. A deatiled feature comparison as per [different types of licenses](../fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) can be seen on the [Azure Active Directory pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). For more details, see [Azure Active Directory features and capabilities](../fundamentals/active-directory-whatis.md#which-features-work-in-azure-ad).
+The audit logs report is available for features for which you have licenses. If you have a license for a specific feature, you also have access to the audit log information for it. A detailed feature comparison based on the [different types of licenses](../fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) can be seen on the [Azure Active Directory pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). For more details, see [Azure Active Directory features and capabilities](../fundamentals/active-directory-whatis.md#which-features-work-in-azure-ad).
### Sign-ins report
active-directory Reference Azure Ad Sla Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-azure-ad-sla-performance.md
+
+ Title: Azure Active Directory SLA performance | Microsoft Docs
+description: Learn about the Azure AD SLA performance
+
+ Last updated : 03/15/2022
+
+# Azure Active Directory SLA performance
+
+As an identity admin, you may need to track Azure AD's service-level agreement (SLA) performance to make sure Azure AD can support your vital apps. This article shows how the Azure AD service has performed according to the [SLA for Azure Active Directory (Azure AD)](https://azure.microsoft.com/support/legal/sla/active-directory/v1_1/).
+
+You can use this article in discussions with app or business owners to help them understand the performance they can expect from Azure AD.
+
+## Service availability commitment
+
+Microsoft offers Premium Azure AD customers the opportunity to get a service credit if Azure AD fails to meet the documented SLA. When you request a service credit, Microsoft evaluates the SLA for your specific tenant; however, this global SLA can give you an indication of the general health of Azure AD over time.
+
+The SLA covers the following scenarios that are vital to businesses:
+
+- **User authentication:** Users are able to log in to the Azure Active Directory service.
+
+- **App access:** Azure Active Directory successfully emits the authentication and authorization tokens required for users to log in to applications connected to the service.
+
+For full details on SLA coverage and instructions on requesting a service credit, see the [SLA for Azure Active Directory (Azure AD)](https://azure.microsoft.com/support/legal/sla/active-directory/v1_1/).
+
+## No planned downtime
+
+You rely on Azure AD to provide identity and access management for your vital systems. To ensure Azure AD is available when business operations require it, Microsoft does not plan downtime for Azure AD system maintenance. Instead, maintenance is performed as the service runs, without customer impact.
+
+## Recent worldwide SLA performance
+
+To help you plan for moving workloads to Azure AD, we publish past SLA performance. These numbers show the level at which Azure AD met the requirements in the [SLA for Azure Active Directory (Azure AD)](https://azure.microsoft.com/support/legal/sla/active-directory/v1_1/), for all tenants.
+
+For each month, we truncate the SLA attainment at three places after the decimal. Numbers are not rounded up, so actual SLA attainment is higher than indicated.
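That truncation rule can be sketched as follows (illustrative only; `truncate_sla` is a hypothetical helper, not part of any published API):

```python
import math

def truncate_sla(attainment: float, places: int = 3) -> float:
    # Truncate at the given number of decimal places rather than rounding,
    # so the reported figure never overstates attainment.
    factor = 10 ** places
    return math.floor(attainment * factor) / factor

print(truncate_sla(99.99995))  # -> 99.999 (truncated, not rounded up)
```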
+
+| Month | 2021 | 2022 |
+| | | |
+| January | | 99.999% |
+| February | 99.999% | 99.999% |
+| March | 99.568% | |
+| April | 99.999% | |
+| May | 99.999% | |
+| June | 99.999% | |
+| July | 99.999% | |
+| August | 99.999% | |
+| September | 99.999% | |
+| October | 99.999% | |
+| November | 99.998% | |
+| December | 99.978% | |
+
+### How is Azure AD SLA measured?
+
+The Azure AD SLA is measured in a way that reflects customer authentication experience, rather than simply reporting on whether the system is available to outside connections. This means that the calculation is based on whether:
+
+- Users can authenticate
+- Azure AD successfully issues tokens for target apps after authentication
+
+The numbers above are a global total of Azure AD authentications across all customers and geographies.
+
+
+## Incident history
+
+All incidents that seriously impact Azure AD performance are documented in the [Azure status history](https://status.azure.com/status/history/). Not all events documented in Azure status history are serious enough to cause Azure AD to go below its SLA. You can view information about the impact of incidents, as well as a root cause analysis of what caused the incident and what steps Microsoft took to prevent future incidents.
+
+
+
+## Next steps
+
+* [Azure AD reports overview](overview-reports.md)
+* [Programmatic access to Azure AD reports](concept-reporting-api.md)
+* [Azure Active Directory risk detections](../identity-protection/overview-identity-protection.md)
active-directory Keepabl Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/keepabl-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Keepabl'
+description: Learn how to configure single sign-on between Azure Active Directory and Keepabl.
+ Last updated : 03/29/2022
+# Tutorial: Azure AD SSO integration with Keepabl
+
+In this tutorial, you'll learn how to integrate Keepabl with Azure Active Directory (Azure AD). When you integrate Keepabl with Azure AD, you can:
+
+* Control in Azure AD who has access to Keepabl.
+* Enable your users to be automatically signed in to Keepabl with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Keepabl single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Keepabl supports **SP** and **IDP** initiated SSO.
+
+## Add Keepabl from the gallery
+
+To configure the integration of Keepabl into Azure AD, you need to add Keepabl from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Keepabl** in the search box.
+1. Select **Keepabl** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Keepabl
+
+Configure and test Azure AD SSO with Keepabl using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Keepabl.
+
+To configure and test Azure AD SSO with Keepabl, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Keepabl SSO](#configure-keepabl-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create Keepabl test user](#create-keepabl-test-user)** - to have a counterpart of B.Simon in Keepabl that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Keepabl** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `keepabl_microsoft_azure_<OrganizationID>`
+
+ b. In the **Reply URL** text box, type one of the following URLs:
+
+ | **Reply URL** |
+ ||
+ | `https://app.keepabl.com/users/saml/auth` |
+ | `https://keepabl.herokuapp.com/users/saml/auth` |
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in SP initiated mode:
+
+ In the **Sign on URL** textbox, type a URL using one of the following patterns:
+
+ | **Sign on URL** |
+ |--|
+ | `https://app.keepabl.com/users/saml/sign_in?organization_id=<OrganizationID>` |
+ | `https://keepabl.herokuapp.com/users/saml/sign_in?organization_id=<OrganizationID>` |
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Keepabl Client support team](mailto:support@keepabl.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Keepabl** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
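For illustration, the SP-initiated Sign on URL pattern shown earlier resolves like this once an organization ID is substituted (the value below is a placeholder; the real `<OrganizationID>` comes from Keepabl support):

```python
from urllib.parse import urlencode

# "ORGANIZATION_ID" stands in for the <OrganizationID> value that the
# Keepabl support team supplies; it is a placeholder here.
organization_id = "ORGANIZATION_ID"

sign_on_url = (
    "https://app.keepabl.com/users/saml/sign_in?"
    + urlencode({"organization_id": organization_id})
)
print(sign_on_url)
```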
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Keepabl.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Keepabl**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Keepabl SSO
+
+To configure single sign-on on the **Keepabl** side, you need to send the downloaded **Certificate (Base64)** to the [Keepabl support team](mailto:support@keepabl.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Keepabl test user
+
+In this section, you create a user called Britta Simon in Keepabl. Work with [Keepabl support team](mailto:support@keepabl.com) to add the users in the Keepabl platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Keepabl Sign-on URL where you can initiate the login flow.
+
+* Go to the Keepabl Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Keepabl account for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in either mode. When you click the Keepabl tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the Keepabl account for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Keepabl, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Netsparker Enterprise Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netsparker-enterprise-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Netsparker Enterprise | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Netsparker Enterprise.
+ Title: 'Tutorial: Azure AD SSO integration with Invicti'
+description: Learn how to configure single sign-on between Azure Active Directory and Invicti.
Previously updated : 11/13/2020 Last updated : 03/28/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Netsparker Enterprise
+# Tutorial: Azure AD SSO integration with Invicti
-In this tutorial, you'll learn how to integrate Netsparker Enterprise with Azure Active Directory (Azure AD). When you integrate Netsparker Enterprise with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Invicti with Azure Active Directory (Azure AD). When you integrate Invicti with Azure AD, you can:
-* Control in Azure AD who has access to Netsparker Enterprise.
-* Enable your users to be automatically signed-in to Netsparker Enterprise with their Azure AD accounts.
+* Control in Azure AD who has access to Invicti.
+* Enable your users to be automatically signed-in to Invicti with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate Netsparker Enterprise with Azure
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Netsparker Enterprise single sign-on (SSO) enabled subscription.
+* Invicti single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Netsparker Enterprise supports **SP and IDP** initiated SSO
-* Netsparker Enterprise supports **Just In Time** user provisioning
+* Invicti supports **SP and IDP** initiated SSO.
+* Invicti supports **Just In Time** user provisioning.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+## Add Invicti from the gallery
-## Adding Netsparker Enterprise from the gallery
-
-To configure the integration of Netsparker Enterprise into Azure AD, you need to add Netsparker Enterprise from the gallery to your list of managed SaaS apps.
+To configure the integration of Invicti into Azure AD, you need to add Invicti from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Netsparker Enterprise** in the search box.
-1. Select **Netsparker Enterprise** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
+1. In the **Add from the gallery** section, type **Invicti** in the search box.
+1. Select **Invicti** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for Netsparker Enterprise
+## Configure and test Azure AD SSO for Invicti
-Configure and test Azure AD SSO with Netsparker Enterprise using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Netsparker Enterprise.
+Configure and test Azure AD SSO with Invicti using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Invicti.
-To configure and test Azure AD SSO with Netsparker Enterprise, perform the following steps:
+To configure and test Azure AD SSO with Invicti, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Netsparker Enterprise SSO](#configure-netsparker-enterprise-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Netsparker Enterprise test user](#create-netsparker-enterprise-test-user)** - to have a counterpart of B.Simon in Netsparker Enterprise that is linked to the Azure AD representation of user.
+1. **[Configure Invicti SSO](#configure-invicti-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Invicti test user](#create-invicti-test-user)** - to have a counterpart of B.Simon in Invicti that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Netsparker Enterprise** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Invicti** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings. ![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
In the **Reply URL** text box, type a URL using the following pattern: `https://www.netsparkercloud.com/account/assertionconsumerservice/?spId=<SPID>`
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://www.netsparkercloud.com/account/ssosignin/` > [!NOTE]
- > The Reply URL value is not real. Update the value with the actual Reply URL. Contact [Netsparker Enterprise Client support team](mailto:support@netsparker.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The Reply URL value is not real. Update the value with the actual Reply URL. Contact [Invicti Client support team](mailto:support@netsparker.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ![The Certificate download link](common/certificatebase64.png)
-1. On the **Set up Netsparker Enterprise** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up Invicti** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Netsparker Enterprise.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Invicti.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Netsparker Enterprise**.
+1. In the applications list, select **Invicti**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Netsparker Enterprise SSO
+## Configure Invicti SSO
-1. Log in to Netsparker Enterprise as an Administrator.
+1. Log in to Invicti as an Administrator.
-1. Go to the **Settings > Single
-Sign-On**.
+1. Go to the **Settings > Single Sign-On**.
1. In the **Single Sign-On** window, select the **Azure Active Directory** tab.
Sign-On**.
c. Paste the **Identifier** value, which you have copied from the Azure portal into the **IdP Identifier** field.
- e. Paste the **Reply URL** value, which you have copied from the Azure portal into the **SAML 2.0 Endpoint** field.
+ d. Paste the **Reply URL** value, which you have copied from the Azure portal into the **SAML 2.0 Endpoint** field.
- f. Open the downloaded **Certificate (Base64)** from the Azure portal into Notepad and paste the content into the **x.509 Certificate** textbox.
+ e. Open the downloaded **Certificate (Base64)** from the Azure portal into Notepad and paste the content into the **x.509 Certificate** textbox.
- g. Check **Enable Auto Provisioning** and **Require SAML assertions to be encrypted** as required.
+ f. Check **Enable Auto Provisioning** and **Require SAML assertions to be encrypted** as required.
- h. Click **Save Changes**.
+ g. Click **Save Changes**.
-### Create Netsparker Enterprise test user
+### Create Invicti test user
-In this section, a user called Britta Simon is created in Netsparker Enterprise. Netsparker Enterprise supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Netsparker Enterprise, a new one is created after authentication.
+In this section, a user called Britta Simon is created in Invicti. Invicti supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Invicti, a new one is created after authentication.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to Netsparker Enterprise Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Invicti Sign-on URL where you can initiate the login flow.
-* Go to Netsparker Enterprise Sign-on URL directly and initiate the login flow from there.
+* Go to Invicti Sign-on URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Netsparker Enterprise for which you set up the SSO
-
-You can also use Microsoft Access Panel to test the application in any mode. When you click the Netsparker Enterprise tile in the Access Panel, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Netsparker Enterprise for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Invicti account for which you set up the SSO.
+You can also use Microsoft My Apps to test the application in either mode. When you click the Invicti tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the Invicti account for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Netsparker Enterprise you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Invicti, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Swit Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/swit-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Swit'
+description: Learn how to configure single sign-on between Azure Active Directory and Swit.
++++++++ Last updated : 03/28/2022++++
+# Tutorial: Azure AD SSO integration with Swit
+
+In this tutorial, you'll learn how to integrate Swit with Azure Active Directory (Azure AD). When you integrate Swit with Azure AD, you can:
+
+* Control in Azure AD who has access to Swit.
+* Enable your users to be automatically signed-in to Swit with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Swit single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+> [!NOTE]
+> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Swit supports **SP** initiated SSO.
+
+## Add Swit from the gallery
+
+To configure the integration of Swit into Azure AD, you need to add Swit from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Swit** in the search box.
+1. Select **Swit** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Swit
+
+Configure and test Azure AD SSO with Swit using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Swit.
+
+To configure and test Azure AD SSO with Swit, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Swit SSO](#configure-swit-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Swit test user](#create-swit-test-user)** - to have a counterpart of B.Simon in Swit that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Swit** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<OrgName>.swit.io`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://saml.swit.io/saml/acs`
+
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://swit.io/auth/login?subdomain=<OrgName>`
+
+ > [!NOTE]
+ > These values are not real. Update the values with the actual Identifier and Sign on URL. Contact [Swit support team](mailto:help@swit.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Swit application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Swit application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | -- | |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+ | email | user.mail |
+ | username | user.displayname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Swit** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Swit.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Swit**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Swit SSO
+
+1. Log in to your Swit company site as an administrator.
+
+1. Go to **Admin console** in the bottom-left corner of the Administration page, then select **SAML configuration**.
+
+1. In the **SAML configuration** page, perform the following steps:
+
+ ![Screenshot shows the SSO Configuration.](./media/swit-tutorial/certificate.png "Configuration")
+
+    a. Select the **Enable single sign-on with SAML** button.
+
+ b. In the **SAML 2.0 Endpoint (HTTP)** textbox, paste the **Login URL** value which you have copied from the Azure portal.
+
+ c. In the **Identity Provider Issuer (Entity ID)** textbox, paste the **Azure AD Identifier** value which you have copied from the Azure portal.
+
+ d. Open the downloaded **Certificate (Base64)** from the Azure portal into Notepad and paste the content into the **Public Certificate** textbox.
+
+ e. Select **Allowed sign-in methods** from the dropdown.
+
+ f. Click **Save**.
+
+### Create Swit test user
+
+1. In a different web browser window, log into your Swit company site as an administrator.
+
+1. Go to **Admin console** > **Members&teams** and click **Invitations**.
+
+1. In the **Invitations** page, perform the following steps:
+
+ ![Screenshot shows the SSO members.](./media/swit-tutorial/user.png "Members")
+
+ a. In the **Invite people by email** textbox, type a valid email address.
+
+ b. Select **Roles** from the dropdown menu.
+
+ c. Select **Primary team** from the dropdown menu.
+
+ d. Click **Send Invitation**.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Swit Sign-on URL where you can initiate the login flow.
+
+* Go to the Swit Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Swit tile in My Apps, this will redirect to the Swit Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Swit, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-rbac.md
subjects:
name: groupObjectId ```
+> [!TIP]
+> If you want to create the RoleBinding for a single user, specify *kind: User* and replace *groupObjectId* with the user principal name (UPN) in the above sample.
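A minimal sketch of such a per-user RoleBinding; the namespace, role name, and UPN here are hypothetical examples to adapt to your cluster:

```yaml
# Sketch of a RoleBinding scoped to a single Azure AD user rather than a group.
# "dev", "dev-user-full-access", and the UPN are placeholder values.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dev-user-access
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dev-user-full-access
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  # The user principal name (UPN) of the Azure AD user
  name: aksdev@contoso.com
```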
+ Create the RoleBinding using the [kubectl apply][kubectl-apply] command and specify the filename of your YAML manifest: ```console
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
If you wish to re-enable the cluster autoscaler on an existing cluster, you can
> [!NOTE] > If you are planning on using the cluster autoscaler with nodepools that span multiple zones and leverage scheduling features related to zones such as volume topological scheduling, the recommendation is to have one nodepool per zone and enable `--balance-similar-node-groups` through the autoscaler profile. This will ensure that the autoscaler can scale up successfully and tries to keep the sizes of the nodepools balanced.
+## Configure the horizontal pod autoscaler
+
+Kubernetes supports [horizontal pod autoscaling][kubernetes-hpa] to adjust the number of pods in a deployment depending on CPU utilization or other select metrics. The [Metrics Server][metrics-server] is used to provide resource utilization to Kubernetes. You can configure horizontal pod autoscaling through the `kubectl autoscale` command or through a manifest. For more details on using the horizontal pod autoscaler, see [HorizontalPodAutoscaler Walkthrough][kubernetes-hpa-walkthrough].
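The manifest-based approach described above can be sketched as follows; the deployment name and the CPU threshold are hypothetical examples, not values from this article:

```yaml
# Sketch of a HorizontalPodAutoscaler that scales the hypothetical
# "my-app" deployment between 3 and 10 replicas, targeting 70% average
# CPU utilization as reported by the Metrics Server.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

Apply it with `kubectl apply -f` just like any other manifest; `kubectl autoscale deployment my-app --cpu-percent=70 --min=3 --max=10` achieves a similar result imperatively.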
+ ## Next steps This article showed you how to automatically scale the number of AKS nodes. You can also use the horizontal pod autoscaler to automatically adjust the number of pods that run your application. For steps on using the horizontal pod autoscaler, see [Scale applications in AKS][aks-scale-apps].
This article showed you how to automatically scale the number of AKS nodes. You
[az-aks-nodepool-update]: https://github.com/Azure/azure-cli-extensions/tree/master/src/aks-preview#enable-cluster-auto-scaler-for-a-node-pool [autoscaler-scaledown]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node [autoscaler-parameters]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca
-[kubernetes-faq]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#ca-doesnt-work-but-it-used-to-work-yesterday-why
+[kubernetes-faq]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#ca-doesnt-work-but-it-used-to-work-yesterday-why
+[kubernetes-hpa]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
+[kubernetes-hpa-walkthrough]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
+[metrics-server]: https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server
aks Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-storage.md
In AKS, four initial `StorageClasses` are created for cluster using the in-tree
| `azurefile` | Uses Azure Standard storage to create an Azure File Share. The reclaim policy ensures that the underlying Azure File Share is deleted when the persistent volume that used it is deleted. | | `azurefile-premium` | Uses Azure Premium storage to create an Azure File Share. The reclaim policy ensures that the underlying Azure File Share is deleted when the persistent volume that used it is deleted.|
-For clusters using the new Container Storage Interface (CSI) external plugins (preview) the following extra `StorageClasses` are created:
+For clusters using the [Container Storage Interface (CSI) drivers][csi-storage-drivers] the following extra `StorageClasses` are created:
| Permission | Reason | |||
For clusters using the new Container Storage Interface (CSI) external plugins (p
Unless you specify a StorageClass for a persistent volume, the default StorageClass will be used. Ensure volumes use the appropriate storage you need when requesting persistent volumes.
+> [!IMPORTANT]
+> Starting in Kubernetes version 1.21, AKS uses only CSI drivers, by default. The `default` class will be the same as `managed-csi`.
+ You can create a StorageClass for additional needs using `kubectl`. The following example uses Premium Managed Disks and specifies that the underlying Azure Disk should be *retained* when you delete the pod: ```yaml
For more information on core Kubernetes and AKS concepts, see the following arti
[aks-concepts-security]: concepts-security.md [aks-concepts-network]: concepts-network.md [operator-best-practices-storage]: operator-best-practices-storage.md
+[csi-storage-drivers]: csi-storage-drivers.md
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md
The required network rules and IP address dependencies are:
| Destination Endpoint | Protocol | Port | Use | |-|-|||
-| **`*:1194`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.<Region>:1194`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:1194`** <br/> *Or* <br/> **`APIServerPublicIP:1194`** `(only known after cluster creation)` | UDP | 1194 | For tunneled secure communication between the nodes and the control plane. This is not required for [private clusters](private-clusters.md)|
-| **`*:9000`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.<Region>:9000`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:9000`** <br/> *Or* <br/> **`APIServerPublicIP:9000`** `(only known after cluster creation)` | TCP | 9000 | For tunneled secure communication between the nodes and the control plane. This is not required for [private clusters](private-clusters.md) |
-| **`*:123`** or **`ntp.ubuntu.com:123`** (if using Azure Firewall network rules) | UDP | 123 | Required for Network Time Protocol (NTP) time synchronization on Linux nodes. |
+| **`*:1194`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.<Region>:1194`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:1194`** <br/> *Or* <br/> **`APIServerPublicIP:1194`** `(only known after cluster creation)` | UDP | 1194 | For tunneled secure communication between the nodes and the control plane. This is not required for [private clusters][aks-private-clusters], or for clusters with the *konnectivity-agent* enabled. |
+| **`*:9000`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.<Region>:9000`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:9000`** <br/> *Or* <br/> **`APIServerPublicIP:9000`** `(only known after cluster creation)` | TCP | 9000 | For tunneled secure communication between the nodes and the control plane. This is not required for [private clusters][aks-private-clusters], or for clusters with the *konnectivity-agent* enabled. |
+| **`*:123`** or **`ntp.ubuntu.com:123`** (if using Azure Firewall network rules) | UDP | 123 | Required for Network Time Protocol (NTP) time synchronization on Linux nodes. This is not required for nodes provisioned after March 2021. |
| **`CustomDNSIP:53`** `(if using custom DNS servers)` | UDP | 53 | If you're using custom DNS servers, you must ensure they're accessible by the cluster nodes. |
-| **`APIServerPublicIP:443`** `(if running pods/deployments that access the API Server)` | TCP | 443 | Required if running pods/deployments that access the API Server, those pods/deployments would use the API IP. This is not required for [private clusters](private-clusters.md) |
+| **`APIServerPublicIP:443`** `(if running pods/deployments that access the API Server)` | TCP | 443 | Required if running pods/deployments that access the API Server, those pods/deployments would use the API IP. This is not required for [private clusters][aks-private-clusters]. |
### Azure Global required FQDN / application rules
The following FQDN / application rules are required:
| Destination FQDN | Port | Use |
|-|--|-|
-| **`*.hcp.<location>.azmk8s.io`** | **`HTTPS:443`** | Required for Node <-> API server communication. Replace *\<location\>* with the region where your AKS cluster is deployed. |
+| **`*.hcp.<location>.azmk8s.io`** | **`HTTPS:443`** | Required for Node <-> API server communication. Replace *\<location\>* with the region where your AKS cluster is deployed. This is not required for [private clusters][aks-private-clusters]. |
| **`mcr.microsoft.com`** | **`HTTPS:443`** | Required to access images in Microsoft Container Registry (MCR). This registry contains first-party images/charts (for example, coreDNS). These images are required for the correct creation and functioning of the cluster, including scale and upgrade operations. |
| **`*.data.mcr.microsoft.com`** | **`HTTPS:443`** | Required for MCR storage backed by the Azure content delivery network (CDN). |
| **`management.azure.com`** | **`HTTPS:443`** | Required for Kubernetes operations against the Azure API. |
The following FQDN / application rules are required for AKS clusters that have G
|--|--|-|
| **`nvidia.github.io`** | **`HTTPS:443`** | This address is used for correct driver installation and operation on GPU-based nodes. |
| **`us.download.nvidia.com`** | **`HTTPS:443`** | This address is used for correct driver installation and operation on GPU-based nodes. |
-| **`apt.dockerproject.org`** | **`HTTPS:443`** | This address is used for correct driver installation and operation on GPU-based nodes. |
+| **`download.docker.com`** | **`HTTPS:443`** | This address is used for correct driver installation and operation on GPU-based nodes. |
## Windows Server based node pools
If you want to restrict how pods communicate between themselves and East-West tr
[aks-upgrade]: upgrade-cluster.md
[aks-support-policies]: support-policies.md
[aks-faq]: faq.md
+[aks-private-clusters]: private-clusters.md
aks Node Upgrade Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-upgrade-github-actions.md
Download and sign in to the Azure CLI.
1. From the Azure CLI, run the following command to generate a new username and password.

    ```azurecli-interactive
- az ad sp create-for-rbac --role Contributor -o json
+ az ad sp create-for-rbac --role Contributor --scopes /subscriptions/{subscriptionID} -o json
    ```

    The output should be similar to the following JSON:
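The actual output is elided here; as a hypothetical sketch, `az ad sp create-for-rbac` returns a JSON object along these lines (all values below are placeholders):

```json
{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "azure-cli-2022-03-30-00-00-00",
  "password": "<generated-client-secret>",
  "tenant": "00000000-0000-0000-0000-000000000000"
}
```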
aks Spark Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/spark-job.md
az group create --name mySparkCluster --location eastus
Create a Service Principal for the cluster. After it is created, you will need the Service Principal appId and password for the next command.

```azurecli
-az ad sp create-for-rbac --name SparkSP --role Contributor
+az ad sp create-for-rbac --name SparkSP --role Contributor --scopes /subscriptions/mySubscriptionID
```

Create the AKS cluster with nodes that are of size `Standard_D3_v2`, and values of appId and password passed as service-principal and client-secret parameters.
aks Tutorial Kubernetes Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-scale.md
azure-vote-front Deployment/azure-vote-front 0% / 50% 3 10
After a few minutes, with minimal load on the Azure Vote app, the number of pod replicas decreases automatically to three. You can use `kubectl get pods` again to see the unneeded pods being removed.
+> [!NOTE]
+> For additional examples on using the horizontal pod autoscaler, see [HorizontalPodAutoscaler Walkthrough][kubernetes-hpa-walkthrough].
+
## Manually scale AKS nodes

If you created your Kubernetes cluster using the commands in the previous tutorial, it has two nodes. You can adjust the number of nodes manually if you plan more or fewer container workloads on your cluster.
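Manually adjusting the node count can be sketched with the Azure CLI; the resource group and cluster names below are illustrative:

```azurecli
az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 3
```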
Advance to the next tutorial to learn how to update application in Kubernetes.
[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
[kubectl-scale]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#scale
[kubernetes-hpa]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
+[kubernetes-hpa-walkthrough]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
[metrics-server-github]: https://github.com/kubernetes-sigs/metrics-server/blob/master/README.md#deployment
[metrics-server]: https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server
aks Update Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/update-credentials.md
If you chose to update the existing service principal credentials in the previou
To create a service principal and then update the AKS cluster to use these new credentials, use the [az ad sp create-for-rbac][az-ad-sp-create] command.

```azurecli-interactive
-az ad sp create-for-rbac --role Contributor
+az ad sp create-for-rbac --role Contributor --scopes /subscriptions/mySubscriptionID
```

The output is similar to the following example. Make a note of your own `appId` and `password`. These values are used in the next step.
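With the new `appId` and `password` in hand, the cluster can then be pointed at the fresh credentials; a sketch (resource group and cluster names are illustrative, and the placeholder values come from the previous output):

```azurecli-interactive
az aks update-credentials \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --reset-service-principal \
    --service-principal <appId> \
    --client-secret <password>
```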
api-management Api Management Get Started Revise Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-get-started-revise-api.md
documentationcenter: ''
Previously updated : 02/09/2021
Last updated : 03/30/2022

# Tutorial: Use revisions to make non-breaking API changes safely
-When your API is ready to go and starts to be used by developers, you eventually need to make changes to that API and at the same time not disrupt callers of your API. It's also useful to let developers know about the changes you made.
+When your API is ready to go and is used by developers, you eventually need to make changes to that API and at the same time not disrupt callers of your API. It's also useful to let developers know about the changes you made.
-In Azure API Management, use *revisions* to make non-breaking API changes so you can model and test changes safely. When ready, you can make a revision current and replace your current API.
+In Azure API Management, use *revisions* to make non-breaking API changes so you can model and test changes safely. When ready, you can make a revision current and replace your current API.
For background, see [Versions & revisions](https://azure.microsoft.com/blog/versions-revisions/) and [API Versioning with Azure API Management](https://azure.microsoft.com/blog/api-versioning-with-azure-api-management/).
In this tutorial, you learn how to:
## Add a new revision
-1. Sign in to the [Azure portal](https://portal.azure.com), and navigate to your API Management instance.
+1. Sign in to the [Azure portal](https://portal.azure.com), and go to your API Management instance.
1. Select **APIs**.
2. Select **Demo Conference API** from the API list (or another API to which you want to add revisions).
3. Select the **Revisions** tab.
In this tutorial, you learn how to:
> [!TIP] > You can also select **Add revision** in the context menu (**...**) of the API.
-5. Provide a description for your new revision, to help remember what it will be used for.
-6. Select **Create**,
+5. Provide a description for your new revision, to help remember what it'll be used for.
+6. Select **Create**.
7. Your new revision is now created. > [!NOTE]
In this tutorial, you learn how to:
> [!TIP]
> Use the revision selector to switch between revisions that you wish to work on.

1. Select **+ Add Operation**.
-1. Set your new operation to be **POST**, and the Name, Display Name and URL of the operation as **test**.
+1. Set your new operation to **POST**, and the Name, Display Name and URL of the operation as **test**.
1. **Save** your new operation.

   :::image type="content" source="media/api-management-getstarted-revise-api/07-add-revisions-02-make-changes.png" alt-text="Modify revision":::

1. You've now made a change to **Revision 2**. Use the **revision selector** near the top of the page to switch back to **Revision 1**.
-1. Notice that your new operation does not appear in **Revision 1**.
+1. Notice that your new operation doesn't appear in **Revision 1**.
## Make your revision current and add a change log entry
In this tutorial, you learn how to:
1. Select the **Revisions** tab from the menu near the top of the page.
1. Open the context menu (**...**) for **Revision 2**.
1. Select **Make current**.
-1. Select the **Post to Public Change log for this API** checkbox, if you want to post notes about this change. Provide a description for your change that developers see, for example: **Testing revisions. Added new "test" operation.**
+1. Select the **Post to Public Change log for this API** checkbox, if you want to post notes about this change. Provide a description for your change that the developers can see, for example: **Testing revisions. Added new "test" operation.**
1. **Revision 2** is now current.

   :::image type="content" source="media/api-management-getstarted-revise-api/revisions-menu.png" alt-text="Revision menu in Revisions window":::
Use this procedure to create and update a release.
    --api-id echo-api --service-name apim-hello-world --output table
    ```
- The notes you specify appear in the changelog. You can see them in the output of the previous command.
+ The notes you specify appear in the change log. You can see them in the output of the previous command.
-1. When you create a release, the `--notes` parameter is optional. You can add or change the notes later by using the [az apim api release update](/cli/azure/apim/api/release#az-apim-api-release-update) command:
+1. When you create a release, the `--notes` parameter is optional. You can add or change the notes later using the [az apim api release update](/cli/azure/apim/api/release#az_apim_api_release_update) command:
```azurecli az apim api release update --resource-group apim-hello-word-resource-group \
If you've tried the [developer portal](api-management-howto-developer-portal-cus
1. In the developer portal, select **APIs**, and then select **Demo Conference API**.
1. Notice your new **test** operation is now available.
1. Select **Changelog** near the API name.
-1. Notice that your change log entry appears in this list.
+1. Notice that your change log entry appears in the list.
## Next steps
api-management Restify Soap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/restify-soap-api.md
Title: Import SOAP API to Azure API Management and convert to REST using the por
description: Learn how to import a SOAP API to Azure API Management as a WSDL specification and convert it to a REST API. Then, test the API in the Azure portal.
Previously updated : 03/01/2022
Last updated : 03/29/2022
Complete the following quickstart: [Create an Azure API Management instance](get
1. Under **Create from definition**, select **WSDL**.

   ![SOAP API](./media/restify-soap-api/wsdl-api.png)
-1. In **WSDL specification**, enter the URL to your SOAP API, or click **Select a file** to select a local WSDL file.
+1. In **WSDL specification**, enter the URL to your SOAP API, or select **Select a file** to select a local WSDL file.
1. In **Import method**, select **SOAP to REST**. When this option is selected, API Management attempts to make an automatic transformation between XML and JSON. In this case, consumers should call the API as a RESTful API, which returns JSON. API Management converts each request to a SOAP call.
Operations can be called directly from the Azure portal, which provides a conven
2. Select the **Test** tab.
3. Select an operation.
- The page displays fields for query parameters and fields for the headers. One of the headers is **Ocp-Apim-Subscription-Key**, for the subscription key of the product that is associated with this API. If you created the API Management instance, you're an administrator already, so the key is filled in automatically.
+ The page shows fields for query parameters and fields for the headers. One of the headers is **Ocp-Apim-Subscription-Key**, for the subscription key of the product that is associated with this API. If you created the API Management instance, you're an admin already, so the key is filled in automatically.
1. Press **Send**. When the test is successful, the backend responds with **200 OK** and some data.
app-service Monitor App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-app-service.md
See [Monitoring *App Service* data reference](monitor-app-service-reference.md)
App Service also provides built-in diagnostics to assist with debugging apps. See [Enable diagnostics logging](troubleshoot-diagnostic-logs.md) for more information on enabling the built-in logs. To monitor App Service instances, see [Monitor App Service instances using Health check](monitor-instances-health-check.md).
+
+## Application Insights
+
+Application Insights monitors the availability, performance, and usage of your web applications. It uses the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations, and lets you diagnose errors without waiting for a user to report them. Application Insights includes connection points to a variety of development tools and integrates with Visual Studio to support your DevOps processes. Learn more about Application Insights in the [Application Monitoring for App Service Overview](../azure-monitor/app/azure-web-apps.md).
## Collection and routing

Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
app-service Overview Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-monitoring.md
+
+ Title: Monitoring overview
+description: Learn about the various monitoring options on App Service.
+keywords: app service, azure app service, monitoring, diagnostic settings, support, web app, troubleshooting,
+Last updated : 02/25/2022
+
+# Azure App Service monitoring overview
+
+Azure App Service provides several options for monitoring resources for availability, performance, and operation, such as Diagnostic Settings, Application Insights, Log stream, Metrics, Quotas and alerts, and Activity logs. This article clarifies the monitoring options on App Service and [provides scenarios](#monitoring-scenarios) for when each should be used.
+
+## Diagnostic Settings (via Azure Monitor)
+
+Azure Monitor is a monitoring service that provides a complete set of features to monitor your Azure resources, in addition to resources in other clouds and on-premises. The Azure Monitor data platform collects data into logs and metrics where they can be analyzed. App Service monitoring data can be shipped to Azure Monitor through Diagnostic Settings.
+
+Diagnostic settings let you export logs to other services, such as Log Analytics, a Storage account, and Event Hubs. With Log Analytics, you can query large amounts of data using the SQL-like Kusto Query Language. You can capture platform logs in Azure Monitor Logs as configured via Diagnostic Settings, and instrument your app further with the dedicated application performance management feature (Application Insights) for additional telemetry and logs.
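For example, once App Service HTTP logs are routed to a Log Analytics workspace, a query along these lines can surface failing URLs. This is a sketch; it assumes the `AppServiceHTTPLogs` category has been enabled in your diagnostic setting:

```kusto
AppServiceHTTPLogs
| where ScStatus >= 500
| summarize failures = count() by CsUriStem
| order by failures desc
```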
+
+For an end-to-end tutorial on Diagnostic Settings, see the article [Troubleshoot an App Service app with Azure Monitor](tutorial-troubleshoot-monitor.md).
+
+## Quotas and alerts
+
+Apps that are hosted in App Service are subject to certain limits on the resources they can use. [The limits](web-sites-monitor.md#understand-quotas) are defined by the App Service plan that's associated with the app. Metrics for an app or an App Service plan can be hooked up to alerts.
+
+## Metrics
+
+Build visualizations of [metrics](web-sites-monitor.md#understand-metrics) on Azure resources (web apps and App Service plans). Metrics can be viewed by aggregates on data (for example, average, max, and min), instances, time range, and other filters. Metrics can monitor performance, memory, CPU, and other attributes.
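Metrics can also be retrieved programmatically. A hedged sketch using the Azure CLI, where the resource ID, metric name, and interval are illustrative:

```azurecli
az monitor metrics list \
    --resource /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<app-name> \
    --metric "CpuTime" \
    --aggregation Total \
    --interval PT1H
```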
+
+## Activity logs
+View a historical log of [events changing your resource](get-resource-events.md#view-azure-activity-logs). Resource events help you understand any changes that were made to your underlying web app resources and take action as necessary. Event examples include scaling of instances, updates to application settings, restarting of the web app, and many more.
+
+## Application Insights (via Azure Monitor)
+
+[Application Insights](monitor-app-service.md#application-insights), a feature of Azure Monitor, is an extensible Application Performance Management (APM) service for developers and DevOps professionals. Use it to monitor your live applications. It automatically detects performance anomalies, and includes powerful analytics tools to help you diagnose issues and understand what users actually do with your app. The logs in Application Insights are generated by application code.
+
+## Log stream (via App Service Logs)
+Azure provides built-in diagnostics to assist during testing and development to debug an App Service app. [Log stream](troubleshoot-diagnostic-logs.md#stream-logs) can be used to get quick access to output and errors written by your application, and logs from the web server. These are standard output/error logs in addition to web server logs.
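As a sketch, enabling and streaming those logs from a recent Azure CLI might look like the following; the app and resource group names are placeholders:

```azurecli
az webapp log config --name <app-name> --resource-group <resource-group> --application-logging filesystem
az webapp log tail --name <app-name> --resource-group <resource-group>
```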
+
+## Monitoring scenarios
+
+The table below lists monitoring methods to use for different scenarios.
+
+|Scenario|Monitoring method |
+|-|--|
+|I want to monitor platform metrics and logs | (Azure Monitor) [Diagnostic Settings](troubleshoot-diagnostic-logs.md)|
+|I want to monitor application performance and usage | (Azure Monitor) [Application Insights](monitor-app-service.md#application-insights)|
+|I want to monitor built-in logs for testing and development|[Log stream](troubleshoot-diagnostic-logs.md#stream-logs)|
+|I want to monitor resource limits and configure alerts|[Quotas and alerts](web-sites-monitor.md)|
+|I want to monitor web app resource events|[Activity Logs](get-resource-events.md#view-azure-activity-logs)|
+|I want to monitor metrics visually|[Metrics](web-sites-monitor.md#metrics-granularity-and-retention-policy)|
+
+## Next steps
+* [Query logs with Azure Monitor](../azure-monitor/logs/log-query-overview.md)
+* [How to Monitor Azure App Service](web-sites-monitor.md)
+* [Troubleshooting Azure App Service with Azure Monitor](tutorial-troubleshoot-monitor.md)
+* [Monitor App Service with Azure Monitor](monitor-app-service.md)
application-gateway Ingress Controller Install New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-new.md
Follow the steps below to create an Azure Active Directory (AAD) [service princi
1. Create AD service principal ([Read more about Azure RBAC](../role-based-access-control/overview.md)):

    ```azurecli
- az ad sp create-for-rbac -o json > auth.json
+ az ad sp create-for-rbac --role Contributor --scopes /subscriptions/mySubscriptionID -o json > auth.json
    appId=$(jq -r ".appId" auth.json)
    password=$(jq -r ".password" auth.json)
    ```
azure-app-configuration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/overview.md
Previously updated : 02/19/2020
Last updated : 03/30/2022

# What is Azure App Configuration?
azure-app-configuration Use Key Vault References Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-spring-boot.md
To add a secret to the vault, you need to take just a few additional steps. In t
1. In this tutorial, you use a service principal for authentication to Key Vault. To create this service principal, use the Azure CLI [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command:

    ```azurecli
- az ad sp create-for-rbac -n "http://mySP" --role Contributor --sdk-auth
+ az ad sp create-for-rbac -n "http://mySP" --role Contributor --scopes /subscriptions/{subscription-id} --sdk-auth
    ```

    This operation returns a series of key/value pairs:
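The pairs themselves are elided above; with `--sdk-auth`, the command emits a JSON object roughly like the following abridged sketch (all values are placeholders):

```json
{
  "clientId": "00000000-0000-0000-0000-000000000000",
  "clientSecret": "<generated-client-secret>",
  "subscriptionId": "00000000-0000-0000-0000-000000000000",
  "tenantId": "00000000-0000-0000-0000-000000000000",
  "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
  "resourceManagerEndpointUrl": "https://management.azure.com/"
}
```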
azure-arc Managed Instance Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-features.md
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
| Encryption for backups | Yes |
| SQL Server Authentication | Yes |
| Azure Active Directory Authentication | No |
-| Windows Authentication | No |
+| Windows Authentication | Yes |
## <a name="RDBMSM"></a> RDBMS Manageability
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
Azure Arc resource bridge (preview) is part of the core Azure Arc platform, and
All management operations are performed from Azure; no local configuration is required on the appliance.
-> [!IMPORTANT]
-> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
-
## Overview

Azure resource bridge (preview) hosts other components such as Custom Locations, cluster extensions, and other Azure Arc agents in order to deliver the level of functionality required by the private cloud infrastructures it supports. This complex system is composed of three layers:
azure-arc Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/security-overview.md
Last updated 11/08/2021
This article describes the security configuration and considerations you should evaluate before deploying Azure Arc resource bridge (preview) in your enterprise.
-> [!IMPORTANT]
-> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
-
## Using a managed identity

By default, an Azure Active Directory system-assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) is created and assigned to the Azure Arc resource bridge (preview). Azure Arc resource bridge (preview) currently supports only a system-assigned identity. The `clusteridentityoperator` identity initiates the first outbound communication and fetches the Managed Service Identity (MSI) certificate used by other agents for communication with Azure.
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
This article provides information on troubleshooting and resolving issues that may occur while attempting to deploy, use, or remove the Azure Arc resource bridge (preview). The resource bridge is a packaged virtual machine, which hosts a *management* Kubernetes cluster. For general information, see [Azure Arc resource bridge (preview) overview](./overview.md).
-> [!IMPORTANT]
-> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
-
## Logs

For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [Az arcappliance log](placeholder for published ref API) command. This command needs to be run from the client machine where you deployed the Azure Arc resource bridge.
azure-arc Manage Vm Extensions Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-cli.md
Title: Enable VM extension using Azure CLI
description: This article describes how to deploy virtual machine extensions to Azure Arc-enabled servers running in hybrid cloud environments using the Azure CLI.
Previously updated : 10/28/2021
Last updated : 03/30/2022
az connectedmachine extension update --name "CustomScriptExtension" --type "Cust
When a new version of a supported VM extension is released, you can upgrade it to that latest release. To upgrade a VM extension, use [az connectedmachine upgrade-extension](/cli/azure/connectedmachine) with the `--machine-name`, `--resource-group`, and `--extension-targets` parameters.
-For the `--extension-targets` parameter, you need to specify the extension and the latest version available. To find out what the latest version available is, you can get this information from the **Extensions** page for the selected Arc-enabled server in the Azure portal, or by running [az vm extension image list](/cli/azure/vm/extension/image#az-vm-extension-image-list).
+For the `--extension-targets` parameter, you need to specify the extension and the latest version available. To find out what the latest version available is, you can get this information from the **Extensions** page for the selected Arc-enabled server in the Azure portal, or by running [az vm extension image list](/cli/azure/vm/extension/image#az-vm-extension-image-list). You may specify multiple extensions in a single upgrade request by providing a comma-separated list of extensions, defined by their publisher and type (separated by a period) and the target version for each extension, as shown in the example below.
To upgrade the Log Analytics agent extension for Windows that has a newer version available, run the following command:

```azurecli
-az connectedmachine upgrade-extension --machine-name "myMachineName" --resource-group "myResourceGroup --extension-targets --extension-targets "{\"MicrosoftMonitoringAgent\":{\"targetVersion\":\"1.0.18053.0\"}}""
+az connectedmachine upgrade-extension --machine-name "myMachineName" --resource-group "myResourceGroup" --extension-targets '{\"Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent\":{\"targetVersion\":\"1.0.18053.0\"}}'
```

You can review the version of installed VM extensions at any time by running the command [az connectedmachine extension list](/cli/azure/connectedmachine/extension#az-connectedmachine-extension-list). The `typeHandlerVersion` property value represents the version of the extension.
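Because `--extension-targets` takes a JSON map keyed by `publisher.type`, upgrading two extensions in one request might look like the following sketch; the second extension and both target versions are illustrative:

```azurecli
az connectedmachine upgrade-extension \
    --machine-name "myMachineName" \
    --resource-group "myResourceGroup" \
    --extension-targets '{"Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent":{"targetVersion":"1.0.18053.0"},"Microsoft.Azure.ActiveDirectory.AADSSHLoginForLinux":{"targetVersion":"1.0.1447.1"}}'
```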
azure-arc Manage Vm Extensions Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-powershell.md
Title: Enable VM extension using Azure PowerShell
description: This article describes how to deploy virtual machine extensions to Azure Arc-enabled servers running in hybrid cloud environments using Azure PowerShell.
Previously updated : 10/21/2021
Last updated : 03/30/2022
Run the following command on your Azure Arc-enabled server:
When the installation completes, the following message is returned:
-`The installed extension `Az.ConnectedMachine` is experimental and not covered by customer support. Please use with discretion.`
+`The installed extension 'Az.ConnectedMachine' is experimental and not covered by customer support. Please use with discretion.`
## Enable extension
Name Location PropertiesType ProvisioningState
custom  westus2  CustomScriptExtension  Succeeded
```
-## Update extensions
+## Update extension configuration
To reconfigure an installed extension, you can use the [Update-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/update-azconnectedmachineextension) cmdlet with the `-Name`, `-MachineName`, `-ResourceGroupName`, and `-Settings` parameters. Refer to the reference article for the cmdlet to understand the different methods to provide the changes you want to the extension.
+## Upgrade extension
+
+When a new version of a supported VM extension is released, you can upgrade it to that latest release. To upgrade a VM extension, use [Update-AzConnectedExtension](/powershell/module/az.connectedmachine/update-azconnectedextension) with the `-MachineName`, `-ResourceGroupName`, and `-ExtensionTarget` parameters.
+
+For the `-ExtensionTarget` parameter, you need to specify the extension and the latest version available. To find out what the latest version available is, you can get this information from the **Extensions** page for the selected Arc-enabled server in the Azure portal, or by running [Get-AzVMExtensionImage](/powershell/module/az.compute/get-azvmextensionimage). You may specify multiple extensions in a single upgrade request by providing a comma-separated list of extensions, defined by their publisher and type (separated by a period) and the target version for each extension, as shown in the example below.
+
+To upgrade the Log Analytics agent extension for Windows that has a newer version available, run the following command:
+
+```powershell
+Update-AzConnectedExtension -MachineName "myMachineName" -ResourceGroupName "myResourceGroup" -ExtensionTarget '{\"Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent\":{\"targetVersion\":\"1.0.18053.0\"}}'
+```
+
+You can review the version of installed VM extensions at any time by running the command [Get-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/get-azconnectedmachineextension). The `TypeHandlerVersion` property value represents the version of the extension.
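For example, listing installed extensions with their versions might look like this sketch; the machine and resource group names are placeholders, and the `Publisher` property name is assumed from the cmdlet's output shape:

```powershell
Get-AzConnectedMachineExtension -MachineName "myMachineName" -ResourceGroupName "myResourceGroup" |
    Select-Object Name, Publisher, TypeHandlerVersion
```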
+
## Remove extensions

To remove an installed VM extension on your Azure Arc-enabled server, use [Remove-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/remove-azconnectedmachineextension) with the `-Name`, `-MachineName` and `-ResourceGroupName` parameters.
azure-arc Browse And Enable Vcenter Resources In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/browse-and-enable-vcenter-resources-in-azure.md
After you've connected your VMware vCenter to Azure, you can browse your vCenter
Visit the VMware vCenter blade in Azure Arc center to view all the connected vCenters. From there, you can browse your virtual machines (VMs), resource pools, templates, and networks. From the inventory of your vCenter resources, you can select and enable one or more resources in Azure. When you enable a vCenter resource in Azure, an Azure resource is created to represent it. You can use this Azure resource to assign permissions or conduct management operations.
-> [!IMPORTANT]
-> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
- ## Enable resource pools, clusters, hosts, datastores, networks, and VM templates in Azure In this section, you will enable resource pools, networks, and other non-VM resources in Azure.
azure-arc Manage Access To Arc Vmware Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/manage-access-to-arc-vmware-resources.md
Last updated 11/08/2021
Once your VMware vCenter resources have been enabled in Azure, the final step in setting up a self-service experience for your teams is to provide them access. This article describes how to use built-in roles to manage granular access to VMware resources through Azure and allow your teams to deploy and manage VMs.
-> [!IMPORTANT]
-> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
- ## Arc-enabled VMware vSphere built-in roles There are three built-in roles to meet your access control requirements. You can apply these roles to a whole subscription, resource group, or a single resource.
azure-arc Manage Vmware Vms In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md
In this article, you will learn how to perform various operations on the Azure A
To perform guest OS operations on Arc-enabled VMs, you must enable guest management on the VMs. When you enable guest management, the Arc Connected Machine Agent is installed on the VM.
-> [!IMPORTANT]
-> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
- ## Supported extensions and management services ### Windows extensions
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md
Arc-enabled VMware vSphere (preview) allows you to:
- Conduct governance and monitoring operations across Azure and VMware VMs by enabling guest management (installing the [Azure Arc-enabled servers Connected Machine agent](../servers/agent-overview.md)).
-> [!IMPORTANT]
-> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
- ## How does it work? To deliver this experience, you need to deploy the [Azure Arc resource bridge](../resource-bridge/overview.md) (preview), which is a virtual appliance, in your vSphere environment. It connects your vCenter Server to Azure. Azure Arc resource bridge (preview) enables you to represent the VMware resources in Azure and do various operations on them.
azure-arc Quick Start Create A Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-create-a-vm.md
Last updated 09/29/2021
Once your administrator has connected a VMware vCenter to Azure, represented VMware vCenter resources in Azure, and provided you permissions on those resources, you'll create a virtual machine.
-> [!IMPORTANT]
-> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
- ## Prerequisites - An Azure subscription and resource group where you have an Arc VMware VM contributor role.
azure-functions Create First Function Cli Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-csharp.md
Before you begin, you must have the following:
+ One of the following tools for creating Azure resources:
- + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
+ + [Azure CLI](/cli/azure/install-azure-cli) [version 2.4](/cli/azure/release-notes-azure-cli#april-21-2020) or later.
+ The Azure [Az PowerShell module](/powershell/azure/install-az-ps) version 5.9.0 or later.
azure-functions Create First Function Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-python.md
Use the following commands to create these items. Both Azure CLI and PowerShell
# [Azure CLI](#tab/azure-cli)

```azurecli
- az functionapp create --consumption-plan-location westeurope --runtime python --runtime-version 3.8 --functions-version 3 --name <APP_NAME> --os-type linux --storage-account <STORAGE_NAME>
+ az functionapp create --consumption-plan-location westeurope --runtime python --runtime-version 3.9 --functions-version 4 --name <APP_NAME> --os-type linux --storage-account <STORAGE_NAME>
```
- The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. If you are using Python 3.7 or 3.6, change `--runtime-version` to `3.7` or `3.6`, respectively. You must supply `--os-type linux` because Python functions can't run on Windows, which is the default.
+ The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. If you are using Python 3.8, 3.7, or 3.6, change `--runtime-version` to `3.8`, `3.7`, or `3.6`, respectively. You must supply `--os-type linux` because Python functions can't run on Windows, which is the default.
# [Azure PowerShell](#tab/azure-powershell)

```azurepowershell
- New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -FunctionsVersion 3 -RuntimeVersion 3.8 -Runtime python -Location '<REGION>'
+ New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccountName <STORAGE_NAME> -FunctionsVersion 4 -RuntimeVersion 3.9 -Runtime python -Location '<REGION>'
```
- The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. If you're using Python 3.7 or 3.6, change `-RuntimeVersion` to `3.7` or `3.6`, respectively.
+ The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. If you're using Python 3.8, 3.7, or 3.6, change `-RuntimeVersion` to `3.8`, `3.7`, or `3.6`, respectively.
azure-monitor Alerts Enable Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-enable-template.md
Title: Resource Manager template - create metric alert description: Learn how to use a Resource Manager template to create a classic metric alert to receive notifications by email or webhook.--++ Previously updated : 09/06/2021 Last updated : 03/30/2022
-# Create a classic metric alert with a Resource Manager template
+# Create a classic metric alert rule with a Resource Manager template
> [!WARNING]
-> This article describes how to create older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**.
+> This article describes how to create older classic metric alert rules. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**.
>
-This article shows how you can use an [Azure Resource Manager template](../../azure-resource-manager/templates/syntax.md) to configure Azure classic metric alerts. This enables you to automatically set up alerts on your resources when they are created to ensure that all resources are monitored correctly.
+This article shows how you can use an [Azure Resource Manager template](../../azure-resource-manager/templates/syntax.md) to configure Azure classic metric alert rules. This enables you to automatically set up alert rules on your resources when they are created to ensure that all resources are monitored correctly.
The basic steps are as follows:
-1. Create a template as a JSON file that describes how to create the alert.
+1. Create a template as a JSON file that describes how to create the alert rule.
2. [Deploy the template using any deployment method](../../azure-resource-manager/templates/deploy-powershell.md).
-Below we describe how to create a Resource Manager template first for an alert alone, then for an alert during the creation of another resource.
+The following sections describe how to create a Resource Manager template, first for an alert rule alone, and then for an alert rule created during the deployment of another resource.
-## Resource Manager template for a classic metric alert
-To create an alert using a Resource Manager template, you create a resource of type `Microsoft.Insights/alertRules` and fill in all related properties. Below is a template that creates an alert rule.
+## Resource Manager template for a classic metric alert rule
+To create an alert rule using a Resource Manager template, you create a resource of type `Microsoft.Insights/alertRules` and fill in all related properties. Below is a template that creates an alert rule.
```json {
To create an alert using a Resource Manager template, you create a resource of t
An explanation of the schema and properties for an alert rule [is available here](/rest/api/monitor/alertrules).
-## Resource Manager template for a resource with a classic metric alert
-An alert on a Resource Manager template is most often useful when creating an alert while creating a resource. For example, you may want to ensure that a "CPU % > 80" rule is set up every time you deploy a Virtual Machine. To do this, you add the alert rule as a resource in the resource array for your VM template and add a dependency using the `dependsOn` property to the VM resource ID. Here's a full example that creates a Windows VM and adds an alert that notifies subscription admins when the CPU utilization goes above 80%.
+## Resource Manager template for a resource with a classic metric alert rule
+An alert rule in a Resource Manager template is most often useful when you create the alert rule along with another resource. For example, you may want to ensure that a "CPU % > 80" rule is set up every time you deploy a virtual machine. To do this, you add the alert rule as a resource in the resource array for your VM template and add a dependency on the VM resource ID using the `dependsOn` property. Here's a full example that creates a Windows VM and adds an alert rule that notifies subscription admins when the CPU utilization goes above 80%.
```json {
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
It's important to note that the following example doesn't cause the ApplicationI
For more information, see [ILogger configuration](ilogger.md#logging-level).
+### How can I get all custom ILogger error messages?
+
+Disable adaptive sampling. Examples of how to do this are provided in the [Configure the Application Insights SDK](#configure-the-application-insights-sdk) section of this article.
+
### Some Visual Studio templates used the UseApplicationInsights() extension method on IWebHostBuilder to enable Application Insights. Is this usage still valid?

The extension method `UseApplicationInsights()` is still supported, but it's marked as obsolete in Application Insights SDK version 2.8.0 and later. It will be removed in the next major version of the SDK. To enable Application Insights telemetry, we recommend using `AddApplicationInsightsTelemetry()` because it provides overloads to control some configuration. Also, in ASP.NET Core 3.X apps, `services.AddApplicationInsightsTelemetry()` is the only way to enable Application Insights.
azure-netapp-files Azacsnap Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-installation.md
Create RBAC Service Principal
1. Create a service principal using Azure CLI per the following example:

```azurecli-interactive
- az ad sp create-for-rbac --role Contributor --sdk-auth
+ az ad sp create-for-rbac --role Contributor --scopes /subscriptions/{subscription-id} --sdk-auth
```

1. This should generate an output like the following example:
azure-netapp-files Azure Netapp Files Develop With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-develop-with-rest-api.md
The REST API specification for Azure NetApp Files is published through [GitHub](
2. Enter the following command in the Azure CLI:

```azurecli
- az ad sp create-for-rbac --name $YOURSPNAMEGOESHERE --role Contributor
+ az ad sp create-for-rbac --name $YOURSPNAMEGOESHERE --role Contributor --scopes /subscriptions/{subscription-id}
```

The command output is similar to the following example:
azure-netapp-files Develop Rest Api Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/develop-rest-api-powershell.md
The REST API specification for Azure NetApp Files is published through [GitHub](
2. Enter the following command in the Azure CLI:

```azurecli
- $RBAC_SP = az ad sp create-for-rbac --name <YOURSPNAMEGOESHERE> --role Contributor | ConvertFrom-Json
+ $RBAC_SP = az ad sp create-for-rbac --name <YOURSPNAMEGOESHERE> --role Contributor --scopes /subscriptions/<subscription-id> | ConvertFrom-Json
```

To display the service principal information, type `$RBAC_SP` and press Enter.
azure-netapp-files Faq Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-smb.md
Previously updated : 01/21/2022 Last updated : 03/30/2022 # SMB FAQs for Azure NetApp Files
Azure NetApp Files supports modifying `SMB Shares` by using Microsoft Management
You can change the NTFS permissions of the root volume by using [NTFS file and folder permissions](azure-netapp-files-create-volumes-smb.md#ntfs-file-and-folder-permissions) procedure.
+## Can I change the SMB share name after the SMB volume has been created?
+
+No. However, you can create a new SMB volume with the new share name from a snapshot of the SMB volume with the old share name.
+
## Next steps

- [FAQs about SMB performance for Azure NetApp Files](azure-netapp-files-smb-performance.md)
azure-resource-manager Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/modules.md
The full path for a module in a registry can be long. Instead of providing the f
### File in template spec
-After creating a [template spec](../templates/template-specs.md), you can link to that template spec in a module. Specify the template spec in the following format:
+After creating a [template spec](../bicep/template-specs.md), you can link to that template spec in a module. Specify the template spec in the following format:
```bicep module <symbolic-name> 'ts:<sub-id>/<rg-name>/<template-spec-name>:<version>' = {
azure-resource-manager Quickstart Create Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-create-template-specs.md
+
+ Title: Create and deploy a template spec with Bicep
+description: Learn how to use Bicep to create and deploy a template spec to a resource group in your Azure subscription. Then, use a template spec to deploy Azure resources.
Last updated : 03/30/2022+++
+# Customer intent: As a developer I want to use Bicep to create and share deployment templates so that other people in my organization can deploy Microsoft Azure resources.
++
+# Quickstart: Create and deploy a template spec with Bicep
+
+This quickstart describes how to create and deploy a [template spec](template-specs.md) with a Bicep file. A template spec is deployed to a resource group so that people in your organization can deploy resources in Microsoft Azure. Template specs let you share deployment templates without needing to give users access to change the Bicep file. This template spec example uses a Bicep file to deploy a storage account.
+
+When you create a template spec, the Bicep file is transpiled into JavaScript Object Notation (JSON). The template spec uses JSON to deploy Azure resources. Currently, you can't use the Microsoft Azure portal to import a Bicep file and create a template spec resource.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure PowerShell [version 6.3.0 or later](/powershell/azure/install-az-ps) or Azure CLI [version 2.27.0 or later](/cli/azure/install-azure-cli).
+- [Visual Studio Code](https://code.visualstudio.com/) with the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep).
+
+## Create Bicep file
+
+You create a template spec from a local Bicep file. Copy the following sample and save it to your computer as _main.bicep_. The examples use the path _C:\templates\main.bicep_. You can use a different path, but you'll need to change the commands.
+
+The following Bicep file is used in the **PowerShell** and **CLI** tabs. The **Bicep file** tab uses a different template that combines Bicep and JSON to create and deploy a template spec.
+
+```bicep
+@allowed([
+ 'Premium_LRS'
+ 'Premium_ZRS'
+ 'Standard_GRS'
+ 'Standard_GZRS'
+ 'Standard_LRS'
+ 'Standard_RAGRS'
+ 'Standard_RAGZRS'
+ 'Standard_ZRS'
+])
+@description('Storage account type.')
+param storageAccountType string = 'Standard_LRS'
+
+@description('Location for all resources.')
+param location string = resourceGroup().location
+
+var storageAccountName = 'storage${uniqueString(resourceGroup().id)}'
+
+resource storageAccount 'Microsoft.Storage/storageAccounts@2021-08-01' = {
+ name: storageAccountName
+ location: location
+ sku: {
+ name: storageAccountType
+ }
+ kind: 'StorageV2'
+ properties: {}
+}
+
+output storageAccountNameOutput string = storageAccount.name
+```
+
+## Create template spec
+
+The template spec is a resource type named [Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs). To create a template spec, use Azure CLI, Azure PowerShell, or a Bicep file.
+
+This example uses the resource group name `templateSpecRG`. You can use a different name, but you'll need to change the commands.
+
+# [PowerShell](#tab/azure-powershell)
+
+1. Create a new resource group to contain the template spec.
+
+ ```azurepowershell
+ New-AzResourceGroup `
+ -Name templateSpecRG `
+ -Location westus2
+ ```
+
+1. Create the template spec in that resource group. Give the new template spec the name _storageSpec_.
+
+ ```azurepowershell
+ New-AzTemplateSpec `
+ -Name storageSpec `
+ -Version "1.0" `
+ -ResourceGroupName templateSpecRG `
+ -Location westus2 `
+ -TemplateFile "C:\templates\main.bicep"
+ ```
+
+# [CLI](#tab/azure-cli)
+
+1. Create a new resource group to contain the template spec.
+
+ ```azurecli
+ az group create \
+ --name templateSpecRG \
+ --location westus2
+ ```
+
+1. Create the template spec in that resource group. Give the new template spec the name _storageSpec_.
+
+ ```azurecli
+ az ts create \
+ --name storageSpec \
+ --version "1.0" \
+ --resource-group templateSpecRG \
+ --location westus2 \
+ --template-file "C:\templates\main.bicep"
+ ```
+
+# [Bicep file](#tab/bicep)
+
+You can create a template spec with a Bicep file, but the `mainTemplate` must be in JSON. The embedded JSON template doesn't use standard JSON syntax. For example, there are no end-of-line commas, double quotes are replaced with single quotes, and backslashes (`\`) are used to escape single quotes within expressions.
+
+1. Copy the following template and save it to your computer as _main.bicep_.
+
+ ```bicep
+ param templateSpecName string = 'storageSpec'
+
+ param templateSpecVersionName string = '1.0'
+
+ @description('Location for all resources.')
+ param location string = resourceGroup().location
+
+ resource createTemplateSpec 'Microsoft.Resources/templateSpecs@2021-05-01' = {
+ name: templateSpecName
+ location: location
+ }
+
+ resource createTemplateSpecVersion 'Microsoft.Resources/templateSpecs/versions@2021-05-01' = {
+ parent: createTemplateSpec
+ name: templateSpecVersionName
+ location: location
+ properties: {
+ mainTemplate: {
+ '$schema': 'https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#'
+ 'contentVersion': '1.0.0.0'
+ 'metadata': {}
+ 'parameters': {
+ 'storageAccountType': {
+ 'type': 'string'
+ 'defaultValue': 'Standard_LRS'
+ 'metadata': {
+ 'description': 'Storage account type.'
+ }
+ 'allowedValues': [
+ 'Premium_LRS'
+ 'Premium_ZRS'
+ 'Standard_GRS'
+ 'Standard_GZRS'
+ 'Standard_LRS'
+ 'Standard_RAGRS'
+ 'Standard_RAGZRS'
+ 'Standard_ZRS'
+ ]
+ }
+ 'location': {
+ 'type': 'string'
+ 'defaultValue': '[resourceGroup().location]'
+ 'metadata': {
+ 'description': 'Location for all resources.'
+ }
+ }
+ }
+ 'variables': {
+ 'storageAccountName': '[format(\'{0}{1}\', \'storage\', uniqueString(resourceGroup().id))]'
+ }
+ 'resources': [
+ {
+ 'type': 'Microsoft.Storage/storageAccounts'
+ 'apiVersion': '2021-08-01'
+ 'name': '[variables(\'storageAccountName\')]'
+ 'location': '[parameters(\'location\')]'
+ 'sku': {
+ 'name': '[parameters(\'storageAccountType\')]'
+ }
+ 'kind': 'StorageV2'
+ 'properties': {}
+ }
+ ]
+ 'outputs': {
+ 'storageAccountNameOutput': {
+ 'type': 'string'
+ 'value': '[variables(\'storageAccountName\')]'
+ }
+ }
+ }
+ }
+ }
+ ```
+
+1. Use Azure PowerShell or Azure CLI to create a new resource group.
+
+ ```azurepowershell
+ New-AzResourceGroup `
+ -Name templateSpecRG `
+ -Location westus2
+ ```
+
+ ```azurecli
+ az group create \
+ --name templateSpecRG \
+ --location westus2
+ ```
+
+1. Create the template spec in that resource group. The template spec name _storageSpec_ and version number `1.0` are parameters in the Bicep file.
+
+ ```azurepowershell
+ New-AzResourceGroupDeployment `
+ -ResourceGroupName templateSpecRG `
+ -TemplateFile "C:\templates\main.bicep"
+ ```
+
+ ```azurecli
+ az deployment group create \
+ --resource-group templateSpecRG \
+ --template-file "C:\templates\main.bicep"
+ ```
+++
+## Deploy template spec
+
+Use the template spec to deploy a storage account. This example uses the resource group name `storageRG`. You can use a different name, but you'll need to change the commands.
+
+# [PowerShell](#tab/azure-powershell)
+
+1. Create a resource group to contain the new storage account.
+
+ ```azurepowershell
+ New-AzResourceGroup `
+ -Name storageRG `
+ -Location westus2
+ ```
+
+1. Get the resource ID of the template spec.
+
+ ```azurepowershell
+ $id = (Get-AzTemplateSpec -ResourceGroupName templateSpecRG -Name storageSpec -Version "1.0").Versions.Id
+ ```
+
+1. Deploy the template spec.
+
+ ```azurepowershell
+ New-AzResourceGroupDeployment `
+ -TemplateSpecId $id `
+ -ResourceGroupName storageRG
+ ```
+
+1. You provide parameters exactly as you would for a Bicep file deployment. Redeploy the template spec with a parameter for the storage account type.
+
+ ```azurepowershell
+ New-AzResourceGroupDeployment `
+ -TemplateSpecId $id `
+ -ResourceGroupName storageRG `
+ -storageAccountType Standard_GRS
+ ```
+
+# [CLI](#tab/azure-cli)
+
+1. Create a resource group to contain the new storage account.
+
+ ```azurecli
+ az group create \
+ --name storageRG \
+ --location westus2
+ ```
+
+1. Get the resource ID of the template spec.
+
+ ```azurecli
+ id=$(az ts show --name storageSpec --resource-group templateSpecRG --version "1.0" --query "id")
+ ```
+
+ > [!NOTE]
+ > There is a known issue with getting a template spec ID and assigning it to a variable in Windows PowerShell.
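+
+   If you hit that issue, one common workaround (an assumption here, not an official fix) is to request tab-separated output so the ID isn't returned as a quoted JSON string:
+
+   ```azurecli
+   # --output tsv strips the surrounding quotes from the returned ID.
+   id=$(az ts show --name storageSpec --resource-group templateSpecRG --version "1.0" --query "id" --output tsv)
+   ```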
+
+1. Deploy the template spec.
+
+ ```azurecli
+ az deployment group create \
+ --resource-group storageRG \
+ --template-spec $id
+ ```
+
+1. You provide parameters exactly as you would for a Bicep file deployment. Redeploy the template spec with a parameter for the storage account type.
+
+ ```azurecli
+ az deployment group create \
+ --resource-group storageRG \
+ --template-spec $id \
+ --parameters storageAccountType="Standard_GRS"
+ ```
+
+# [Bicep file](#tab/bicep)
+
+To deploy a template spec using a Bicep file, use a module. The module links to an existing template spec. For more information, see [file in template spec](modules.md#file-in-template-spec).
+
+1. Copy the following Bicep module and save it to your computer as _storage.bicep_.
+
+ ```bicep
+ module deployTemplateSpec 'ts:<subscriptionId>/templateSpecRG/storageSpec:1.0' = {
+ name: 'deployVersion1'
+ }
+ ```
+
+1. Replace `<subscriptionId>` in the module. Use Azure PowerShell or Azure CLI to get your subscription ID.
+
+ ```azurepowershell
+ (Get-AzContext).Subscription.Id
+ ```
+
+ ```azurecli
+ az account show --query "id" --output tsv
+ ```
+
+1. Use Azure PowerShell or Azure CLI to create a new resource group for the storage account.
+
+ ```azurepowershell
+ New-AzResourceGroup `
+ -Name storageRG `
+ -Location westus2
+ ```
+
+ ```azurecli
+ az group create \
+ --name storageRG \
+ --location westus2
+ ```
+
+1. Deploy the template spec with Azure PowerShell or Azure CLI.
+
+ ```azurepowershell
+ New-AzResourceGroupDeployment `
+ -ResourceGroupName storageRG `
+ -TemplateFile "C:\templates\storage.bicep"
+ ```
+
+ ```azurecli
+ az deployment group create \
+ --resource-group storageRG \
+ --template-file "C:\templates\storage.bicep"
+ ```
+
+1. You can add a parameter and redeploy the template spec with a different storage account type. Copy the sample and replace your _storage.bicep_ file. Then, redeploy the template spec.
+
+ ```bicep
+ module deployTemplateSpec 'ts:<subscriptionId>/templateSpecRG/storageSpec:1.0' = {
+ name: 'deployVersion1'
+ params: {
+ storageAccountType: 'Standard_GRS'
+ }
+ }
+ ```
+++
+## Grant access
+
+If you want to let other users in your organization deploy your template spec, you need to grant them read access. You can assign the Reader role to an Azure AD group for the resource group that contains template specs you want to share. For more information, see [Tutorial: Grant a group access to Azure resources using Azure PowerShell](../../role-based-access-control/tutorial-role-assignments-group-powershell.md).
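+
+As a sketch, assuming a hypothetical Azure AD group object ID, the role assignment might look like the following:
+
+```azurepowershell
+# Grant the group read access to the resource group that holds the template specs.
+# Replace <groupObjectId> with your Azure AD group's object ID.
+New-AzRoleAssignment `
+  -ObjectId "<groupObjectId>" `
+  -RoleDefinitionName "Reader" `
+  -ResourceGroupName "templateSpecRG"
+```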
+
+## Update Bicep file
+
+After the template spec is created, you might decide to update the Bicep file. To continue with the examples in the **PowerShell** or **CLI** tabs, copy the sample and replace your _main.bicep_ file.
+
+The parameter `storageNamePrefix` specifies a prefix value for the storage account name. The `storageAccountName` variable concatenates the prefix with a unique string.
+
+```bicep
+@allowed([
+ 'Premium_LRS'
+ 'Premium_ZRS'
+ 'Standard_GRS'
+ 'Standard_GZRS'
+ 'Standard_LRS'
+ 'Standard_RAGRS'
+ 'Standard_RAGZRS'
+ 'Standard_ZRS'
+])
+@description('Storage account type.')
+param storageAccountType string = 'Standard_LRS'
+
+@description('Location for all resources.')
+param location string = resourceGroup().location
+
+@maxLength(11)
+@description('The storage account name prefix.')
+param storageNamePrefix string = 'storage'
+
+var storageAccountName = '${toLower(storageNamePrefix)}${uniqueString(resourceGroup().id)}'
+
+resource storageAccount 'Microsoft.Storage/storageAccounts@2021-08-01' = {
+ name: storageAccountName
+ location: location
+ sku: {
+ name: storageAccountType
+ }
+ kind: 'StorageV2'
+ properties: {}
+}
+
+output storageAccountNameOutput string = storageAccount.name
+```
+
+## Update template spec version
+
+Rather than create a new template spec for the revised template, add a new version named `2.0` to the existing template spec. Users can choose to deploy either version.
+
+# [PowerShell](#tab/azure-powershell)
+
+1. Create a new version of the template spec.
+
+ ```azurepowershell
+ New-AzTemplateSpec `
+ -Name storageSpec `
+ -Version "2.0" `
+ -ResourceGroupName templateSpecRG `
+ -Location westus2 `
+ -TemplateFile "C:\templates\main.bicep"
+ ```
+
+1. To deploy the new version, get the resource ID for the `2.0` version.
+
+ ```azurepowershell
+ $id = (Get-AzTemplateSpec -ResourceGroupName templateSpecRG -Name storageSpec -Version "2.0").Versions.Id
+ ```
+
+1. Deploy the new version and use the `storageNamePrefix` to specify a prefix for the storage account name.
+
+ ```azurepowershell
+ New-AzResourceGroupDeployment `
+ -TemplateSpecId $id `
+ -ResourceGroupName storageRG `
+ -storageNamePrefix "demo"
+ ```
+
+# [CLI](#tab/azure-cli)
+
+1. Create a new version of the template spec.
+
+ ```azurecli
+ az ts create \
+ --name storageSpec \
+ --version "2.0" \
+ --resource-group templateSpecRG \
+ --location westus2 \
+ --template-file "C:\templates\main.bicep"
+ ```
+
+1. To deploy the new version, get the resource ID for the `2.0` version.
+
+ ```azurecli
+ id=$(az ts show --name storageSpec --resource-group templateSpecRG --version "2.0" --query "id")
+ ```
+
+1. Deploy the new version and use the `storageNamePrefix` to specify a prefix for the storage account name.
+
+ ```azurecli
+ az deployment group create \
+ --resource-group storageRG \
+ --template-spec $id \
+ --parameters storageNamePrefix="demo"
+ ```
+
+# [Bicep file](#tab/bicep)
+
+1. Create a new version of the template spec. Copy the sample and replace your _main.bicep_ file.
+
+ The parameter `storageNamePrefix` specifies a prefix value for the storage account name. The `storageAccountName` variable concatenates the prefix with a unique string.
+
+ ```bicep
+ param templateSpecName string = 'storageSpec'
+
+ param templateSpecVersionName string = '2.0'
+
+ @description('Location for all resources.')
+ param location string = resourceGroup().location
+
+ resource createTemplateSpec 'Microsoft.Resources/templateSpecs@2021-05-01' = {
+ name: templateSpecName
+ location: location
+ }
+
+ resource createTemplateSpecVersion 'Microsoft.Resources/templateSpecs/versions@2021-05-01' = {
+ parent: createTemplateSpec
+ name: templateSpecVersionName
+ location: location
+ properties: {
+ mainTemplate: {
+ '$schema': 'https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#'
+ 'contentVersion': '1.0.0.0'
+ 'metadata': {}
+ 'parameters': {
+ 'storageAccountType': {
+ 'type': 'string'
+ 'defaultValue': 'Standard_LRS'
+ 'metadata': {
+ 'description': 'Storage account type.'
+ }
+ 'allowedValues': [
+ 'Premium_LRS'
+ 'Premium_ZRS'
+ 'Standard_GRS'
+ 'Standard_GZRS'
+ 'Standard_LRS'
+ 'Standard_RAGRS'
+ 'Standard_RAGZRS'
+ 'Standard_ZRS'
+ ]
+ }
+ 'location': {
+ 'type': 'string'
+ 'defaultValue': '[resourceGroup().location]'
+ 'metadata': {
+ 'description': 'Location for all resources.'
+ }
+ }
+ 'storageNamePrefix': {
+ 'type': 'string'
+ 'defaultValue': 'storage'
+ 'metadata': {
+ 'description': 'The storage account name prefix.'
+ }
+ 'maxLength': 11
+ }
+ }
+ 'variables': {
+ 'storageAccountName': '[format(\'{0}{1}\', toLower(parameters(\'storageNamePrefix\')), uniqueString(resourceGroup().id))]'
+ }
+ 'resources': [
+ {
+ 'type': 'Microsoft.Storage/storageAccounts'
+ 'apiVersion': '2021-08-01'
+ 'name': '[variables(\'storageAccountName\')]'
+ 'location': '[parameters(\'location\')]'
+ 'sku': {
+ 'name': '[parameters(\'storageAccountType\')]'
+ }
+ 'kind': 'StorageV2'
+ 'properties': {}
+ }
+ ]
+ 'outputs': {
+ 'storageAccountNameOutput': {
+ 'type': 'string'
+ 'value': '[variables(\'storageAccountName\')]'
+ }
+ }
+ }
+ }
+ }
+ ```
+
+1. To add the new version to your template spec, deploy your template with Azure PowerShell or Azure CLI.
+
+ ```azurepowershell
+ New-AzResourceGroupDeployment `
+ -ResourceGroupName templateSpecRG `
+ -TemplateFile "C:\templates\main.bicep"
+ ```
+
+ ```azurecli
+ az deployment group create \
+ --resource-group templateSpecRG \
+ --template-file "C:\templates\main.bicep"
+ ```
+
+1. Copy the following Bicep module and save it to your computer as _storage.bicep_.
+
+ ```bicep
+ module deployTemplateSpec 'ts:<subscriptionId>/templateSpecRG/storageSpec:2.0' = {
+ name: 'deployVersion2'
+ params: {
+ storageNamePrefix: 'demo'
+ }
+ }
+ ```
+
+1. Replace `<subscriptionId>` in the module. Use Azure PowerShell or Azure CLI to get your subscription ID.
+
+ ```azurepowershell
+ (Get-AzContext).Subscription.Id
+ ```
+
+ ```azurecli
+ az account show --query "id" --output tsv
+ ```
+
+1. Deploy the template spec with Azure PowerShell or Azure CLI.
+
+ ```azurepowershell
+ New-AzResourceGroupDeployment `
+ -ResourceGroupName storageRG `
+ -TemplateFile "C:\templates\storage.bicep"
+ ```
+
+ ```azurecli
+ az deployment group create \
+ --resource-group storageRG \
+ --template-file "C:\templates\storage.bicep"
+ ```
+
+## Clean up resources
+
+To clean up the resources you deployed in this quickstart, delete both resource groups. Deleting the resource groups also deletes the template specs and storage accounts they contain.
+
+Use Azure PowerShell or Azure CLI to delete the resource groups.
+
+```azurepowershell
+Remove-AzResourceGroup -Name "templateSpecRG"
+
+Remove-AzResourceGroup -Name "storageRG"
+```
+
+```azurecli
+az group delete --name templateSpecRG
+
+az group delete --name storageRG
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Azure Resource Manager template specs in Bicep](template-specs.md)
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/template-specs.md
Last updated 02/01/2022
A template spec is a resource type for storing an Azure Resource Manager template (ARM template) for later deployment. This resource type enables you to share ARM templates with other users in your organization. Just like any other Azure resource, you can use Azure role-based access control (Azure RBAC) to share the template spec. You can use Azure CLI or Azure PowerShell to create template specs by providing Bicep files. The Bicep files are transpiled into ARM JSON templates before they are stored. Currently, you can't import a Bicep file from the Azure portal to create a template spec resource.
-[**Microsoft.Resources/templateSpecs**](/azure/templates/microsoft.resources/templatespecs) is the resource type for template specs. It consists of a main template and any number of linked templates. Azure securely stores template specs in resource groups. Both the main template and the linked templates must be in JSON. Template Specs support [versioning](#versioning).
+[Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs) is the resource type for template specs. It consists of a main template and any number of linked templates. Azure securely stores template specs in resource groups. Both the main template and the linked templates must be in JSON. Template Specs support [versioning](#versioning).
To deploy the template spec, you use standard Azure tools like PowerShell, Azure CLI, Azure portal, REST, and other supported SDKs and clients. You use the same commands as you would for the template or the Bicep file. > [!NOTE] > To use template specs in Bicep with Azure PowerShell, you must install [version 6.3.0 or later](/powershell/azure/install-az-ps). To use it with Azure CLI, use [version 2.27.0 or later](/cli/azure/install-azure-cli).
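For instance, once you have the template spec version's resource ID, you can deploy it directly with Azure CLI. This is a minimal sketch; the resource group and spec names (`templateSpecRG`, `storageSpec`, `storageRG`) are illustrative placeholders, not values from your environment:

```azurecli
# Look up the resource ID of a specific template spec version.
id=$(az ts show \
  --resource-group templateSpecRG \
  --name storageSpec \
  --version "2.0" \
  --query "id" \
  --output tsv)

# Deploy that version into a target resource group.
az deployment group create \
  --resource-group storageRG \
  --template-spec $id
```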
-When designing your deployment, always consider the lifecycle of the resources and group the resources that share similar lifecycle into a single template spec. For instance, your deployments include multiple instances of Cosmos DB with each instance containing its own databases and containers. Given the databases and the containers donΓÇÖt change much, you want to create one template spec to include a Cosmo DB instance and its underlying databases and containers. You can then use conditional statements in your Bicep along with copy loops to create multiple instances of these resources.
+When designing your deployment, always consider the lifecycle of the resources and group the resources that share a similar lifecycle into a single template spec. For instance, suppose your deployments include multiple instances of Cosmos DB, with each instance containing its own databases and containers. Because the databases and the containers don't change much, you want to create one template spec to include a Cosmos DB instance and its underlying databases and containers. You can then use conditional statements in your Bicep files along with copy loops to create multiple instances of these resources.
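As a sketch of that pattern, a consuming Bicep file might deploy several instances of such a template spec with a loop and a condition. The spec name (`cosmosSpec`), the `<subscriptionId>` placeholder, and the `accountName` parameter are assumptions for illustration only:

```bicep
param instanceNames array = [
  'accountA'
  'accountB'
]
param deployInstances bool = true

// Deploy one template spec instance per name, only when enabled.
module cosmosInstances 'ts:<subscriptionId>/templateSpecRG/cosmosSpec:1.0' = [for name in instanceNames: if (deployInstances) {
  name: 'deploy-${name}'
  params: {
    accountName: name
  }
}]
```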
> [!TIP] > The choice between module registry and template specs is mostly a matter of preference. There are a few things to consider when you choose between the two:
azure-resource-manager Quickstart Create Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/quickstart-create-template-specs.md
Last updated 05/04/2021
ms.devlang: azurecli
This quickstart shows you how to package an Azure Resource Manager template (ARM template) into a [template spec](template-specs.md). Then, you deploy that template spec. Your template spec contains an ARM template that deploys a storage account.
+> [!TIP]
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [Quickstart: Create and deploy a template spec with Bicep](../bicep/quickstart-create-template-specs.md).
+ ## Prerequisites An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs.md
A template spec is a resource type for storing an Azure Resource Manager template (ARM template) in Azure for later deployment. This resource type enables you to share ARM templates with other users in your organization. Just like any other Azure resource, you can use Azure role-based access control (Azure RBAC) to share the template spec.
-**Microsoft.Resources/templateSpecs** is the resource type for template specs. It consists of a main template and any number of linked templates. Azure securely stores template specs in resource groups. Template Specs support [versioning](#versioning).
+[Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs) is the resource type for template specs. It consists of a main template and any number of linked templates. Azure securely stores template specs in resource groups. Template Specs support [versioning](#versioning).
To deploy the template spec, you use standard Azure tools like PowerShell, Azure CLI, Azure portal, REST, and other supported SDKs and clients. You use the same commands as you would for the template. > [!NOTE] > To use template spec with Azure PowerShell, you must install [version 5.0.0 or later](/powershell/azure/install-az-ps). To use it with Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
-When designing your deployment, always consider the lifecycle of the resources and group the resources that share similar lifecycle into a single template spec. For instance, your deployments include multiple instances of Cosmos DB with each instance containing its own databases and containers. Given the databases and the containers donΓÇÖt change much, you want to create one template spec to include a Cosmo DB instance and its underlying databases and containers. You can then use conditional statements in your templates along with copy loops to create multiple instances of these resources.
+When designing your deployment, always consider the lifecycle of the resources and group the resources that share a similar lifecycle into a single template spec. For instance, suppose your deployments include multiple instances of Cosmos DB, with each instance containing its own databases and containers. Because the databases and the containers don't change much, you want to create one template spec to include a Cosmos DB instance and its underlying databases and containers. You can then use conditional statements in your templates along with copy loops to create multiple instances of these resources.
### Microsoft Learn To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs) on **Microsoft Learn**.
+> [!TIP]
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [Azure Resource Manager template specs in Bicep](../bicep/template-specs.md).
+ ## Why use template specs? Template specs provide the following benefits:
azure-sql Elastic Pool Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/elastic-pool-resource-management.md
Previously updated : 1/24/2022 Last updated : 3/30/2022 # Resource management in dense elastic pools
To send an alert when pool resource utilization (CPU, data IO, log IO, workers,
|`max_worker_percent`|[Worker thread](/sql/relational-databases/thread-and-task-architecture-guide) utilization. Provided for each database in the pool, as well as for the pool itself. There are different limits on the number of worker threads at the database level, and at the pool level, therefore monitoring this metric at both levels is recommended. Available in the [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) view in every database, and in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `workers_percent`, and can be viewed in Azure portal.|Below 80%. Spikes up to 100% will cause connection attempts and queries to fail.| |`avg_data_io_percent`|IOPS utilization for read and write physical IO. Provided for each database in the pool, as well as for the pool itself. There are different limits on the number of IOPS at the database level, and at the pool level, therefore monitoring this metric at both levels is recommended. Available in the [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) view in every database, and in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `physical_data_read_percent`, and can be viewed in Azure portal.|Below 80%. Occasional short spikes up to 100% may be acceptable.| |`avg_log_write_percent`|Throughput utilizations for transaction log write IO. 
Provided for each database in the pool, as well as for the pool itself. There are different limits on the log throughput at the database level, and at the pool level, therefore monitoring this metric at both levels is recommended. Available in the [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) view in every database, and in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `log_write_percent`, and can be viewed in Azure portal. When this metric is close to 100%, all database modifications (INSERT, UPDATE, DELETE, MERGE statements, SELECT … INTO, BULK INSERT, etc.) will be slower.|Below 90%. Occasional short spikes up to 100% may be acceptable.|
-|`oom_per_second`|The rate of out-of-memory (OOM) errors in an elastic pool, which is an indicator of memory pressure. Available in the [sys.dm_resource_governor_resource_pools_history_ex](/sql/relational-databases/system-dynamic-management-views/sys-dm-resource-governor-resource-pools-history-ex-azure-sql-database) view. See [Examples](#examples) for a sample query to calculate this metric. For more information, see resource limits for [elastic pools using DTUs](resource-limits-dtu-elastic-pools.md) or [elastic pools using vCores](resource-limits-vcore-elastic-pools.md), and [Troubleshoot out of memory errors with Azure SQL Database](troubleshoot-memory-errors-issues.md).|0|
+|`oom_per_second`|The rate of out-of-memory (OOM) errors in an elastic pool, which is an indicator of memory pressure. Available in the [sys.dm_resource_governor_resource_pools_history_ex](/sql/relational-databases/system-dynamic-management-views/sys-dm-resource-governor-resource-pools-history-ex-azure-sql-database) view. See [Examples](#examples) for a sample query to calculate this metric. For more information, see resource limits for [elastic pools using DTUs](resource-limits-dtu-elastic-pools.md) or [elastic pools using vCores](resource-limits-vcore-elastic-pools.md), and [Troubleshoot out of memory errors with Azure SQL Database](troubleshoot-memory-errors-issues.md). If you encounter out of memory errors, review [sys.dm_os_out_of_memory_events](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-out-of-memory-events).|0|
|`avg_storage_percent`|Total storage space used by data in all databases within an elastic pool. Does not include empty space in database files. Available in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `storage_percent`, and can be viewed in Azure portal.|Below 80%. Can approach 100% for pools with no data growth.| |`avg_allocated_storage_percent`|Total storage space used by database files in storage in all databases within an elastic pool. Includes empty space in database files. Available in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `allocated_data_storage_percent`, and can be viewed in Azure portal.|Below 90%. Can approach 100% for pools with no data growth.| |`tempdb_log_used_percent`|Transaction log space utilization in the `tempdb` database. Even though temporary objects created in one database are not visible in other databases in the same elastic pool, `tempdb` is a shared resource for all databases in the same pool. A long running or orphaned transaction in `tempdb` started from one database in the pool can consume a large portion of transaction log, and cause failures for queries in other databases in the same pool. Derived from [sys.dm_db_log_space_usage](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-log-space-usage-transact-sql) and [sys.database_files](/sql/relational-databases/system-catalog-views/sys-database-files-transact-sql) views. 
This metric is also emitted to Azure Monitor, and can be viewed in Azure portal. See [Examples](#examples) for a sample query to return the current value of this metric.|Below 50%. Occasional spikes up to 80% are acceptable.|
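To illustrate how `oom_per_second` can be derived, a query of roughly this shape divides the per-snapshot OOM delta by the snapshot duration. This is a sketch only; verify the column names (`snapshot_time`, `duration_ms`, `delta_out_of_memory_count`) against the `sys.dm_resource_governor_resource_pools_history_ex` documentation before relying on it:

```sql
-- Approximate out-of-memory error rate per resource pool snapshot.
SELECT s.pool_id,
       s.name AS resource_pool_name,
       s.snapshot_time,
       s.duration_ms,
       CAST(s.delta_out_of_memory_count AS decimal(10,2)) / (s.duration_ms / 1000.) AS oom_per_second
FROM sys.dm_resource_governor_resource_pools_history_ex AS s
ORDER BY s.snapshot_time DESC;
```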
azure-sql Monitoring With Dmvs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/monitoring-with-dmvs.md
FROM cte
ORDER BY SerialDesiredMemory DESC; ```
+If you encounter out of memory errors in Azure SQL Database, review [sys.dm_os_out_of_memory_events](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-out-of-memory-events).
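As a starting point, a simple query over that view lists recent events. This is a minimal sketch; ordering by `event_time` assumes that column as described in the view's documentation:

```sql
-- List out-of-memory events recorded for this database, most recent first.
SELECT *
FROM sys.dm_os_out_of_memory_events
ORDER BY event_time DESC;
```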
+ ### Identify the top 10 active memory grants Use the following query to identify the top 10 active memory grants:
SPID;
``` > [!NOTE]
-> When executing the **sys.dm_exec_requests** and **sys.dm_exec_sessions views**, if you have **VIEW DATABASE STATE** permission on the database, you see all executing sessions on the database; otherwise, you see only the current session.
+> When querying the `sys.dm_exec_requests` and `sys.dm_exec_sessions` views, if you have **VIEW DATABASE STATE** permission on the database, you see all executing sessions on the database; otherwise, you see only the current session.
## Monitor resource use
You can also monitor usage using these views:
### sys.dm_db_resource_stats
-You can use the [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) view in every database. The **sys.dm_db_resource_stats** view shows recent resource use data relative to the service tier. Average percentages for CPU, data IO, log writes, and memory are recorded every 15 seconds and are maintained for 1 hour.
+You can use the [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) view in every database. The `sys.dm_db_resource_stats` view shows recent resource use data relative to the service tier. Average percentages for CPU, data IO, log writes, and memory are recorded every 15 seconds and are maintained for 1 hour.
-Because this view provides a more granular look at resource use, use **sys.dm_db_resource_stats** first for any current-state analysis or troubleshooting. For example, this query shows the average and maximum resource use for the current database over the past hour:
+Because this view provides a more granular look at resource use, use `sys.dm_db_resource_stats` first for any current-state analysis or troubleshooting. For example, this query shows the average and maximum resource use for the current database over the past hour:
```sql SELECT
HAVING AVG(avg_cpu_percent) >= 80;
### sys.resource_stats
-The [sys.resource_stats](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database) view in the **master** database has additional information that can help you monitor the performance of your database at its specific service tier and compute size. The data is collected every 5 minutes and is maintained for approximately 14 days. This view is useful for a longer-term historical analysis of how your database uses resources.
+The [sys.resource_stats](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database) view in the `master` database has additional information that can help you monitor the performance of your database at its specific service tier and compute size. The data is collected every 5 minutes and is maintained for approximately 14 days. This view is useful for a longer-term historical analysis of how your database uses resources.
The following graph shows the CPU resource use for a Premium database with the P2 compute size for each hour in a week. This graph starts on a Monday, shows five work days, and then shows a weekend, when much less happens on the application.
From the data, this database currently has a peak CPU load of just over 50 perce
Other application types might interpret the same graph differently. For example, if an application tries to process payroll data each day and has the same chart, this kind of "batch job" model might do fine at a P1 compute size. The P1 compute size has 100 DTUs compared to 200 DTUs at the P2 compute size. The P1 compute size provides half the performance of the P2 compute size. So, 50 percent of CPU use in P2 equals 100 percent CPU use in P1. If the application does not have timeouts, it might not matter if a job takes 2 hours or 2.5 hours to finish, if it gets done today. An application in this category probably can use a P1 compute size. You can take advantage of the fact that there are periods of time during the day when resource use is lower, so that any "big peak" might spill over into one of the troughs later in the day. The P1 compute size might be good for that kind of application (and save money), as long as the jobs can finish on time each day.
-The database engine exposes consumed resource information for each active database in the **sys.resource_stats** view of the **master** database in each server. The data in the table is aggregated for 5-minute intervals. With the Basic, Standard, and Premium service tiers, the data can take more than 5 minutes to appear in the table, so this data is more useful for historical analysis rather than near-real-time analysis. Query the **sys.resource_stats** view to see the recent history of a database and to validate whether the reservation you chose delivered the performance you want when needed.
+The database engine exposes consumed resource information for each active database in the `sys.resource_stats` view of the `master` database in each server. The data in the table is aggregated for 5-minute intervals. With the Basic, Standard, and Premium service tiers, the data can take more than 5 minutes to appear in the table, so this data is more useful for historical analysis rather than near-real-time analysis. Query the `sys.resource_stats` view to see the recent history of a database and to validate whether the reservation you chose delivered the performance you want when needed.
> [!NOTE]
-> On Azure SQL Database, you must be connected to the **master** database to query **sys.resource_stats** in the following examples.
+> On Azure SQL Database, you must be connected to the `master` database to query `sys.resource_stats` in the following examples.
This example shows you how the data in this view is exposed:
ORDER BY start_time DESC;
![The sys.resource_stats catalog view](./media/monitoring-with-dmvs/sys_resource_stats.png)
-The next example shows you different ways that you can use the **sys.resource_stats** catalog view to get information about how your database uses resources:
+The next example shows you different ways that you can use the `sys.resource_stats` catalog view to get information about how your database uses resources:
1. To look at the past week's resource use for the database userdb1, you can run this query:
The next example shows you different ways that you can use the **sys.resource_st
ORDER BY start_time DESC; ```
-2. To evaluate how well your workload fits the compute size, you need to drill down into each aspect of the resource metrics: CPU, reads, writes, number of workers, and number of sessions. Here's a revised query using **sys.resource_stats** to report the average and maximum values of these resource metrics:
+2. To evaluate how well your workload fits the compute size, you need to drill down into each aspect of the resource metrics: CPU, reads, writes, number of workers, and number of sessions. Here's a revised query using `sys.resource_stats` to report the average and maximum values of these resource metrics:
```sql SELECT
The next example shows you different ways that you can use the **sys.resource_st
WHERE database_name = 'userdb1' AND start_time > DATEADD(day, -7, GETDATE()); ```
-3. With this information about the average and maximum values of each resource metric, you can assess how well your workload fits into the compute size you chose. Usually, average values from **sys.resource_stats** give you a good baseline to use against the target size. It should be your primary measurement stick. For an example, you might be using the Standard service tier with S2 compute size. The average use percentages for CPU and IO reads and writes are below 40 percent, the average number of workers is below 50, and the average number of sessions is below 200. Your workload might fit into the S1 compute size. It's easy to see whether your database fits in the worker and session limits. To see whether a database fits into a lower compute size with regard to CPU, reads, and writes, divide the DTU number of the lower compute size by the DTU number of your current compute size, and then multiply the result by 100:
+3. With this information about the average and maximum values of each resource metric, you can assess how well your workload fits into the compute size you chose. Usually, average values from `sys.resource_stats` give you a good baseline to use against the target size. It should be your primary measuring stick. For example, you might be using the Standard service tier with S2 compute size. The average use percentages for CPU and IO reads and writes are below 40 percent, the average number of workers is below 50, and the average number of sessions is below 200. Your workload might fit into the S1 compute size. It's easy to see whether your database fits in the worker and session limits. To see whether a database fits into a lower compute size with regard to CPU, reads, and writes, divide the DTU number of the lower compute size by the DTU number of your current compute size, and then multiply the result by 100:
`S1 DTU / S2 DTU * 100 = 20 / 50 * 100 = 40`
WHERE D.name = 'MyDatabase';
Again, these queries return a point-in-time count. If you collect multiple samples over time, you'll have the best understanding of your session use.
-You can get historical statistics on sessions by querying the [sys.resource_stats](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database) view and reviewing the **active_session_count** column.
+You can get historical statistics on sessions by querying the [sys.resource_stats](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database) view and reviewing the `active_session_count` column.
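For instance, a query of this shape returns the session history for one database. This is a minimal sketch; `MyDatabase` is a placeholder, and the query must be run while connected to the `master` database:

```sql
-- Historical session counts for one database, most recent first.
SELECT start_time, database_name, active_session_count
FROM sys.resource_stats
WHERE database_name = 'MyDatabase'
ORDER BY start_time DESC;
```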
## Monitoring query performance
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS q
ORDER BY highest_cpu_queries.total_worker_time DESC; ```
-## See also
+## Next steps
- [Introduction to Azure SQL Database and Azure SQL Managed Instance](sql-database-paas-overview.md) - [Diagnose and troubleshoot high CPU on Azure SQL Database](high-cpu-diagnose-troubleshoot.md)
azure-sql Resource Limits Logical Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-limits-logical-server.md
Previously updated : 02/25/2022 Last updated : 03/30/2022 # Resource management in Azure SQL Database
When encountering high compute utilization, mitigation options include:
When data space used reaches the maximum data size limit, either at the database level or at the elastic pool level, inserts and updates that increase data size fail and clients receive an [error message](troubleshoot-common-errors-issues.md). SELECT and DELETE statements remain unaffected.
-In Premium and Business Critical service tiers, clients also receive an error message if combined storage consumption by data, transaction log, and tempdb for a single database or an elastic pool exceeds maximum local storage size. For more information, see [Storage space governance](#storage-space-governance).
+In Premium and Business Critical service tiers, clients also receive an error message if combined storage consumption by data, transaction log, and `tempdb` for a single database or an elastic pool exceeds maximum local storage size. For more information, see [Storage space governance](#storage-space-governance).
When encountering high space utilization, mitigation options include:
When encountering high space utilization, mitigation options include:
- If the database is in an elastic pool, then alternatively the database can be moved outside of the pool, so that its storage space isn't shared with other databases. - Shrink a database to reclaim unused space. In elastic pools, shrinking a database provides more storage for other databases in the pool. For more information, see [Manage file space in Azure SQL Database](file-space-manage.md). - Check if high space utilization is due to a spike in the size of Persistent Version Store (PVS). PVS is a part of each database, and is used to implement [Accelerated Database Recovery](../accelerated-database-recovery.md). To determine current PVS size, see [PVS troubleshooting](/sql/relational-databases/accelerated-database-recovery-management#troubleshooting). A common reason for large PVS size is a transaction that is open for a long time (hours), preventing cleanup of row older versions in PVS.-- For databases and elastic pools in Premium and Business Critical service tiers that consume large amounts of storage, you may receive an out-of-space error even though used space in the database or elastic pool is below its maximum data size limit. This may happen if tempdb or transaction log files consume a large amount of storage toward the maximum local storage limit. [Fail over](high-availability-sla.md#testing-application-fault-resiliency) the database or elastic pool to reset tempdb to its initial smaller size, or [shrink](file-space-manage.md#shrinking-transaction-log-file) transaction log to reduce local storage consumption.
+- For databases and elastic pools in Premium and Business Critical service tiers that consume large amounts of storage, you may receive an out-of-space error even though used space in the database or elastic pool is below its maximum data size limit. This may happen if `tempdb` or transaction log files consume a large amount of storage toward the maximum local storage limit. [Fail over](high-availability-sla.md#testing-application-fault-resiliency) the database or elastic pool to reset `tempdb` to its initial smaller size, or [shrink](file-space-manage.md#shrinking-transaction-log-file) the transaction log to reduce local storage consumption.
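As an example of the transaction log shrink, `DBCC SHRINKFILE` can target the log file, which has file ID 2 in most databases. The target size of 1024 MB below is illustrative only:

```sql
-- Shrink the transaction log file (file_id 2) to reclaim local storage.
-- Target size is in MB; choose a value appropriate for your workload.
DBCC SHRINKFILE (2, 1024);
```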
### Sessions, workers, and requests
After database engine startup, as the workload starts reading data from storage,
Besides the data cache, memory is used in other components of the database engine. When there is demand for memory and all available memory has been used by the data cache, the database engine will dynamically reduce data cache size to make memory available to other components, and will dynamically grow data cache when other components release memory.
-In rare cases, a sufficiently demanding workload may cause an insufficient memory condition, leading to out-of-memory errors. This can happen at any level of memory utilization between 0% and 100%. This is more likely to occur on smaller compute sizes that have proportionally smaller memory limits, and/or with workloads using more memory for query processing, such as in [dense elastic pools](elastic-pool-resource-management.md).
+In rare cases, a sufficiently demanding workload may cause an insufficient memory condition, leading to out-of-memory errors. This can happen at any level of memory utilization between 0% and 100%. This is more likely to occur on smaller compute sizes that have proportionally smaller memory limits, and/or with workloads using more memory for query processing, such as in [dense elastic pools](elastic-pool-resource-management.md).
When encountering out-of-memory errors, mitigation options include:
+- Review the details of the OOM condition in [sys.dm_os_out_of_memory_events](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-out-of-memory-events).
- Increasing the service tier or compute size of the database or elastic pool. See [Scale single database resources](single-database-scale.md) and [Scale elastic pool resources](elastic-pool-scale.md). - Optimizing queries and configuration to reduce memory utilization. Common solutions are described in the following table.
Azure SQL Database resource governance is hierarchical in nature. From top to bo
Data I/O governance is a process in Azure SQL Database used to limit both read and write physical I/O against data files of a database. IOPS limits are set for each service level to minimize the "noisy neighbor" effect, to provide resource allocation fairness in a multi-tenant service, and to stay within the capabilities of the underlying hardware and storage.
-For single databases, workload group limits are applied to all storage I/O against the database. For elastic pools, workload group limits apply to each database in the pool. Additionally, the resource pool limit additionally applies to the cumulative I/O of the elastic pool. Tempdb I/O is subject to workload group limits, with the exception of Basic, Standard, and General Purpose service tier, where higher tempdb I/O limits apply. In general, resource pool limits may not be achievable by the workload against a database (either single or pooled), because workload group limits are lower than resource pool limits and limit IOPS/throughput sooner. However, pool limits may be reached by the combined workload against multiple databases in the same pool.
+For single databases, workload group limits are applied to all storage I/O against the database. For elastic pools, workload group limits apply to each database in the pool. Additionally, the resource pool limit applies to the cumulative I/O of the elastic pool. In `tempdb`, I/O is subject to workload group limits, with the exception of the Basic, Standard, and General Purpose service tiers, where higher `tempdb` I/O limits apply. In general, resource pool limits may not be achievable by the workload against a database (either single or pooled), because workload group limits are lower than resource pool limits and limit IOPS/throughput sooner. However, pool limits may be reached by the combined workload against multiple databases in the same pool.
For example, if a query generates 1000 IOPS without any I/O resource governance, but the workload group maximum IOPS limit is set to 900 IOPS, the query won't be able to generate more than 900 IOPS. However, if the resource pool maximum IOPS limit is set to 1500 IOPS, and the total I/O from all workload groups associated with the resource pool exceeds 1500 IOPS, then the I/O of the same query may be reduced below the workload group limit of 900 IOPS.
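To see whether a workload is approaching its governed I/O limits, one option (a sketch, not tied to the example numbers above) is to check recent utilization relative to the limits via `sys.dm_db_resource_stats` in the user database:

```sql
-- One row per ~15 seconds for roughly the last hour.
-- avg_data_io_percent near 100 suggests the workload is hitting its data I/O limit;
-- avg_log_write_percent near 100 suggests log rate governance is the bottleneck.
SELECT end_time, avg_data_io_percent, avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```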
Log rate governor traffic shaping is surfaced via the following wait types (expo
| HADR_THROTTLE_LOG_RATE_LOG_SIZE | Feedback control, limiting rates to avoid an out of log space condition | | HADR_THROTTLE_LOG_RATE_MISMATCHED_SLO | Geo-replication feedback control, limiting log rate to avoid high data latency and unavailability of geo-secondaries| - When encountering a log rate limit that is hampering desired scalability, consider the following options: - Scale up to a higher service level in order to get the maximum log rate of a service tier, or switch to a different service tier. The [Hyperscale](service-tier-hyperscale.md) service tier provides 100 MB/s log rate regardless of chosen service level.-- If data being loaded is transient, such as staging data in an ETL process, it can be loaded into tempdb (which is minimally logged).
+- If data being loaded is transient, such as staging data in an ETL process, it can be loaded into `tempdb` (which is minimally logged).
- For analytic scenarios, load into a clustered [columnstore](/sql/relational-databases/indexes/columnstore-indexes-overview) table, or a table with indexes that use [data compression](/sql/relational-databases/data-compression/data-compression). This reduces the required log rate. This technique does increase CPU utilization and is only applicable to data sets that benefit from clustered columnstore indexes or data compression. ### Storage space governance
WHERE database_id = DB_ID();
|`database_name`|Database name| |`slo_name`|Service objective name, including hardware generation| |`user_data_directory_space_quota_mb`|**Maximum local storage**, in MB|
-|`user_data_directory_space_usage_mb`|Current local storage consumption by data files, transaction log files, and tempdb files, in MB. Updated every five minutes.|
-
+|`user_data_directory_space_usage_mb`|Current local storage consumption by data files, transaction log files, and `tempdb` files, in MB. Updated every five minutes.|
This query should be executed in the user database, not in the master database. For elastic pools, the query can be executed in any database in the pool. Reported values apply to the entire pool. > [!IMPORTANT]
-> In Premium and Business Critical service tiers, if the workload attempts to increase combined local storage consumption by data files, transaction log files, and tempdb files over the **maximum local storage** limit, an out-of-space error will occur.
+> In Premium and Business Critical service tiers, if the workload attempts to increase combined local storage consumption by data files, transaction log files, and `tempdb` files over the **maximum local storage** limit, an out-of-space error will occur.
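As a sketch, the local storage query referenced above can be reconstructed against `sys.dm_user_db_resource_governance` (the DMV is assumed from the column names documented in the table; run it in the user database, or in any database in the pool for elastic pools):

```sql
-- Compare maximum local storage to current consumption by data, log, and tempdb files.
SELECT database_name,
       slo_name,
       user_data_directory_space_quota_mb,
       user_data_directory_space_usage_mb
FROM sys.dm_user_db_resource_governance
WHERE database_id = DB_ID();
```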
As databases are created, deleted, and increase or decrease in size, local storage consumption on a machine fluctuates over time. If the system detects that available local storage on a machine is low, and a database or an elastic pool is at risk of running out of space, it will move the database or elastic pool to a different machine with sufficient local storage available. This move occurs in an online fashion, similarly to a database scaling operation, and has a similar [impact](single-database-scale.md#impact), including a short (seconds) failover at the end of the operation. This failover terminates open connections and rolls back transactions, potentially impacting applications using the database at that time.
-Because all data is copied to local storage volumes on different machines, moving larger databases may require a substantial amount of time. During that time, if local space consumption by a database or an elastic pool, or by the tempdb database grows rapidly, the risk of running out of space increases. The system initiates database movement in a balanced fashion to minimize out-of-space errors while avoiding unnecessary failovers.
+Because all data is copied to local storage volumes on different machines, moving larger databases may require a substantial amount of time. During that time, if local space consumption by a database or an elastic pool, or by the `tempdb` database grows rapidly, the risk of running out of space increases. The system initiates database movement in a balanced fashion to minimize out-of-space errors while avoiding unnecessary failovers.
> [!NOTE] > Database movement due to insufficient local storage only occurs in the Premium or Business Critical service tiers. It does not occur in the Hyperscale, General Purpose, Standard, and Basic service tiers, because in those tiers data files are not stored in local storage. ## Tempdb sizes
-Size limits for tempdb in Azure SQL Database depend on the purchasing and deployment model.
+Size limits for `tempdb` in Azure SQL Database depend on the purchasing and deployment model.
-To learn more, review tempdb size limits for:
+To learn more, review `tempdb` size limits for:
- vCore purchasing model: [single databases](resource-limits-vcore-single-databases.md), [pooled databases](resource-limits-vcore-elastic-pools.md) - DTU purchasing model: [single databases](resource-limits-dtu-single-databases.md#tempdb-sizes), [pooled databases](resource-limits-dtu-elastic-pools.md#tempdb-sizes).
To learn more, review tempdb size limits for:
- For information about general Azure limits, see [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md). - For information about DTUs and eDTUs, see [DTUs and eDTUs](purchasing-models.md#dtu-purchasing-model).-- For information about tempdb size limits, see [single vCore databases](resource-limits-vcore-single-databases.md), [pooled vCore databases](resource-limits-vcore-elastic-pools.md), [single DTU databases](resource-limits-dtu-single-databases.md#tempdb-sizes), and [pooled DTU databases](resource-limits-dtu-elastic-pools.md#tempdb-sizes).
+- For information about `tempdb` size limits, see [single vCore databases](resource-limits-vcore-single-databases.md), [pooled vCore databases](resource-limits-vcore-elastic-pools.md), [single DTU databases](resource-limits-dtu-single-databases.md#tempdb-sizes), and [pooled DTU databases](resource-limits-dtu-elastic-pools.md#tempdb-sizes).
azure-sql Service Tiers Sql Database Vcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tiers-sql-database-vcore.md
This article reviews the [vCore purchasing model](service-tiers-vcore.md) for [A
[!INCLUDE [vcore-overview](../includes/vcore-overview.md)]
+> [!IMPORTANT]
+> Compute resources, I/O, and data and log storage are charged per database or elastic pool. Backup storage is charged for each database.
+ The vCore purchasing model used by Azure SQL Database provides several benefits over the DTU purchasing model: - Higher compute, memory, I/O, and storage limits.
azure-sql Service Tiers Vcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tiers-vcore.md
This article provides a brief overview of the vCore purchasing model used by bot
[!INCLUDE [vcore-overview](../includes/vcore-overview.md)]
+> [!IMPORTANT]
+> In Azure SQL Database, compute resources, I/O, and data and log storage are charged per database or elastic pool. Backup storage is charged for each database.
The vCore purchasing model provides transparency in the hardware details that power compute, control over the hardware generation, higher scaling granularity, and pricing discounts with the [Azure Hybrid Benefit (AHB)](../azure-hybrid-benefit.md) and [Reserved Instance (RI)](../database/reserved-capacity-overview.md).
azure-sql Troubleshoot Memory Errors Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/troubleshoot-memory-errors-issues.md
Title: Troubleshoot memory issues
-description: Provides steps to troubleshoot out of memory issues in Azure SQL Database
+description: Provides steps to investigate and troubleshoot out of memory issues in Azure SQL Database
Previously updated : 11/03/2021 Last updated : 01/14/2022 # Troubleshoot out of memory errors with Azure SQL Database
Try the following avenues of investigation in response to:
- Error code 701 with error message "There is insufficient system memory in resource pool '%ls' to run this query." - Error code 802 with error message "There is insufficient memory available in the buffer pool."
+## View out of memory events
+
+If you encounter out of memory errors, review [sys.dm_os_out_of_memory_events](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-out-of-memory-events). Introduced in January 2022, this view includes predicted out of memory cause information that is determined by a heuristic algorithm and is provided with a finite degree of confidence.
+
+```sql
+SELECT * FROM sys.dm_os_out_of_memory_events ORDER BY event_time DESC;
+```
+
+ ## Investigate memory allocation If out of memory errors persist in Azure SQL Database, consider at least temporarily increasing the service level objective of the database in the Azure portal. If out of memory errors persist, use the following queries to look for unusually high query memory grants that may contribute to an insufficient memory condition. Run the following example queries in the database that experienced the error (not in the `master` database of the Azure SQL logical server).
+### Use DMV to view out of memory events
+
+Beginning in April 2022, a new dynamic management view (DMV) has been added to allow visibility to the events and causes of out of memory (OOM) events in Azure SQL Database, `sys.dm_os_out_of_memory_events`. For more information, see [sys.dm_os_out_of_memory_events](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-out-of-memory-events).
+ ### Use DMVs to view memory clerks Start with a broad investigation, if the out of memory error occurred recently, by viewing the allocation of memory to memory clerks. Memory clerks are internal to the database engine for this Azure SQL Database. The top memory clerks in terms of pages allocated might be informative to what type of query or feature of SQL Server is consuming the most memory.
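A minimal sketch of such a memory-clerk query, using `sys.dm_os_memory_clerks`:

```sql
-- Top 10 memory clerks by currently allocated pages. A large buffer pool clerk
-- (MEMORYCLERK_SQLBUFFERPOOL) is normal; other clerks near the top may point to
-- the feature or workload consuming the most memory.
SELECT TOP (10) [type], name, pages_kb
FROM sys.dm_os_memory_clerks
ORDER BY pages_kb DESC;
```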
ORDER BY max_query_max_used_memory DESC, avg_query_max_used_memory DESC;
``` ### Extended events
-In addition to the previous information, it may be helpful to capture a trace of the activities on the server to thoroughly investigate an out of memory issue in Azure SQL Database.
+In addition to the previous information, it may be helpful to capture a trace of the activities on the server to thoroughly investigate an out of memory issue in Azure SQL Database.
-There are two ways to capture traces in SQL Server; Extended Events (XEvents) and Profiler Traces. However, [SQL Server Profiler](/sql/tools/sql-server-profiler/sql-server-profiler) is deprecated trace technology not supported for Azure SQL Database. [Extended Events](/sql/relational-databases/extended-events/extended-events) is the newer tracing technology that allows more versatility and less impact to the observed system, and its interface is integrated into SQL Server Management Studio (SSMS).
+There are two ways to capture traces in SQL Server: Extended Events (XEvents) and Profiler traces. However, [SQL Server Profiler](/sql/tools/sql-server-profiler/sql-server-profiler) is a deprecated trace technology not supported for Azure SQL Database. [Extended Events](/sql/relational-databases/extended-events/extended-events) is the newer tracing technology that allows more versatility and less impact to the observed system, and its interface is integrated into SQL Server Management Studio (SSMS). For more information on querying extended events in Azure SQL Database, see [Extended events in Azure SQL Database](/azure/azure-sql/database/xevent-db-diff-from-svr).
Refer to the document that explains how to use the [Extended Events New Session Wizard](/sql/relational-databases/extended-events/quick-start-extended-events-in-sql-server) in SSMS. For Azure SQL databases however, SSMS provides an Extended Events subfolder under each database in Object Explorer. Use an Extended Events session to capture these useful events, and identify the queries generating them:
azure-sql Xevent Code Event File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/xevent-code-event-file.md
Title: XEvent Event File code
-description: Provides PowerShell and Transact-SQL for a two-phase code sample that demonstrates the Event File target in an extended event on Azure SQL Database. Azure Storage is a required part of this scenario.
+description: Provides PowerShell and Transact-SQL for a two-phase code sample that demonstrates the Event File target in an extended event on Azure SQL Database and SQL Managed Instance. Azure Storage is a required part of this scenario.
Previously updated : 06/06/2020 Last updated : 03/25/2022
-# Event File target code for extended events in Azure SQL Database
+# Event File target code for extended events in Azure SQL Database and SQL Managed Instance
[!INCLUDE [sql-database-xevents-selectors-1-include](../../../includes/sql-database-xevents-selectors-1-include.md)] You want a complete code sample for a robust way to capture and report information for an extended event.
-In Microsoft SQL Server, the [Event File target](/previous-versions/sql/sql-server-2016/ff878115(v=sql.130)) is used to store event outputs into a local hard drive file. But such files are not available to Azure SQL Database. Instead we use the Azure Storage service to support the Event File target.
+In Microsoft SQL Server, the [Event File target](/sql/relational-databases/extended-events/targets-for-extended-events-in-sql-server) is used to store event outputs into a local hard drive file. But local storage is not available to Azure SQL Database or SQL Managed Instance. Instead, use Azure Blob Storage to support the Event File target.
-This topic presents a two-phase code sample:
+This article presents a two-phase code sample:
- PowerShell, to create an Azure Storage container in the cloud. - Transact-SQL:
This topic presents a two-phase code sample:
- Optionally you can [create an **AdventureWorksLT** demonstration database](single-database-create-quickstart.md) in minutes. -- SQL Server Management Studio (ssms.exe), ideally its latest monthly update version.
- You can download the latest ssms.exe from:
-
- - Topic titled [Download SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms).
- - [A direct link to the download.](https://go.microsoft.com/fwlink/?linkid=616025)
+- SQL Server Management Studio (ssms.exe), ideally its latest monthly update version: [Download SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms)
- You must have the [Azure PowerShell modules](https://go.microsoft.com/?linkid=9811175) installed.
- - The modules provide commands such as - **New-AzStorageAccount**.
+ - The modules provide commands, such as `New-AzStorageAccount`.
## Phase 1: PowerShell code for Azure Storage container
The script starts with commands to clean up after a possible previous run, and i
2. Start PowerShell ISE as an Administrator. 3. At the prompt, type<br/>`Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser`<br/>and then press Enter. 4. In PowerShell ISE, open your **.ps1** file. Run the script.
-5. The script first starts a new window in which you log in to Azure.
+5. The script first starts a new window in which you sign in to Azure.
- If you rerun the script without disrupting your session, you have the convenient option of commenting out the **Add-AzureAccount** command.
The script starts with commands to clean up after a possible previous run, and i
### PowerShell code
-This PowerShell script assumes you have already installed the Az module. For information, see [Install the Azure PowerShell module](/powershell/azure/install-Az-ps).
+This PowerShell script assumes you've already installed the `Az` module. For information, see [Install the Azure PowerShell module](/powershell/azure/install-Az-ps).
```powershell ## TODO: Before running, find all 'TODO' and make each edit!!
The script starts with commands to clean up after a possible previous run, and i
The PowerShell script printed a few named values when it ended. You must edit the Transact-SQL script to use those values. Find **TODO** in the Transact-SQL script to locate the edit points. 1. Open SQL Server Management Studio (ssms.exe).
-2. Connect to your database in Azure SQL Database.
-3. Click to open a new query pane.
+2. Connect to your database in Azure SQL Database or SQL Managed Instance.
+3. Select to open a new query pane.
4. Paste the following Transact-SQL script into the query pane. 5. Find every **TODO** in the script and make the appropriate edits. 6. Save, and then run the script.
The PowerShell script printed a few named values when it ended. You must edit th
- TODO: First, run the earlier PowerShell portion of this two-part code sample. - TODO: Second, find every 'TODO' in this Transact-SQL file, and edit each. -- Transact-SQL code for Event File target on Azure SQL Database.
+- Transact-SQL code for Event File target on Azure SQL Database or SQL Managed Instance.
SET NOCOUNT ON; GO
GO
## Output
-When the Transact-SQL script completes, click a cell under the **event_data_XML** column header. One **\<event>** element is displayed which shows one UPDATE statement.
+When the Transact-SQL script completes, select a cell under the **event_data_XML** column header. One **\<event>** element is displayed, showing one UPDATE statement.
Here is one **\<event>** element that was generated during testing:
An explanation of advanced options for the viewing of data from extended events
Suppose you wanted to run the preceding Transact-SQL sample on Microsoft SQL Server. -- For simplicity, you would want to completely replace use of the Azure Storage container with a simple file such as *C:\myeventdata.xel*. The file would be written to the local hard drive of the computer that hosts SQL Server.
+- For simplicity, you would want to completely replace use of the Azure Storage container with a simple file such as `C:\myeventdata.xel`. The file would be written to the local hard drive of the computer that hosts SQL Server.
- You would not need any kind of Transact-SQL statements for **CREATE MASTER KEY** and **CREATE CREDENTIAL**.-- In the **CREATE EVENT SESSION** statement, in its **ADD TARGET** clause, you would replace the Http value assigned made to **filename=** with a full path string like *C:\myfile.xel*.
+- In the **CREATE EVENT SESSION** statement, in its **ADD TARGET** clause, you would replace the HTTP value assigned to **filename=** with a full path string like `C:\myfile.xel`.
- - No Azure Storage account need be involved.
+ - An Azure Storage account is not needed.
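Under those assumptions, the local-file variant of the target could look like the following sketch. The session name and the event shown are hypothetical placeholders for the ones used in the full sample:

```sql
-- Hypothetical session; writes event output to a local file on the SQL Server host,
-- with no Azure Storage credential required.
CREATE EVENT SESSION [LocalFileSession] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.event_file
        (SET filename = N'C:\myeventdata.xel')
    WITH (STARTUP_STATE = OFF);
```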
-## More information
+## Next steps
For more info about accounts and containers in the Azure Storage service, see:
azure-sql Identify Query Performance Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/identify-query-performance-issues.md
Once you have eliminated a suboptimal plan and *Waiting-related* problems that a
If the workload uses temporary tables or there are `tempdb` spills in the plans, the queries might have a problem with `tempdb` throughput. To investigate further, review [identify tempdb issues](database/monitoring-with-dmvs.md#identify-tempdb-performance-issues). - **Memory-related problems**
- If the workload doesn't have enough memory, the page life expectancy might drop, or the queries might get less memory than they need. In some cases, built-in intelligence in Query Optimizer will fix memory-related problems. See using DMVs to [identify memory grant issues](database/monitoring-with-dmvs.md#identify-memory-grant-wait-performance-issues). For more information and sample queries, see [Troubleshoot out of memory errors with Azure SQL Database](database/troubleshoot-memory-errors-issues.md).
+ If the workload doesn't have enough memory, the page life expectancy might drop, or the queries might get less memory than they need. In some cases, built-in intelligence in Query Optimizer will fix memory-related problems. See using DMVs to [identify memory grant issues](database/monitoring-with-dmvs.md#identify-memory-grant-wait-performance-issues). For more information and sample queries, see [Troubleshoot out of memory errors with Azure SQL Database](database/troubleshoot-memory-errors-issues.md). If you encounter out of memory errors, review [sys.dm_os_out_of_memory_events](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-out-of-memory-events).
### Methods to show top wait categories
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/doc-changes-updates-release-notes-whats-new.md
The following table lists the features of Azure SQL Managed Instance that have t
|[Linked server - pass-through Azure AD authentication](/sql/relational-databases/system-stored-procedures/sp-addlinkedserver-transact-sql#i-create-sql-managed-instance-linked-server-with-pass-through-azure-ad-authentication) |November 2021 | Create a linked server with pass-through Azure AD authentication for your Azure SQL Managed Instance. | |[Long-term backup retention](long-term-backup-retention-configure.md) |November 2021 | Store full backups for a specific database with configured redundancy for up to 10 years in Azure Blob storage, restoring the database as a new database. | |[Move instance to different subnet](vnet-subnet-move-instance.md)| November 2021 | Move SQL Managed Instance to a different subnet using the Azure portal, Azure PowerShell or the Azure CLI. |
-|[Audit management operations](../database/auditing-overview.md#auditing-of-microsoft-support-operations) | March 2021 | Azure SQL audit capabilities enable you to audit operations done by Microsoft support engineers when they need to access your SQL assets during a support request, enabling more transparency in your workforce. |
-|[Granular permissions for dynamic data masking](../database/dynamic-data-masking-overview.md)| March 2021 | Dynamic data masking helps prevent unauthorized access to sensitive data by enabling customers to designate how much of the sensitive data to reveal with minimal impact on the application layer. It's a policy-based security feature that hides the sensitive data in the result set of a query over designated database fields, while the data in the database is not changed. It's now possible to assign granular permissions for data that's been dynamically masked. To learn more, see [Dynamic data masking](../database/dynamic-data-masking-overview.md#permissions). |
-|[Machine Learning Service](machine-learning-services-overview.md) | March 2021 | Machine Learning Services is a feature of Azure SQL Managed Instance that provides in-database machine learning, supporting both Python and R scripts. The feature includes Microsoft Python and R packages for high-performance predictive analytics and machine learning. |
- ## Documentation changes
Learn about significant changes to the Azure SQL Managed Instance documentation.
| Changes | Details | | | | | **Data virtualization preview** | It's now possible to query data in external sources such as Azure Data Lake Storage Gen2 or Azure Blob Storage, joining it with locally stored relational data. This feature is currently in preview. To learn more, see [Data virtualization](data-virtualization-overview.md). |
-| **Log Replay Service migration** | Use the Log Replay Service to migrate from SQL Server to Azure SQL Managed Instance. This feature is currently in preview. To learn more, see [Migrate with Log Replay Service](log-replay-service-migrate.md). |
| **Managed Instance link guidance** | We've published a number of guides for using the [Managed Instance link feature](managed-instance-link-feature-overview.md), including how to [prepare your environment](managed-instance-link-preparation.md), [configure replication by using SSMS](managed-instance-link-use-ssms-to-replicate-database.md), [configure replication via scripts](managed-instance-link-use-scripts-to-replicate-database.md), [fail over your database by using SSMS](managed-instance-link-use-ssms-to-failover-database.md), [fail over your database via scripts](managed-instance-link-use-scripts-to-failover-database.md) and some [best practices](managed-instance-link-best-practices.md) when using the link feature (currently in preview). | | **Maintenance window GA, advance notifications preview** | The [maintenance window](../database/maintenance-window.md) feature is now generally available, allowing you to configure a maintenance schedule for your Azure SQL Managed Instance. It's also possible to receive advance notifications for planned maintenance events, which is currently in preview. Review [Maintenance window advance notifications (preview)](../database/advance-notifications.md) to learn more. | | **Windows Auth for Azure Active Directory principals preview** | Windows Authentication for managed instances empowers customers to move existing services to the cloud while maintaining a seamless user experience, and provides the basis for infrastructure modernization. Learn more in [Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance](winauth-azuread-overview.md). |
azure-sql Log Replay Service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/log-replay-service-migrate.md
Title: Migrate databases to SQL Managed Instance using Log Replay Service
-description: Learn how to migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service
+description: Learn how to migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service (LRS).
Previously updated : 03/23/2022 Last updated : 03/29/2022 # Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service (Preview) [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-This article explains how to manually configure database migration from SQL Server 2008-2019 to Azure SQL Managed Instance by using Log Replay Service (LRS), currently in public preview. LRS is a free of charge cloud service enabled for SQL Managed Instance based on SQL Server log-shipping technology.
+This article explains how to manually configure database migration from SQL Server 2008-2019 to Azure SQL Managed Instance by using Log Replay Service (LRS), currently in public preview. LRS is a free-of-charge cloud service enabled for Azure SQL Managed Instance based on SQL Server log-shipping technology.
-[Azure Database Migration Service](../../dms/tutorial-sql-server-to-managed-instance.md) and LRS use the same underlying migration technology and the same APIs. By releasing LRS, we're further enabling complex custom migrations and hybrid architectures between on-premises SQL Server and SQL Managed Instance.
+[Azure Database Migration Service](../../dms/tutorial-sql-server-to-managed-instance.md) and LRS use the same underlying migration technology and APIs. LRS further enables complex custom migrations and hybrid architectures between on-premises SQL Server and SQL Managed Instance.
## When to use Log Replay Service When you can't use Azure Database Migration Service for migration, you can use LRS directly with PowerShell, Azure CLI cmdlets, or APIs to manually build and orchestrate database migrations to SQL Managed Instance.
-You might consider using LRS in the following cases:
+Consider using LRS in the following cases:
- You need more control for your database migration project.-- There's little tolerance for downtime on migration cutover.-- Database Migration Service executable file can't be installed in your environment.-- Database Migration Service executable file doesn't have file access to your database backups.
+- There's little tolerance for downtime during migration cutover.
+- The Database Migration Service executable file can't be installed in your environment.
+- The Database Migration Service executable file doesn't have file access to your database backups.
- No access to the host OS is available, or there are no administrator privileges. - You can't open network ports from your environment to Azure. - Network throttling, or proxy blocking issues exist in your environment.
You might consider using LRS in the following cases:
Building a custom solution to migrate databases to the cloud with LRS requires several orchestration steps, as shown in the diagram and a table later in this section.
-The migration consists of making database backups on SQL Server with `CHECKSUM` enabled, and copying backup files to Azure Blob Storage. Full, log, and differential backups are supported. LRS cloud service is used to restore backup files from Azure Blob Storage to SQL Managed Instance. Blob Storage is intermediary storage between SQL Server and SQL Managed Instance.
+Migration consists of making database backups on SQL Server with `CHECKSUM` enabled, and copying backup files to Azure Blob Storage. Full, log, and differential backups are supported. LRS cloud service is used to restore backup files from Azure Blob Storage to SQL Managed Instance. Blob Storage is intermediary storage between SQL Server and SQL Managed Instance.
-LRS monitors Blob Storage for any new differential or log backups added after the full backup has been restored. LRS then automatically restores these new files. You can use the service to monitor the progress of backup files being restored on SQL Managed Instance, and you can stop the process if necessary.
+LRS monitors Blob Storage for any new differential or log backups added after the full backup has been restored. LRS then automatically restores these new files. You can use the service to monitor the progress of backup files being restored to SQL Managed Instance, and stop the process if necessary.
-LRS does not require a specific naming convention for backup files. It scans all files placed on Blob Storage and constructs the backup chain from reading the file headers only. Databases are in a "restoring" state during the migration process. Databases are restored in [NORECOVERY](/sql/t-sql/statements/restore-statements-transact-sql#comparison-of-recovery-and-norecovery) mode, so they can't be used for reading or writing until the migration process is completed.
+LRS does not require a specific naming convention for backup files. It scans all files placed on Blob Storage and constructs the backup chain from reading the file headers only. Databases are in a **restoring** state during the migration process. Databases are restored in [NORECOVERY](/sql/t-sql/statements/restore-statements-transact-sql#comparison-of-recovery-and-norecovery) mode, so they can't be used for read or write workloads until the migration process completes.
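The chain construction described above can be sketched in a few lines. The following Python sketch is purely illustrative: the `first_lsn` header fields are hypothetical stand-ins for metadata LRS reads from real backup file headers, not an actual file format.

```python
# Illustrative sketch: reconstructing a restore order from backup file
# headers by LSN, the way LRS conceptually builds the backup chain.
# The header dicts below are hypothetical stand-ins, not a real format.

def build_backup_chain(headers):
    """Start from the full backup, then apply the rest in LSN order."""
    full = min((h for h in headers if h["type"] == "full"),
               key=lambda h: h["first_lsn"])
    rest = sorted((h for h in headers if h["type"] != "full"),
                  key=lambda h: h["first_lsn"])
    return [h["name"] for h in [full] + rest]

headers = [
    {"name": "db_log2.trn", "type": "log", "first_lsn": 200},
    {"name": "db_full.bak", "type": "full", "first_lsn": 0},
    {"name": "db_log1.trn", "type": "log", "first_lsn": 100},
]
print(build_backup_chain(headers))
# ['db_full.bak', 'db_log1.trn', 'db_log2.trn']
```

Note that the file names play no role in the ordering, which mirrors the documented behavior: only the header metadata determines the chain.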
If you're migrating several databases, you need to:

- Place backup files for each database in a separate folder on Azure Blob Storage in a flat-file structure. For example, use separate database folders: `blobcontainer/database1/files`, `blobcontainer/database2/files`, etc.
-- Do not use nested folders inside database folders as this structure is not supported. For example, do not use subfolders: `blobcontainer/database1/subfolder/files`.
+- Don't use nested folders inside database folders as this structure is not supported. For example, do not use subfolders: `blobcontainer/database1/subfolder/files`.
- Start LRS separately for each database.
-- Specify different URI path to separate database folders on Azure Blob Storage.
+- Specify different URI paths to separate database folders on Azure Blob Storage.
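Before starting LRS, a quick script can verify that your blob layout follows these rules. This Python sketch is illustrative and assumes blob names of the form `databasefolder/file`:

```python
# Illustrative check that backup blobs use the required flat layout:
# exactly one database folder level per blob, no nested subfolders.

def is_flat_layout(blob_names):
    """Return True if every blob sits directly inside a database folder."""
    return all(name.count("/") == 1 for name in blob_names)

print(is_flat_layout(["database1/full.bak", "database2/full.bak"]))  # True
print(is_flat_layout(["database1/subfolder/full.bak"]))              # False
```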
-You can start LRS in either *autocomplete* or *continuous* mode. When you start it in autocomplete mode, the migration will complete automatically when the last of the specified backup files has been restored. When you start LRS in continuous mode, the service will continuously restore any new backup files added, and the migration will complete on the manual cutover only.
+You can start LRS in either *autocomplete* or *continuous* mode. When you start it in autocomplete mode, the migration completes automatically when the last of the specified backup files has been restored. When you start LRS in continuous mode, the service continuously restores any new backup files added, and the migration completes during manual cutover only.
-We recommend that you manually cut over after the final log-tail backup has been taken and is shown as restored on SQL Managed Instance. The final cutover step will make the database come online and available for read and write use on SQL Managed Instance.
+We recommend that you manually cut over after the final log-tail backup is shown as restored on SQL Managed Instance. The final cutover step makes the database come online and available for read and write use on SQL Managed Instance.
-After LRS is stopped, either automatically through autocomplete, or manually through cutover, you can't resume the restore process for a database that was brought online on SQL Managed Instance. For example, once migration has been completed you are no longer able to restore additional differential backups for an online database on Managed Instance. To restore more backup files after the migration completes through autocomplete or cutover, you need to delete the database and perform the migration again from the scratch.
+After LRS is stopped, either automatically through autocomplete, or manually through cutover, you can't resume the restore process for a database that was brought online on SQL Managed Instance. For example, once migration completes, you are no longer able to restore additional differential backups for an online database. To restore more backup files after migration completes, you need to delete the database from the managed instance and restart the migration from the beginning.
:::image type="content" source="./media/log-replay-service-migrate/log-replay-service-conceptual.png" alt-text="Diagram that explains the Log Replay Service orchestration steps for SQL Managed Instance." border="false":::

| Operation | Details |
| :-- | :- |
-| **1. Copy database backups from SQL Server to Blob Storage**. | Copy full, differential, and log backups from SQL Server to a Blob Storage container by using [AzCopy](../../storage/common/storage-use-azcopy-v10.md) or [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/). <br /><br />Use any file names. LRS doesn't require a specific file-naming convention.<br /><br />In migrating several databases, you need a separate folder for each database. |
-| **2. Start LRS in the cloud**. | You can restart the service with a choice of cmdlets: PowerShell ([start-azsqlinstancedatabaselogreplay](/powershell/module/az.sql/start-azsqlinstancedatabaselogreplay)) or Azure CLI ([az_sql_midb_log_replay_start cmdlets](/cli/azure/sql/midb/log-replay#az-sql-midb-log-replay-start)). <br /><br /> Start LRS separately for each database that points to a backup folder on Blob Storage. <br /><br /> After you start the service, it will take backups from the Blob Storage container and start restoring them on SQL Managed Instance.<br /><br /> If you started LRS in continuous mode, after all initially uploaded backups are restored, the service will watch for any new files uploaded to the folder. The service will continuously apply logs based on the log sequence number (LSN) chain until it's stopped. |
-| **2.1. Monitor the operation's progress**. | You can monitor progress of the restore operation with a choice of cmdlets: PowerShell ([get-azsqlinstancedatabaselogreplay](/powershell/module/az.sql/get-azsqlinstancedatabaselogreplay)) or Azure CLI ([az_sql_midb_log_replay_show cmdlets](/cli/azure/sql/midb/log-replay#az-sql-midb-log-replay-show)). |
-| **2.2. Stop the operation if needed**. | If you need to stop the migration process, you have a choice of cmdlets: PowerShell ([stop-azsqlinstancedatabaselogreplay](/powershell/module/az.sql/stop-azsqlinstancedatabaselogreplay)) or Azure CLI ([az_sql_midb_log_replay_stop](/cli/azure/sql/midb/log-replay#az-sql-midb-log-replay-stop)). <br /><br /> Stopping the operation will delete the database that you're restoring on SQL Managed Instance. After you stop an operation, you can't resume LRS for a database. You need to restart the migration process from the scratch. |
-| **3. Cut over to the cloud when you're ready**. | Stop the application and the workload. Take the last log-tail backup and upload it to Azure Blob Storage.<br /><br /> Complete the cutover by initiating an LRS `complete` operation with a choice of cmdlets: PowerShell ([complete-azsqlinstancedatabaselogreplay](/powershell/module/az.sql/complete-azsqlinstancedatabaselogreplay)) or Azure CLI [az_sql_midb_log_replay_complete](/cli/azure/sql/midb/log-replay#az-sql-midb-log-replay-complete). This operation will stop LRS and cause the database to come online for read and write use on SQL Managed Instance.<br /><br /> Repoint the application connection string from SQL Server to SQL Managed Instance. You will need to orchestrate this step yourself, either through a manual connection string change in your application, or automatically (for example, if your application can read the connection string from a property, or a database). |
+| **1. Copy database backups from SQL Server to Blob Storage**. | Copy full, differential, and log backups from SQL Server to a Blob Storage container by using [AzCopy](../../storage/common/storage-use-azcopy-v10.md) or [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/). <br /><br />Use any file names. LRS doesn't require a specific file-naming convention.<br /><br />Use a separate folder for each database when migrating several databases. |
+| **2. Start LRS in the cloud**. | You can start the service with PowerShell ([start-azsqlinstancedatabaselogreplay](/powershell/module/az.sql/start-azsqlinstancedatabaselogreplay)) or the Azure CLI ([az_sql_midb_log_replay_start cmdlets](/cli/azure/sql/midb/log-replay#az-sql-midb-log-replay-start)). <br /><br /> Start LRS separately for each database that points to a backup folder on Blob Storage. <br /><br /> After the service starts, it will take backups from the Blob Storage container and start restoring them to SQL Managed Instance.<br /><br /> When started in continuous mode, LRS restores all the backups initially uploaded and then watches for any new files uploaded to the folder. The service will continuously apply logs based on the log sequence number (LSN) chain until it's stopped manually. |
+| **2.1. Monitor the operation's progress**. | You can monitor progress of the restore operation with PowerShell ([get-azsqlinstancedatabaselogreplay](/powershell/module/az.sql/get-azsqlinstancedatabaselogreplay)) or the Azure CLI ([az_sql_midb_log_replay_show cmdlets](/cli/azure/sql/midb/log-replay#az-sql-midb-log-replay-show)). |
+| **2.2. Stop the operation if needed**. | If you need to stop the migration process, use PowerShell ([stop-azsqlinstancedatabaselogreplay](/powershell/module/az.sql/stop-azsqlinstancedatabaselogreplay)) or the Azure CLI ([az_sql_midb_log_replay_stop](/cli/azure/sql/midb/log-replay#az-sql-midb-log-replay-stop)). <br /><br /> Stopping the operation deletes the database that you're restoring to SQL Managed Instance. After you stop an operation, you can't resume LRS for a database. You need to restart the migration process from the beginning. |
+| **3. Cut over to the cloud when you're ready**. | Stop the application and workload. Take the last log-tail backup and upload it to Azure Blob Storage.<br /><br /> Complete the cutover by initiating an LRS `complete` operation with PowerShell ([complete-azsqlinstancedatabaselogreplay](/powershell/module/az.sql/complete-azsqlinstancedatabaselogreplay)) or the Azure CLI [az_sql_midb_log_replay_complete](/cli/azure/sql/midb/log-replay#az-sql-midb-log-replay-complete). This operation stops LRS and brings the database online for read and write workloads on SQL Managed Instance.<br /><br /> Repoint the application connection string from SQL Server to SQL Managed Instance. You will need to orchestrate this step yourself, either through a manual connection string change in your application, or automatically (for example, if your application can read the connection string from a property, or a database). |
-## Requirements for getting started
+## Getting started
-### SQL Server side
-- SQL Server 2008-2019
+Review the requirements in this section before you use LRS to migrate.
+
+### SQL Server
+
+Make sure you have the following requirements for SQL Server:
+
+- SQL Server versions 2008 to 2019
- Full backup of databases (one or multiple files)
- Differential backup (one or multiple files)
- Log backup (not split for a transaction log file)
- `CHECKSUM` enabled for backups (mandatory)
-### Azure side
+### Azure
+
+Make sure you have the following requirements for Azure:
+
- PowerShell Az.SQL module version 2.16.0 or later ([installed](https://www.powershellgallery.com/packages/Az.Sql/) or accessed through [Azure Cloud Shell](/azure/cloud-shell/))
- Azure CLI version 2.19.0 or later ([installed](/cli/azure/install-azure-cli))
- Azure Blob Storage container provisioned
After LRS is stopped, either automatically through autocomplete, or manually thr
Running LRS through the provided clients requires one of the following Azure roles:

- Subscription Owner role
-- [Managed Instance Contributor](../../role-based-access-control/built-in-roles.md#sql-managed-instance-contributor) role
+- [SQL Managed Instance Contributor](../../role-based-access-control/built-in-roles.md#sql-managed-instance-contributor) role
- Custom role with the following permission: `Microsoft.Sql/managedInstances/databases/*`

## Requirements
We recommend the following best practices:
> - LRS doesn't support read-only access to databases during the migration.
> - After the migration completes, the migration process is finalized and cannot be resumed with additional differential backups.
-## Steps to execute
+## Steps to migrate
+
+To migrate using LRS, follow the steps in this section.
### Make database backups on SQL Server

You can make database backups on SQL Server by using either of the following options:

- Back up to the local disk storage, and then upload files to Azure Blob Storage, if your environment restricts direct backups to Blob Storage.
-- Back up directly to Blob Storage with the `TO URL` option in T-SQL, if your environment and security procedures allow it.
+- Back up directly to Blob Storage with the `TO URL` option in Transact-SQL (T-SQL), if your environment and security procedures allow it.
Set databases that you want to migrate to the full recovery model to allow log backups.
SET RECOVERY FULL
GO
```
-To manually make full, differential, and log backups of your database on local storage, use the following sample T-SQL scripts. Ensure that the `CHECKSUM` option is enabled, because it's mandatory for LRS.
+To manually make full, differential, and log backups of your database to local storage, use the following sample T-SQL scripts. Ensure the `CHECKSUM` option is enabled, as it's mandatory for LRS.
++
+The following example takes a full database backup to the local disk:
```SQL
Example of how to make a full database backup to the local disk
+-- Take full database backup to local disk
BACKUP DATABASE [SampleDB]
TO DISK='C:\BACKUP\SampleDB_full.bak' WITH INIT, COMPRESSION, CHECKSUM
GO
+```
Example of how to make a differential database backup to the local disk
+The following example takes a differential backup to the local disk:
+
+```sql
+-- Take differential database backup to local disk
BACKUP DATABASE [SampleDB]
TO DISK='C:\BACKUP\SampleDB_diff.bak' WITH DIFFERENTIAL, COMPRESSION, CHECKSUM
GO
+```
+
+The following example takes a transaction log backup to the local disk:
Example of how to make a transactional log backup to the local disk
+```sql
+-- Take transactional log backup to local disk
BACKUP LOG [SampleDB] TO DISK='C:\BACKUP\SampleDB_log.trn' WITH COMPRESSION, CHECKSUM
Azure Blob Storage is used as intermediary storage for backup files between SQL
### Copy backups from SQL Server to Blob Storage
-In migrating databases to a managed instance by using LRS, you can use the following approaches to upload backups to Blob Storage:
-- Using SQL Server native [BACKUP TO URL](/sql/relational-databases/backup-restore/sql-server-backup-to-url) functionality
-- Using [AzCopy](../../storage/common/storage-use-azcopy-v10.md) or [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer) to upload backups to a blob container
-- Using Storage Explorer in the Azure portal
+When migrating databases to a managed instance by using LRS, you can use the following approaches to upload backups to Blob Storage:
+
+- SQL Server native [BACKUP TO URL](/sql/relational-databases/backup-restore/sql-server-backup-to-url)
+- [AzCopy](../../storage/common/storage-use-azcopy-v10.md) or [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer) to upload backups to a blob container
+- Storage Explorer in the Azure portal
> [!NOTE]
> To migrate multiple databases using the same Azure Blob Storage container, place all backup files of an individual database into a separate folder inside the container. Use flat-file structure for each database folder, as nested folders are not supported.
>
### Make backups from SQL Server directly to Blob Storage
-If your corporate and network policies allow it, an alternative is to make backups from SQL Server directly to Blob Storage by using the SQL Server native [BACKUP TO URL](/sql/relational-databases/backup-restore/sql-server-backup-to-url) option. If you can pursue this option, you don't need to make backups on the local storage and upload them to Blob Storage.
+
+If your corporate and network policies allow it, take backups from SQL Server directly to Blob Storage by using the SQL Server native [BACKUP TO URL](/sql/relational-databases/backup-restore/sql-server-backup-to-url) option. If you can use this option, you don't need to take backups to local storage and upload them to Blob Storage.
As the first step, this operation requires you to generate an SAS authentication token for Blob Storage and then import the token to SQL Server. The second step is to make backups with the `TO URL` option in T-SQL. Ensure that all backups are made with the `CHECKSUM` option enabled. For reference, the following sample code makes backups to Blob Storage. This example does not include instructions on how to import the SAS token. You can find detailed instructions, including how to generate and import the SAS token to SQL Server, in the tutorial [Use Azure Blob Storage with SQL Server](/sql/relational-databases/tutorial-use-azure-blob-storage-service-with-sql-server-2016#1create-stored-access-policy-and-shared-access-storage).
+The following example takes a full database backup to a URL:
+
```SQL
Example of how to make a full database backup to a URL
+-- Take a full database backup to a URL
BACKUP DATABASE [SampleDB]
TO URL = 'https://<mystorageaccountname>.blob.core.windows.net/<containername>/<databasefolder>/SampleDB_full.bak' WITH INIT, COMPRESSION, CHECKSUM
GO
Example of how to make a differential database backup to a URL
+```
+
+The following example takes a differential database backup to a URL:
+
+```sql
+-- Take a differential database backup to a URL
BACKUP DATABASE [SampleDB]
TO URL = 'https://<mystorageaccountname>.blob.core.windows.net/<containername>/<databasefolder>/SampleDB_diff.bak' WITH DIFFERENTIAL, COMPRESSION, CHECKSUM
GO
+```
+
+The following example takes a transaction log backup to a URL:
Example of how to make a transactional log backup to a URL
+```sql
+-- Take a transactional log backup to a URL
BACKUP LOG [SampleDB] TO URL = 'https://<mystorageaccountname>.blob.core.windows.net/<containername>/<databasefolder>/SampleDB_log.trn' WITH COMPRESSION, CHECKSUM
WITH COMPRESSION, CHECKSUM
### Migration of multiple databases
-If migrating multiple databases using the same Azure Blob Storage container, you must place backup files for different databases in separate folders inside the container. All backup files for a single database must be placed in a flat-file structure inside a database folder, and there must not exist nested folders within as this structure is not supported.
+If migrating multiple databases using the same Azure Blob Storage container, you must place backup files for different databases in separate folders inside the container. All backup files for a single database must be placed in a flat-file structure inside a database folder, and nested folders are not supported.
Below is an example of the folder structure inside an Azure Blob Storage container required to migrate multiple databases using LRS.
https://<mystorageaccountname>.blob.core.windows.net/<containername>/<database3>
### Generate a Blob Storage SAS authentication token for LRS
-Azure Blob Storage is used as intermediary storage for backup files between SQL Server and SQL Managed Instance. You need to generate an SAS authentication token, with only list and read permissions, for LRS. The token will enable LRS to access Blob Storage and use the backup files to restore them on SQL Managed Instance.
+Azure Blob Storage is used as intermediary storage for backup files between SQL Server and SQL Managed Instance. Generate an SAS authentication token for LRS with only list and read permissions. The token enables LRS to access Blob Storage and use the backup files to restore them to SQL Managed Instance.
Follow these steps to generate the token:
-1. Open Storage Explorer from the Azure portal.
+1. Open **Storage Explorer** from the Azure portal.
2. Expand **Blob Containers**.
3. Right-click the blob container and select **Get Shared Access Signature**.

:::image type="content" source="./media/log-replay-service-migrate/lrs-sas-token-01.png" alt-text="Screenshot that shows selections for generating an S A S authentication token.":::
-4. Select the timeframe for token expiration. Ensure that the token is valid for the duration of your migration.
+4. Select the time frame for token expiration. Ensure the token is valid for the duration of your migration.
5. Select the time zone for the token: UTC or your local time.

> [!IMPORTANT]
- > The time zone of the token and your managed instance might mismatch. Ensure that the SAS token has the appropriate time validity, taking time zones into consideration. If possible, set the time zone to an earlier and later time of your planned migration window.
+ > The time zone of the token and your managed instance might mismatch. Ensure that the SAS token has the appropriate time validity, taking time zones into consideration. To account for time zone differences, set the validity time frame **FROM** well before your migration window starts, and the **TO** time frame well after you expect your migration to complete.
+
6. Select **Read** and **List** permissions only.
> [!IMPORTANT]
- > Don't select any other permissions. If you do, LRS won't start. This security requirement is by design.
+ > Don't select any other permissions. If you do, LRS won't start. This security requirement is by design.
+
7. Select **Create**.

:::image type="content" source="./media/log-replay-service-migrate/lrs-sas-token-02.png" alt-text="Screenshot that shows selections for S A S token expiration, time zone, and permissions, along with the Create button.":::
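Because an expired or not-yet-valid token is a common failure mode, you might script a quick check that the token's `st` (start) and `se` (expiry) query parameters cover your planned migration window. The following Python sketch is illustrative only; the token string is a placeholder, not a real signature:

```python
from datetime import datetime, timezone
from urllib.parse import parse_qs

# Illustrative check that a SAS token's validity window (the st/se
# query parameters, in UTC) covers a planned migration window.

def token_covers_window(sas_token, window_start, window_end):
    params = parse_qs(sas_token)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    st = datetime.strptime(params["st"][0], fmt).replace(tzinfo=timezone.utc)
    se = datetime.strptime(params["se"][0], fmt).replace(tzinfo=timezone.utc)
    return st <= window_start and window_end <= se

# Placeholder token with the same shape as the examples in this article.
sas = "sv=2019-02-02&ss=b&srt=sco&sp=rl&se=2023-12-02T00:09:14Z&st=2019-11-25T16:09:14Z&spr=https&sig=example"
start = datetime(2021, 1, 1, tzinfo=timezone.utc)
end = datetime(2021, 1, 3, tzinfo=timezone.utc)
print(token_covers_window(sas, start, end))  # True
```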
The SAS authentication is generated with the time validity that you specified. Y
:::image type="content" source="./media/log-replay-service-migrate/lrs-generated-uri-token.png" alt-text="Screenshot that shows an example of the U R I version of an S A S token.":::

> [!NOTE]
- > Using SAS tokens created with permissions set through defining a [stored access policy](/rest/api/storageservices/define-stored-access-policy.md) is not supported at this time. You will need to follow the instructions in this guide on manually specifying Read and List permissions for the SAS token.
+ > Using SAS tokens created with permissions set through defining a [stored access policy](/rest/api/storageservices/define-stored-access-policy.md) is not supported at this time. Follow the instructions in this article to manually specify **Read** and **List** permissions for the SAS token.
### Copy parameters from the SAS token
Before you use the SAS token to start LRS, you need to understand its structure.
:::image type="content" source="./media/log-replay-service-migrate/lrs-token-structure.png" alt-text="Example U R I for a generated S A S token for Log Replay Service." border="false":::
-The first part, starting with `https://` until the question mark (`?`), is used for the `StorageContainerURI` parameter that's fed as in input to LRS. It gives LRS information about the folder where database backup files are stored.
+The first part, starting with `https://` until the question mark (`?`), is used for the `StorageContainerURI` parameter that's fed as the input to LRS. It gives LRS information about the folder where the database backup files are stored.
-The second part, starting after the question mark (`?`) and going all the way until the end of the string, is the `StorageContainerSasToken` parameter. This part is the actual signed authentication token, which is valid for the duration of the specified time. This part does not necessarily need to start with `sp=` as shown in the example. Your case might differ.
+The second part, starting after the question mark (`?`) and going all the way until the end of the string, is the `StorageContainerSasToken` parameter. This part is the actual signed authentication token, which is valid for the duration of the specified time. This part does not necessarily need to start with `sp=` as shown in the example. Your case may differ.
Copy the parameters as follows:
-1. Copy the first part of the token, starting from `https://` all the way until the question mark (`?`). Use it as the `StorageContainerUri` parameter in PowerShell or the Azure CLI for starting LRS.
+1. Copy the first part of the token, starting from `https://` all the way until the question mark (`?`). Use it as the `StorageContainerUri` parameter in PowerShell or the Azure CLI when starting LRS.
:::image type="content" source="./media/log-replay-service-migrate/lrs-token-uri-copy-part-01.png" alt-text="Screenshot that shows copying the first part of the token.":::
-2. Copy the second part of the token, starting after the question mark (`?`) all the way until the end of the string. Use it as the `StorageContainerSasToken` parameter in PowerShell or the Azure CLI for starting LRS.
+2. Copy the second part of the token, starting after the question mark (`?`) all the way until the end of the string. Use it as the `StorageContainerSasToken` parameter in PowerShell or the Azure CLI when starting LRS.
:::image type="content" source="./media/log-replay-service-migrate/lrs-token-uri-copy-part-02.png" alt-text="Screenshot that shows copying the second part of the token.":::
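The two copy steps above amount to splitting the full SAS URL at the first question mark. A minimal Python sketch; the URL below is a placeholder, not a real token:

```python
# Split a full SAS URL into the StorageContainerUri and
# StorageContainerSasToken parameters expected by the LRS start commands.

def split_sas_url(sas_url):
    uri, _, token = sas_url.partition("?")
    return uri, token

url = ("https://mystorageaccount.blob.core.windows.net/mycontainer/database1"
       "?sv=2019-02-02&ss=b&srt=sco&sp=rl&sig=example")
uri, token = split_sas_url(url)
print(uri)    # https://mystorageaccount.blob.core.windows.net/mycontainer/database1
print(token)  # sv=2019-02-02&ss=b&srt=sco&sp=rl&sig=example
```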
Select-AzSubscription -SubscriptionId <subscription ID>
You start the migration by starting LRS. You can start the service in either autocomplete or continuous mode.
-When you use autocomplete mode, the migration will complete automatically when the last of the specified backup files has been restored. This option requires the start command to specify the filename of the last backup file.
+When you use autocomplete mode, the migration completes automatically when the last of the specified backup files has been restored. This option requires the start command to specify the filename of the last backup file.
-When you use continuous mode, the service will continuously restore any new backup files that were added. The migration will complete on the manual cutover only.
+When you use continuous mode, the service continuously restores any new backup files that were added. The migration only completes during manual cutover.
> [!NOTE]
> When migrating multiple databases, LRS must be started separately for each database, pointing to the full URI path of the Azure Blob Storage container and the individual database folder.
When you use continuous mode, the service will continuously restore any new back
### Start LRS in autocomplete mode
-To start LRS in autocomplete mode, use the following PowerShell or Azure CLI commands. Specify the last backup file name by using the `-LastBackupName` parameter. Upon restoring the last of the specified backup files, the service will automatically initiate a cutover.
+To start LRS in autocomplete mode, use PowerShell or Azure CLI commands. Specify the last backup file name by using the `-LastBackupName` parameter. Upon restoring the last of the specified backup files, the service automatically initiates a cutover.
-Here's an example of starting LRS in autocomplete mode by using PowerShell:
+The following PowerShell example starts LRS in autocomplete mode:
```PowerShell
Start-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
Start-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
-LastBackupName "last_backup.bak"
```
-Here's an example of starting LRS in autocomplete mode by using the Azure CLI:
+The following Azure CLI example starts LRS in autocomplete mode:
```CLI
az sql midb log-replay start -g mygroup --mi myinstance -n mymanageddb -a --last-bn "backup.bak"
```
az sql midb log-replay start -g mygroup --mi myinstance -n mymanageddb -a --last
### Start LRS in continuous mode
-Here's an example of starting LRS in continuous mode by using PowerShell:
+The following PowerShell example starts LRS in continuous mode:
```PowerShell
Start-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
Start-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
-StorageContainerSasToken "sv=2019-02-02&ss=b&srt=sco&sp=rl&se=2023-12-02T00:09:14Z&st=2019-11-25T16:09:14Z&spr=https&sig=92kAe4QYmXaht%2Fgjocqwerqwer41s%3D"
```
-Here's an example of starting LRS in continuous mode by using the Azure CLI:
+The following Azure CLI example starts LRS in continuous mode:
```CLI
az sql midb log-replay start -g mygroup --mi myinstance -n mymanageddb
az sql midb log-replay start -g mygroup --mi myinstance -n mymanageddb
--storage-sas "sv=2019-02-02&ss=b&srt=sco&sp=rl&se=2023-12-02T00:09:14Z&st=2019-11-25T16:09:14Z&spr=https&sig=92kAe4QYmXaht%2Fgjocqwerqwer41s%3D"
```
-PowerShell and CLI clients to start LRS in continuous mode are synchronous. This means that clients will wait for the API response to report on success or failure to start the job.
+PowerShell and CLI clients that start LRS in continuous mode are synchronous. This means the client waits for the API response to report on success or failure to start the job.
During this wait, the command won't return control to the command prompt. If you're scripting the migration experience and need the LRS start command to give back control immediately to continue with the rest of the script, you can run PowerShell as a background job with the `-AsJob` switch. For example:
az sql midb log-replay start <required parameters> &
```

> [!IMPORTANT]
-> After you start LRS, any system-managed software patches are halted for 36 hours. After this window, the next automated software patch will automatically stop LRS. If that happens, you can't resume migration and need to restart it from scratch.
+> After you start LRS, any system-managed software patches are halted for 36 hours. After this window, the next automated software patch will automatically stop LRS. If that happens, you can't resume migration and need to restart it from the beginning.
-## Monitor the migration progress
+## Monitor migration progress
-To monitor the progress of the migration through PowerShell, use the following command:
+To monitor migration progress through PowerShell, use the following command:
```PowerShell
Get-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
Get-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
-Name "ManagedDatabaseName"
```
-To monitor the progress of the migration through the Azure CLI, use the following command:
+To monitor migration progress through the Azure CLI, use the following command:
```CLI
az sql midb log-replay show -g mygroup --mi myinstance -n mymanageddb
```
az sql midb log-replay show -g mygroup --mi myinstance -n mymanageddb
## Stop the migration
-If you need to stop the migration, use the following cmdlets. Stopping the migration will delete the restoring database on SQL Managed Instance, so resuming the migration won't be possible.
+If you need to stop the migration, use PowerShell or the Azure CLI. Stopping the migration deletes the restoring database on SQL Managed Instance, so resuming the migration won't be possible.
To stop the migration process through the Azure CLI, use the following command:

```CLI
az sql midb log-replay stop -g mygroup --mi myinstance -n mymanageddb
```
## Complete the migration (continuous mode)
-If you started LRS in continuous mode, after you've ensured that all backups have been restored, initiating the cutover will complete the migration. After the cutover, the database will be migrated and ready for read and write access.
+If you started LRS in continuous mode, after you've ensured that all backups have been restored, initiating the cutover will complete the migration. After the cutover, the database is migrated and ready for read and write access.
To complete the migration process in LRS continuous mode through PowerShell, use the following command:
To complete the migration process in LRS continuous mode through the Azure CLI,
```CLI
az sql midb log-replay complete -g mygroup --mi myinstance -n mymanageddb --last-backup-name "backup.bak"
```
-## Functional limitations
+## Limitations
+
+Consider the following limitations of LRS:
-Functional limitations of LRS are:
-- During the migration process, databases being migrated cannot be used for read-only access on Managed Instance.-- System-managed software patches are blocked for 36 hours once the LRS has been started. After this time window expires, the next software maintenance update will stop LRS. You will need to restart LRS from scratch.
+- During the migration process, databases being migrated cannot be used for read-only access on SQL Managed Instance.
+- System-managed software patches are blocked for 36 hours once the LRS has been started. After this time window expires, the next software maintenance update stops LRS. You will need to restart the LRS migration from the beginning.
- LRS requires databases on SQL Server to be backed up with the `CHECKSUM` option enabled.
-- The SAS token that LRS will use must be generated for the entire Azure Blob Storage container, and it must have Read and List permissions only. For example, if you grant Read, List and Write permissions, LRS will not be able to start because of the extra Write permission.
-- Using SAS tokens created with permissions set through defining a [stored access policy](/rest/api/storageservices/define-stored-access-policy.md) is not supported at this time. You will need to follow the instructions in this guide on manually specifying Read and List permissions for the SAS token.
+- The SAS token that LRS uses must be generated for the entire Azure Blob Storage container, and it must have **Read** and **List** permissions only. For example, if you grant **Read**, **List** and **Write** permissions, LRS will not be able to start because of the extra **Write** permission.
+- Using SAS tokens created with permissions set through defining a [stored access policy](/rest/api/storageservices/define-stored-access-policy.md) is not supported at this time. Follow the instructions in this article to manually specify **Read** and **List** permissions for the SAS token.
- Backup files containing % and $ characters in the file name cannot be consumed by LRS. Consider renaming such file names.
- Backup files for different databases must be placed in separate folders on Blob Storage in a flat-file structure. Nested folders inside individual database folders are not supported.
- LRS must be started separately for each database pointing to the full URI path containing an individual database folder.
- LRS can support up to 100 simultaneous restore processes per single managed instance.

> [!NOTE]
-> If you require database to be R/O accessible during the migration, and if you require migration window larger than 36 hours, please consider an alternative online migrations solution [link feature for Managed Instance](managed-instance-link-feature-overview.md) providing such capability.
+> If you require the database to be read-only accessible during the migration, or if you require a migration window larger than 36 hours, consider the [link feature for Managed Instance](managed-instance-link-feature-overview.md) as an alternative migration solution.
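Two of the limitations above (SAS permissions must be exactly Read and List; backup file names must not contain `%` or `$`) can be pre-checked locally before starting LRS. A hedged shell sketch, assuming the SAS query string carries its permissions in the standard `sp=` parameter:

```shell
# Hedged sketch: pre-check two LRS limitations locally before starting.
# 1) The SAS token's signed permissions (sp=...) must be exactly Read+List.
# 2) Backup file names must not contain % or $.
sas_perms_ok() {
  # extract the sp= value from a SAS query string, e.g. "sv=2020-02-10&sp=rl&..."
  local sp
  sp=$(printf '%s' "$1" | tr '&' '\n' | sed -n 's/^sp=//p')
  [ "$sp" = "rl" ] || [ "$sp" = "lr" ]   # accepting either order is an assumption
}

backup_name_ok() {
  case "$1" in
    *%*|*'$'*) return 1 ;;   # % or $ in the name: LRS can't consume it
    *)         return 0 ;;
  esac
}
```

For example, `sas_perms_ok "sv=2020-02-10&sp=rwl&sig=..."` fails because of the extra Write permission, matching the limitation described above.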
## Troubleshooting
-After you start LRS, use the monitoring cmdlet (`get-azsqlinstancedatabaselogreplay` or `az_sql_midb_log_replay_show`) to see the status of the operation. If LRS fails to start after some time and you get an error, check for the most common issues:
+After you start LRS, use the monitoring command (PowerShell: `Get-AzSqlInstanceDatabaseLogReplay` or Azure CLI: `az sql midb log-replay show`) to see the status of the operation. If LRS fails to start after some time and you get an error, check for the most common issues:
-- Does an existing database on SQL Managed Instance have the same name as the one you're trying to migrate from SQL Server? Resolve this conflict by renaming one of databases.
+- Does an existing database on SQL Managed Instance have the same name as the one you're trying to migrate from SQL Server? Resolve this conflict by renaming one of the databases.
- Was the database backup on SQL Server made via the `CHECKSUM` option?-- Are the permissions granted for the SAS token Read and List only?
+- Are the permissions granted for the SAS token **Read** and **List** _only_?
- Did you copy the SAS token for LRS after the question mark (`?`), with content starting like this: `sv=2020-02-10...`?
- Is the SAS token validity time applicable for the time window of starting and completing the migration? There might be mismatches due to the different time zones used for SQL Managed Instance and the SAS token. Try regenerating the SAS token and extending the token validity of the time window before and after the current date.
- Are the database name, resource group name, and managed instance name spelled correctly?
- If you started LRS in autocomplete mode, was a valid filename for the last backup file specified?

## Next steps
-- Learn more about [migrating to Managed Instance using the link feature](managed-instance-link-feature-overview.md).
+- Learn more about [migrating to SQL Managed Instance using the link feature](managed-instance-link-feature-overview.md).
- Learn more about [migrating from SQL Server to SQL Managed instance](../migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
- Learn more about [differences between SQL Server and SQL Managed Instance](transact-sql-tsql-differences-sql-server.md).
- Learn more about [best practices to cost and size workloads migrated to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs).
azure-sql Machine Learning Services Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/machine-learning-services-differences.md
Memory usage depends on how much is used in your R scripts and the number of par
If you receive one of these errors, you can resolve it by scaling your database to a higher service tier.
+If you encounter out of memory errors in Azure SQL Managed Instance, review [sys.dm_os_out_of_memory_events](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-out-of-memory-events).
+
+## SQL Managed Instance pools
+
+Machine Learning Services is currently not supported on [Azure SQL Managed Instance pools (preview)](instance-pools-overview.md).
azure-sql Winauth Azuread Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-overview.md
Last updated 03/01/2022
## Key capabilities and scenarios
-As customers modernize their infrastructure, application, and data tiers, they also modernize their identity management capabilities by shifting to Azure AD. Azure SQL offers multiple [Azure AD Authentication](../database/authentication-aad-overview.md) options:
+As customers modernize their infrastructure, application, and data tiers, they also modernize their identity management capabilities by shifting to Azure AD. Azure SQL offers multiple [Azure AD Authentication](/azure/azure-sql/database/authentication-aad-overview) options:
- 'Azure Active Directory - Password' offers authentication with Azure AD credentials
- 'Azure Active Directory - Universal with MFA' adds multi-factor authentication
azure-sql Winauth Azuread Run Trace Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-run-trace-managed-instance.md
To use Windows Authentication to connect to and run a trace against a managed in
- To create or modify extended events sessions, ensure that your account has the [server permission](/sql/t-sql/statements/grant-server-permissions-transact-sql) of ALTER ANY EVENT SESSION on the managed instance.
- To create or modify traces in SQL Server Profiler, ensure that your account has the [server permission](/sql/t-sql/statements/grant-server-permissions-transact-sql) of ALTER TRACE on the managed instance.
-If you have not yet enabled Windows authentication for Azure AD principals against your managed instance, you may run a trace against a managed instance using an [Azure AD Authentication](../database/authentication-aad-overview.md) option, including:
+If you have not yet enabled Windows authentication for Azure AD principals against your managed instance, you may run a trace against a managed instance using an [Azure AD Authentication](/azure/azure-sql/database/authentication-aad-overview) option, including:
- 'Azure Active Directory - Password'
- 'Azure Active Directory - Universal with MFA'
azure-sql Winauth Azuread Setup Incoming Trust Based Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-setup-incoming-trust-based-flow.md
To implement the incoming trust-based authentication flow, first ensure that the
|Prerequisite |Description |
|||
|Client must run Windows 10, Windows Server 2012, or a higher version of Windows. | |
-|Clients must be joined to AD. The domain must have a functional level of Windows Server 2012 or higher. | You can determine if the client is joined to AD by running the [dsregcmd command](../../active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
+|Clients must be joined to AD. The domain must have a functional level of Windows Server 2012 or higher. | You can determine if the client is joined to AD by running the [dsregcmd command](/azure/active-directory/devices/troubleshoot-device-dsregcmd): `dsregcmd.exe /status` |
|Azure AD Hybrid Authentication Management Module. | This PowerShell module provides management features for on-premises setup. |
|Azure tenant. | |
|Azure subscription under the same Azure AD tenant you plan to use for authentication.| |
Install-Module -Name AzureADHybridAuthenticationManagement -AllowClobber
- Enter the password for your Azure AD global administrator account.
- If your organization uses other modern authentication methods such as MFA (Azure Multi-Factor Authentication) or Smart Card, follow the instructions as requested for sign in.
- If this is the first time you're configuring Azure AD Kerberos settings, the [Get-AzureAdKerberosServer cmdlet](/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises#view-and-verify-the-azure-ad-kerberos-server) will display empty information, as in the following sample output:
+ If this is the first time you're configuring Azure AD Kerberos settings, the [Get-AzureAdKerberosServer cmdlet](/azure/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises#view-and-verify-the-azure-ad-kerberos-server) will display empty information, as in the following sample output:
```
ID :
Install-Module -Name AzureADHybridAuthenticationManagement -AllowClobber
1. Add the Trusted Domain Object.
- Run the [Set-AzureAdKerberosServer PowerShell cmdlet](/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises#create-a-kerberos-server-object) to add the Trusted Domain Object. Be sure to include `-SetupCloudTrust` parameter. If there is no Azure AD service account, this command will create a new Azure AD service account. If there is an Azure AD service account already, this command will only create the requested Trusted Domain object.
+ Run the [Set-AzureAdKerberosServer PowerShell cmdlet](/azure/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises#create-a-kerberos-server-object) to add the Trusted Domain Object. Be sure to include the `-SetupCloudTrust` parameter. If there is no Azure AD service account, this command will create a new Azure AD service account. If there is an Azure AD service account already, this command will only create the requested Trusted Domain object.
```powershell
Set-AzureAdKerberosServer -Domain $domain `
Install-Module -Name AzureADHybridAuthenticationManagement -AllowClobber
## Configure the Group Policy Object (GPO)
-1. Identify your [Azure AD tenant ID](../../active-directory/fundamentals/active-directory-how-to-find-tenant.md).
+1. Identify your [Azure AD tenant ID](/azure/active-directory/fundamentals/active-directory-how-to-find-tenant).
1. Deploy the following Group Policy setting to client machines using the incoming trust-based flow:
Learn more about implementing Windows Authentication for Azure AD principals on
- [Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)](winauth-azuread-kerberos-managed-instance.md)
- [What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance? (Preview)](winauth-azuread-overview.md)
-- [How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview)](winauth-azuread-setup.md)
+- [How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview)](winauth-azuread-setup.md)
azure-sql Winauth Azuread Setup Modern Interactive Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-setup-modern-interactive-flow.md
There is no AD to Azure AD set up required for enabling software running on Azur
|Prerequisite |Description |
|||
|Clients must run Windows 10 20H1, Windows Server 2022, or a higher version of Windows. | |
-|Clients must be joined to Azure AD or Hybrid Azure AD. | You can determine if this prerequisite is met by running the [dsregcmd command](../../active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
+|Clients must be joined to Azure AD or Hybrid Azure AD. | You can determine if this prerequisite is met by running the [dsregcmd command](/azure/active-directory/devices/troubleshoot-device-dsregcmd): `dsregcmd.exe /status` |
|Application must connect to the managed instance via an interactive session. | This supports applications such as SQL Server Management Studio (SSMS) and web applications, but won't work for applications that run as a service. |
|Azure AD tenant. | |
|Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD. |
Learn more about implementing Windows Authentication for Azure AD principals on
- [How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory and Kerberos (Preview)](winauth-implementation-aad-kerberos.md)
- [How to set up Windows Authentication for Azure AD with the incoming trust-based flow (Preview)](winauth-azuread-setup-incoming-trust-based-flow.md)
- [Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)](winauth-azuread-kerberos-managed-instance.md)
-- [Troubleshoot Windows Authentication for Azure AD principals on Azure SQL Managed Instance](winauth-azuread-troubleshoot.md)
+- [Troubleshoot Windows Authentication for Azure AD principals on Azure SQL Managed Instance](winauth-azuread-troubleshoot.md)
azure-sql Winauth Azuread Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-setup.md
Following this, a system administrator configures authentication flows. Two auth
### Synchronize AD with Azure AD
-Customers should first implement [Azure AD Connect](../../active-directory/hybrid/whatis-azure-ad-connect.md) to integrate on-premises directories with Azure AD.
+Customers should first implement [Azure AD Connect](/azure/active-directory/hybrid/whatis-azure-ad-connect) to integrate on-premises directories with Azure AD.
### Select which authentication flow(s) you will implement
The following prerequisites are required to implement the modern interactive aut
|Prerequisite |Description |
|||
|Clients must run Windows 10 20H1, Windows Server 2022, or a higher version of Windows. | |
-|Clients must be joined to Azure AD or Hybrid Azure AD. | You can determine if this prerequisite is met by running the [dsregcmd command](../../active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
+|Clients must be joined to Azure AD or Hybrid Azure AD. | You can determine if this prerequisite is met by running the [dsregcmd command](/azure/active-directory/devices/troubleshoot-device-dsregcmd): `dsregcmd.exe /status` |
|Application must connect to the managed instance via an interactive session. | This supports applications such as SQL Server Management Studio (SSMS) and web applications, but won't work for applications that run as a service. |
|Azure AD tenant. | |
|Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD. |
The following prerequisites are required to implement the incoming trust-based a
|Prerequisite |Description |
|||
|Client must run Windows 10, Windows Server 2012, or a higher version of Windows. | |
-|Clients must be joined to AD. The domain must have a functional level of Windows Server 2012 or higher. | You can determine if the client is joined to AD by running the [dsregcmd command](../../active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
+|Clients must be joined to AD. The domain must have a functional level of Windows Server 2012 or higher. | You can determine if the client is joined to AD by running the [dsregcmd command](/azure/active-directory/devices/troubleshoot-device-dsregcmd): `dsregcmd.exe /status` |
|Azure AD Hybrid Authentication Management Module. | This PowerShell module provides management features for on-premises setup. |
|Azure tenant. | |
|Azure subscription under the same Azure AD tenant you plan to use for authentication.| |
azure-video-analyzer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/language-support.md
This section describes language support in Video Analyzer for Media.
- Keywords
- Topics
- Labels
- - [NEW] Frame Patters (Only to Hebrew as of now).
--- Search in specific language
+ - Frame patterns (Only to Hebrew as of now)
- Language customization
-
-| **Language** | **Code** | **Transcription** | **LID* | **MLID** | **Translation** | **Customization** (Speech custom model) |
-|::|::|:--:|:-:|:-:|:-:|:-:|
-| Afrikaans | `af-ZA` | | | | ✔ | ✔ |
+
+| **Language** | **Code** | **Transcription** | **LID** | **MLID** | **Translation** | **Customization** (Language model) |
+|::|::|:--:|:-:|:-:|:-:|::|
+| Afrikaans | `af-ZA` | | | | ✔ | ✔ |
| Arabic (Iraq) | `ar-IQ` | ✔ | | | ✔ | ✔ |
| Arabic (Israel) | `ar-IL` | ✔ | | | ✔ | ✔ |
| Arabic (Jordan) | `ar-JO` | ✔ | | | ✔ | ✔ |
This section describes language support in Video Analyzer for Media.
The following table describes language support in the Video Analyzer for Media frontend experiences.
-* [portal](https://aka.ms/vi-portal-link) experience provided in the settings page
-* [widgets](video-indexer-embed-widgets.md), as provided in the language dropdown in the insights widget
+* portal - the portal column lists supported languages for the [web portal](https://aka.ms/vi-portal-link)
+* widgets - the [widgets](video-indexer-embed-widgets.md) column lists supported languages for translating the index file
-| **Language** | **Code** | **Web experience** | **Widgets experience** |
+| **Language** | **Code** | **Portal** | **Widgets** |
|::|::|:--:|:-:|
| Afrikaans | `af-ZA` | | ✔ |
| Arabic (Iraq) | `ar-IQ` | | |
azure-video-analyzer Video Indexer Embed Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-embed-widgets.md
Title: Embed Azure Video Analyzer for Media (formerly Video Indexer) widgets in your apps
description: Learn how to embed Azure Video Analyzer for Media (formerly Video Indexer) widgets in your apps.
Previously updated : 01/25/2021
Last updated : 03/29/2022
A Cognitive Insights widget includes all visual insights that were extracted fro
|Name|Definition|Description| ||||
-|`widgets` | Strings separated by comma | Allows you to control the insights that you want to render.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?widgets=people,keywords` renders only people and keywords UI insights.<br/>Available options: people, animatedCharacters ,keywords, labels, sentiments, emotions, topics, keyframes, transcript, ocr, speakers, scenes, and namedEntities.|
+|`widgets` | Strings separated by comma | Allows you to control the insights that you want to render.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?widgets=people,keywords` renders only people and keywords UI insights.<br/>Available options: people, animatedCharacters, keywords, labels, sentiments, emotions, topics, keyframes, transcript, ocr, speakers, scenes, and namedEntities.|
|`controls`|Strings separated by comma|Allows you to control the controls that you want to render.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?controls=search,download` renders only search option and download button.<br/>Available options: search, download, presets, language.|
|`language`|A short language code (language name)|Controls insights language.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?language=es-es` <br/>or `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?language=spanish`|
|`locale` | A short language code | Controls the language of the UI. The default value is `en`. <br/>Example: `locale=de`.|
|`tab` | The default selected tab | Controls the **Insights** tab that's rendered by default. <br/>Example: `tab=timeline` renders the insights with the **Timeline** tab selected.|
|`location` ||The `location` parameter must be included in the embedded links, see [how to get the name of your region](regions.md). If your account is in preview, the `trial` should be used for the location value. `trial` is the default value for the `location` parameter.|
+|`search`|A free text for search |Allows you to control the initial search term. Example - `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?search=vi` renders the insights filtered by the word "vi".|
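The query parameters above compose into a single embed URL. A hedged shell sketch of assembling such a URL from key=value pairs; the account and video IDs are placeholders, exactly as in the table's examples:

```shell
# Hedged sketch: assemble a Cognitive Insights widget URL from the
# query parameters described above. accountId/videoId are placeholders.
build_insights_url() {
  local account_id="$1" video_id="$2"; shift 2
  local url="https://www.videoindexer.ai/embed/insights/${account_id}/${video_id}/"
  local sep='?'
  local kv
  for kv in "$@"; do            # each extra arg is key=value, e.g. widgets=people,keywords
    url="${url}${sep}${kv}"
    sep='&'
  done
  printf '%s\n' "$url"
}
```

For example, `build_insights_url "<accountId>" "<videoId>" "widgets=people,keywords" "locale=de"` reproduces the shape of the example URLs in the table.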
### Player widget
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
az feature show --name AzureArcForAVS --namespace Microsoft.AVS
Use the following steps to guide you through the process to onboard in Arc for Azure VMware Solution (Preview).
-1. Sign into the jumpbox VM and extract the contents from the compressed file from the following [location](https://github.com/Azure/ArcOnAVS/releases). The extracted file contains the scripts to install the preview software.
+1. Sign into the jumpbox VM and extract the contents from the compressed file from the following [location](https://github.com/Azure/ArcOnAVS/releases/tag/v2.0.0). The extracted file contains the scripts to install the preview software.
1. Open the 'config_avs.json' file and populate all the variables. **Config JSON**
Use the following steps to guide you through the process to onboard in Arc for A
```json { "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "resourceGroup": "test-rg ",
+ "resourceGroup": "test-rg",
"applianceControlPlaneIpAddress": "10.14.10.10", "privateCloud": "test-pc", "isStatic": true,
This section will demonstrate how to use custom roles to manage granular access
#### Arc-enabled VMware vSphere custom roles
-We provide three custom roles to meet your Role-based access controls (RBACs). These roles can be applied to a whole subscription, resource group, or a single resource.
+Three custom roles are provided to meet your Role-based access control (RBAC) requirements. These roles can be applied to a whole subscription, resource group, or a single resource.
- Azure Arc VMware Administrator role - Azure Arc VMware Private Cloud User role
Use the **`Set Credential`** command to update the provider credentials for appl
1. Run the onboard command again. See step 3 in the [Process to onboard]() in Arc for Azure VMware Preview. > [!NOTE]
-> Customers need to ensure kubeconfig and SSH remain available as they will be required for log collection, appliance Upgrade, and credential rotation. These parameters will be required at the time of upgrade, log collection, and credential update scenarios.
+> Customers need to ensure kubeconfig and SSH keys remain available, as they will be required for log collection, appliance upgrade, and credential rotation scenarios.
**Parameters**
Use the following steps to perform a manual upgrade for Arc appliance virtual ma
1. Power off the VM. 1. Delete the VM. 1. Delete the download template corresponding to the VM.
+1. Delete the appliance ARM resource.
1. Get the previous script `Config_avs` file and add the following configuration item: 1. `"register":false` 1. Download the latest version of the Azure VMware Solution onboarding script.
backup Disk Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/disk-backup-support-matrix.md
Title: Azure Disk Backup support matrix
description: Provides a summary of support settings and limitations of Azure Disk Backup.
Previously updated : 12/01/2021
Last updated : 03/30/2022
Azure Disk Backup is available in all public cloud and Sovereign cloud regions.
- Currently, the Backup policy can't be modified, and the Snapshot Resource group that is assigned to a backup instance when you configure the backup of a disk can't be changed.
-- Currently, the Azure portal experience to configure the backup of disks is limited to a maximum of 20 disks from the same subscription.
-- Azure Disk Backup supports PowerShell. Currently, Azure CLI isn't supported.
- When configuring backup, the disk selected to be backed up and the snapshot resource group where the snapshots are to be stored must be part of the same subscription. You can't create an incremental snapshot for a particular disk outside of that disk's subscription. Learn more about [incremental snapshots](../virtual-machines/disks-incremental-snapshots.md#restrictions) for managed disk. For more information on how to choose a snapshot resource group, see [Configure backup](backup-managed-disks.md#configure-backup).
- For successful backup and restore operations, role assignments are required by the Backup vault's managed identity. Use only the role definitions provided in the documentation. Use of other roles like owner, contributor, and so on, isn't supported. You may face permission issues, if you start configuring backup or restore operations soon after assigning roles. This is because the role assignments take a few minutes to take effect.
backup Soft Delete Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/soft-delete-virtual-machines.md
Soft delete is currently supported in the West Central US, East Asia, Canada Cen
![Screenshot of Azure portal Backup Items](./media/backup-azure-security-feature-cloud/backup-stopped.png)
-2. In the following window, you'll be given a choice to delete or retain the backup data. If you choose **Delete backup data** and then **Stop backup**, the VM backup won't be permanently deleted. Rather, the backup data will be retained for 14 days in the soft deleted state. If **Delete backup data** is chosen, a delete email alert is sent to the configured email ID informing the user that 14 days remain of extended retention for backup data. Also, an email alert is sent on the 12th day informing that there are two more days left to resurrect the deleted data. The deletion is deferred until the 15th day, when permanent deletion will occur and a final email alert is sent informing about the permanent deletion of the data.
+2. In the following window, you'll be given a choice to delete or retain the backup data. If you choose **Retain backup data** and then **Stop backup**, the VM backup won't be permanently deleted. Rather, this stops all scheduled backup jobs and retains the backup data. In this scenario, the retention range set in the policy does not apply to the backup data, and billing continues as is until you remove the data manually. If **Delete backup data** is chosen, a delete email alert is sent to the configured email ID informing the user that 14 days remain of extended retention for backup data. Also, an email alert is sent on the 12th day informing that there are two more days left to resurrect the deleted data. The deletion is deferred until the 15th day, when permanent deletion will occur and a final email alert is sent informing about the permanent deletion of the data.
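The soft-delete timeline above (14 days of retention, a reminder on day 12, permanent deletion on day 15) can be computed from the deletion date. A hedged shell sketch, assuming GNU `date` and an illustrative deletion date:

```shell
# Hedged sketch of the soft-delete timeline described above (GNU date assumed):
# data is retained for 14 days, a reminder mail goes out on day 12, and
# permanent deletion happens on day 15.
deleted_on="2022-03-01"   # illustrative deletion date
reminder=$(date -u -d "$deleted_on + 12 days" +%Y-%m-%d)
purge=$(date -u -d "$deleted_on + 15 days" +%Y-%m-%d)
echo "reminder: $reminder, permanent deletion: $purge"
```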
![Screenshot of Azure portal, Stop Backup screen](./media/backup-azure-security-feature-cloud/delete-backup-data.png)
batch Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-bicep.md
+
+ Title: Azure Quickstart - Create a Batch account - Bicep file
+description: This quickstart shows how to create a Batch account by using a Bicep file.
+Last updated : 03/22/2022
+tags: azure-resource-manager, bicep
++
+# Quickstart: Create a Batch account by using a Bicep file
+
+Get started with Azure Batch by using a Bicep file to create a Batch account, including storage. You need a Batch account to create compute resources (pools of compute nodes) and Batch jobs. You can link an Azure Storage account with your Batch account, which is useful to deploy applications and store input and output data for most real-world workloads.
+
+After completing this quickstart, you'll understand the key concepts of the Batch service and be ready to try Batch with more realistic workloads at larger scale.
++
+## Prerequisites
+
+You must have an active Azure subscription.
+
+- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/batchaccount-with-storage/).
++
+Two Azure resources are defined in the Bicep file:
+
+- [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts): Creates a storage account.
+- [Microsoft.Batch/batchAccounts](/azure/templates/microsoft.batch/batchaccounts): Creates a Batch account.
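One practical wrinkle such a template has to handle is that storage account names must be globally unique and between 3 and 24 lowercase letters and digits; quickstart templates typically derive the name with Bicep's `uniqueString()`. A hedged shell sketch of the same sanitization idea (not the template's exact logic):

```shell
# Hedged sketch: derive a valid storage account name (3-24 chars,
# lowercase letters and digits only) from a prefix plus a hash suffix,
# similar in spirit to Bicep's uniqueString() but not its exact algorithm.
make_storage_name() {
  local prefix suffix
  prefix=$(printf '%s' "$1" | tr -cd 'a-z0-9')          # strip invalid chars
  suffix=$(printf '%s' "$2" | cksum | cut -d' ' -f1)    # deterministic numeric hash
  printf '%.24s\n' "${prefix}${suffix}"                 # clamp to 24 chars
}
```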
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
+
+
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+If you plan to continue on with more of our [tutorials](./tutorial-parallel-dotnet.md), you may want to leave these resources in place. When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and all of its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created a Batch account and a storage account using Bicep. To learn more about Azure Batch, continue to the Azure Batch tutorials.
+
+> [!div class="nextstepaction"]
+> [Azure Batch tutorials](./tutorial-parallel-dotnet.md)
cognitive-services Logo Detector Mobile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/logo-detector-mobile.md
az account set --subscription "<subscription name or subscription id>"
Then create your service principal. (This process might take some time to finish.)

```azurecli
-az ad sp create-for-rbac --name <servicePrincipalName> --role Contributor --password <yourSPStrongPassword>
+az ad sp create-for-rbac --name <servicePrincipalName> --role Contributor --scopes /subscriptions/<subscription_id> --password <yourSPStrongPassword>
```

Upon successful completion, you should see the following JSON output, including the necessary credentials.
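The output is similar to the following illustrative shape (the GUIDs and secret here are placeholders, not real values):

```json
{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "servicePrincipalName",
  "password": "yourSPStrongPassword",
  "tenant": "00000000-0000-0000-0000-000000000000"
}
```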
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
The following table lists the prebuilt neural voices supported in each language.
| English (United States) | `en-US` | Female | `en-US-CoraNeural` | General |
| English (United States) | `en-US` | Female | `en-US-ElizabethNeural` | General |
| English (United States) | `en-US` | Female | `en-US-JennyNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Female | `en-US-JennyMultilingualNeural` | General, multi-lingual capabilities available [using SSML](speech-synthesis-markup.md#adjust-speaking-languages) |
| English (United States) | `en-US` | Female | `en-US-MichelleNeural` | General |
| English (United States) | `en-US` | Female | `en-US-MonicaNeural` | General |
| English (United States) | `en-US` | Female | `en-US-SaraNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
The following table lists the prebuilt neural voices supported in each language.
> [!IMPORTANT]
> The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` will be redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021.
+> The `en-US-JennyMultilingualNeural` voice supports multiple languages. Check the [voices list API](rest-text-to-speech.md#get-a-list-of-voices) for a supported languages list.
+ > If you're using container Neural TTS, [download](speech-container-howto.md#get-the-container-image-with-docker-pull) and deploy the latest version. Starting from October 30, 2021, all requests with previous versions will be rejected.
+ > Two styles for `fr-FR-DeniseNeural` are now available for public preview: `cheerful` and `sad`, in three regions: East US, West Europe, and Southeast Asia.
The following neural voices are in public preview.
| English (United Kingdom) | `en-GB` | Male | `en-GB-NoahNeural` <sup>New</sup> | General |
| English (United Kingdom) | `en-GB` | Male | `en-GB-OliverNeural` <sup>New</sup> | General |
| English (United Kingdom) | `en-GB` | Male | `en-GB-ThomasNeural` <sup>New</sup> | General |
-| English (United States) | `en-US` | Female | `en-US-JennyMultilingualNeural` | General,multi-lingual capabilities available [using SSML](speech-synthesis-markup.md#create-an-ssml-document) |
| French (France) | `fr-FR` | Female | `fr-FR-BrigitteNeural` <sup>New</sup> | General |
| French (France) | `fr-FR` | Female | `fr-FR-CelesteNeural` <sup>New</sup> | General |
| French (France) | `fr-FR` | Female | `fr-FR-CoralieNeural` <sup>New</sup> | General |
The following neural voices are in public preview.
> [!IMPORTANT]
> Voices/Styles in public preview are only available in three service regions: East US, West Europe, and Southeast Asia.
-> The `en-US-JennyNeuralMultilingual` voice supports multiple languages. Check the [voices list API](rest-text-to-speech.md#get-a-list-of-voices) for a supported languages list.
-
> For more information about regional availability, see [regions](regions.md#prebuilt-neural-voices).
> To learn how you can configure and adjust neural voices, such as Speaking Styles, see [Speech Synthesis Markup Language](speech-synthesis-markup.md#adjust-speaking-styles).
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/regions.md
Available regions for speaker recognition are in the following table.
| Americas | West Central US | `westcentralus` |
| Americas | West US | `westus` |
| Americas | West US 2 | `westus2` |
+| Americas | West US 3 | `westus3` |
| Asia Pacific | East Asia | `eastasia` |
| Asia Pacific | Southeast Asia | `southeastasia` |
| Asia Pacific | Central India | `centralindia` |
+| Asia Pacific | Japan East | `japaneast` |
+| Asia Pacific | Japan West | `japanwest` |
+| Asia Pacific | Korea Central | `koreacentral` |
| Australia | Australia East | `australiaeast` |
+| Canada | Canada Central | `canadacentral` |
| Europe | North Europe | `northeurope` |
| Europe | West Europe | `westeurope` |
+| Europe | France Central | `francecentral` |
+| Europe | Germany West Central | `germanywestcentral` |
+| Europe | Norway East | `norwayeast` |
+| Europe | Switzerland West | `switzerlandwest` |
| Europe | UK South | `uksouth` |

### Keyword recognition
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The following table has descriptions of each supported role.
## Adjust speaking languages
-All neural voices are multilingual. By default, they are fluent in their own language and English. You can adjust the speaking language per voice at the sentence level and word level by using the `<lang xml:lang>` element.
+All neural voices are multilingual. By default, they are fluent in their own language and English without using the `<lang xml:lang>` element. For example, if the input text in English is "I'm excited to try text to speech" and you use the `es-ES-ElviraNeural` voice, the text is spoken in English with a Spanish accent. With most neural voices, setting a specific speaking language with the `<lang xml:lang>` element at the sentence or word level is currently not supported.
-The `en-US-JennyMultilingualNeural` neural voice is multilingual in 14 languages (For example: English, Spanish, and Chinese). The supported languages are provided in a table following the `<lang>` syntax and attribute definitions.
+You can adjust the speaking language for the `en-US-JennyMultilingualNeural` neural voice at the sentence level and word level by using the `<lang xml:lang>` element. This voice is multilingual in 14 languages (for example, English, Spanish, and Chinese). The supported languages are provided in a table following the `<lang>` syntax and attribute definitions.
**Syntax**
The `en-US-JennyMultilingualNeural` neural voice is multilingual in 14 languages
| Attribute | Description | Required or optional |
|--|--|--|
-| `lang` | Specifies the language that you want the voice to speak. Speaking different languages are voice specific. | Required if adjusting the speaking language for a neural voice. If you're using `lang xml:lang`, the locale must be provided. |
+| `lang` | Specifies the language that you want the neural voice to speak. | Required to adjust the speaking language for the neural voice. If you're using `lang xml:lang`, the locale must be provided. |
> [!NOTE]
> The `<lang xml:lang>` element is incompatible with the `prosody` and `break` elements. You can't adjust pause and prosody like pitch, contour, rate, or volume in this element.
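As an illustration of switching languages with this voice, a minimal SSML sketch follows (the sample sentences are illustrative; the document structure follows the standard `<speak>`/`<voice>` pattern):

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-JennyMultilingualNeural">
    <!-- Switch the spoken language per sentence with <lang xml:lang> -->
    <lang xml:lang="es-ES">Hola, bienvenido.</lang>
    <lang xml:lang="en-US">Hello, welcome.</lang>
  </voice>
</speak>
```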
communication-services Service Principal From Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/identity/service-principal-from-cli.md
To create a registered application from the Azure CLI, you need to be logged in
The following example uses the Azure CLI to create a new registered application.

```azurecli
-az ad sp create-for-rbac --name <application-name> --role Contributor
+az ad sp create-for-rbac --name <application-name> --role Contributor --scopes /subscriptions/<subscription-id>
```

The `az ad sp create-for-rbac` command will return a list of service principal properties in JSON format. Copy these values so that you can use them to create the necessary environment variables in the next step.
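For example, in a bash shell the returned values can be exported as environment variables. The variable names below assume the conventions read by the Azure Identity client libraries; the right-hand values are placeholders for the `appId`, `password`, and `tenant` fields of the JSON output:

```shell
# Illustrative: map the JSON fields returned by `az ad sp create-for-rbac`
# to the environment variables read by the Azure Identity libraries.
export AZURE_CLIENT_ID="<appId-from-output>"
export AZURE_CLIENT_SECRET="<password-from-output>"
export AZURE_TENANT_ID="<tenant-from-output>"
```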
confidential-ledger Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-cli.md
+
+ Title: Quickstart – Microsoft Azure confidential ledger with the Azure CLI
+description: Learn to use the Microsoft Azure confidential ledger through the Azure CLI
++ Last updated : 03/22/2022+++++
+# Quickstart: Create a confidential ledger using the Azure CLI
+
+Azure confidential ledger is a cloud service that provides a high integrity store for sensitive data logs and records that must be kept intact. In this quickstart, you will use the [Azure CLI](/cli/azure/) to create a confidential ledger, view and update its properties, and delete it.
+
+For more information on Azure confidential ledger, and for examples of what can be stored in a confidential ledger, see [About Microsoft Azure confidential ledger](overview.md).
+++
+## Create a resource group
++
+## Get your principal ID
+
+To create a confidential ledger, you'll need your Azure Active Directory principal ID (also called your object ID). To obtain your principal ID, use the Azure CLI [az ad signed-in-user](/cli/azure/ad/signed-in-user) command, and filter the results by `objectId`:
+
+```azurecli
+az ad signed-in-user show --query objectId
+```
+
+Your result will be in the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
+
+## Create a confidential ledger
+
+Use the Azure CLI [az confidentialledger create](/cli/azure/confidentialledger#az-confidentialledger-create) command to create a confidential ledger in your new resource group.
+
+```azurecli
+az confidentialledger create --name "myLedger" --resource-group "myResourceGroup" --location "EastUS" --ledger-type "Public" --aad-based-security-principals ledger-role-name="Administrator" principal-id="<your-principal-id>"
+```
+
+A successful operation will return the properties of the newly created ledger. Take note of the **ledgerUri**. In the example above, this URI is "https://myledger.confidential-ledger.azure.com".
+
+You'll need this URI to transact with the confidential ledger from the data plane.
+
+## View and update your confidential ledger properties
+
+You can view the properties associated with your newly created confidential ledger using the Azure CLI [az confidentialledger show](/cli/azure/confidentialledger#az-confidentialledger-show) command.
+
+```azurecli
+az confidentialledger show --name "myLedger" --resource-group "myResourceGroup"
+```
+
+The returned JSON object displays the ledger's properties, including your role:
+
+```json
+"ledgerRoleName": "Administrator",
+```
+
+To update the properties of a confidential ledger, use the Azure CLI [az confidentialledger update](/cli/azure/confidentialledger#az-confidentialledger-update) command. For instance, to update your ledger to change your role to "Reader", run:
+
+```azurecli
+az confidentialledger update --name "myLedger" --resource-group "myResourceGroup" --ledger-type "Public" --aad-based-security-principals ledger-role-name="Reader" principal-id="<your-principal-id>"
+```
+
+If you again run [az confidentialledger show](/cli/azure/confidentialledger#az-confidentialledger-show), you'll see that the role has been updated.
+
+```json
+"ledgerRoleName": "Reader",
+```
+
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you created a confidential ledger by using the Azure CLI. To learn more about Azure confidential ledger and how to integrate it with your applications, continue on to the articles below.
+
+- [Overview of Microsoft Azure confidential ledger](overview.md)
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
description: Learn about Azure Cosmos DB transactional (row-based) and analytica
Previously updated : 11/02/2021 Last updated : 03/24/2022 + # What is Azure Cosmos DB analytical store?
At the end of each execution of the automatic sync process, your transactional d
* Azure Synapse Analytics Spark pools can read all data, including the most recent updates, through Spark tables, which are updated automatically, or via the `spark.read` command, that always reads the last state of the data.
-* Azure Synapse Analytics SQL Serverless pools can read all data, including the most recent updates, through views, which are updated automatically, or via `SELECT` together with the` OPENROWSET` commands, which always reads the latest status of the data.
+* Azure Synapse Analytics SQL Serverless pools can read all data, including the most recent updates, through views, which are updated automatically, or via `SELECT` together with the `OPENROWSET` commands, which always reads the latest status of the data.
> [!NOTE]
-> Your transactional data will be synchronized to analytical store even if your transactional TTL is smaller than 2 minutes.
+> Your transactional data will be synchronized to analytical store even if your transactional time-to-live (TTL) is smaller than 2 minutes.
> [!NOTE]
> Please note that if you delete your container, analytical store is also deleted.
This is achieved by translating the leaf properties of the operational data into
In the full fidelity schema representation, each datatype of each property will generate a column for that datatype. Each of them count as one of the 1000 maximum properties.
-For example, letΓÇÖs take the following sample document in the transactional store:
+For example, let's take the following sample document in the transactional store:
```json
{
salary: 1000000
}
```
-The leaf property `streetNo` within the nested object `address` will be represented in the analytical store schema as a column `address.object.streetNo.int32`. The datatype is added as a suffix to the column. This way, if another document is added to the transactional store where the value of leaf property `streetNo` is "123" (note itΓÇÖs a string), the schema of the analytical store automatically evolves without altering the type of a previously written column. A new column added to the analytical store as `address.object.streetNo.string` where this value of "123" is stored.
+The leaf property `streetNo` within the nested object `address` will be represented in the analytical store schema as a column `address.object.streetNo.int32`. The datatype is added as a suffix to the column. This way, if another document is added to the transactional store where the value of leaf property `streetNo` is "123" (note it's a string), the schema of the analytical store automatically evolves without altering the type of a previously written column. A new column added to the analytical store as `address.object.streetNo.string` where this value of "123" is stored.
**Data type to suffix map**
Here's a map of all the property data types and their suffix representations in
|Original data type |Suffix |Example |
|---|---|---|
-| Double | ".float64" | 24.99|
-| Array | ".array" | ["a", "b"]|
-|Binary | ".binary" |0|
-|Boolean | ".bool" |True|
-|Int32 | ".int32" |123|
-|Int64 | ".int64" |255486129307|
-|NULL | ".NULL" | NULL|
-|String| ".string" | "ABC"|
-|Timestamp | ".timestamp" | Timestamp(0, 0)|
-|DateTime |".date" | ISODate("2020-08-21T07:43:07.375Z")|
-|ObjectId |".objectId" | ObjectId("5f3f7b59330ec25c132623a2")|
-|Document |".object" | {"a": "a"}|
+| Double | ".float64" | 24.99|
+| Array | ".array" | ["a", "b"]|
+|Binary | ".binary" |0|
+|Boolean | ".bool" |True|
+|Int32 | ".int32" |123|
+|Int64 | ".int64" |255486129307|
+|NULL | ".NULL" | NULL|
+|String| ".string" | "ABC"|
+|Timestamp | ".timestamp" | Timestamp(0, 0)|
+|DateTime |".date" | ISODate("2020-08-21T07:43:07.375Z")|
+|ObjectId |".objectId" | ObjectId("5f3f7b59330ec25c132623a2")|
+|Document |".object" | {"a": "a"}|
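The suffix naming above can be sketched in a few lines of Python. This is an illustrative sketch, not Cosmos DB code, and it covers only a subset of the listed types:

```python
# Map a subset of Python value types to the analytical store column suffixes.
SUFFIX = {float: ".float64", list: ".array", bool: ".bool",
          int: ".int32", type(None): ".NULL", str: ".string"}

def columns(doc, prefix=""):
    """Derive full fidelity column names for the leaf properties of a document."""
    cols = {}
    for key, value in doc.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            # Nested objects get an ".object" segment, then we recurse.
            cols.update(columns(value, f"{path}.object."))
        else:
            cols[path + SUFFIX[type(value)]] = value
    return cols

doc = {"streetNo": 15850, "address": {"streetNo": "123"}}
print(columns(doc))
# {'streetNo.int32': 15850, 'address.object.streetNo.string': '123'}
```

Note how the same leaf name `streetNo` yields two distinct columns, `streetNo.int32` and `address.object.streetNo.string`, so a type change never alters a previously written column.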
* Expect different behavior in regard to explicit `NULL` values:
  * Spark pools in Azure Synapse will read these values as `0` (zero).
The possible ATTL configurations are:
Some points to consider:
-* After the analytical store is enabled with an ATTL value, it can be updated to a different valid value later.
-* While TTTL can be set at the container or item level, ATTL can only be set at the container level currently.
-* You can achieve longer retention of your operational data in the analytical store by setting ATTL >= TTTL at the container level.
-* The analytical store can be made to mirror the transactional store by setting ATTL = TTTL.
-* If you have ATTL bigger than TTTL, at some point in time you'll have data that only exists in analytical store. This data is read only.
+* After the analytical store is enabled with an ATTL value, it can be updated to a different valid value later.
+* While TTTL can be set at the container or item level, ATTL can only be set at the container level currently.
+* You can achieve longer retention of your operational data in the analytical store by setting ATTL >= TTTL at the container level.
+* The analytical store can be made to mirror the transactional store by setting ATTL = TTTL.
+* If you have ATTL bigger than TTTL, at some point in time you'll have data that only exists in analytical store. This data is read only.
How to enable analytical store on a container:
If you have a globally distributed Azure Cosmos DB account, after you enable ana
## Partitioning
-Analytical store partitioning is completely independent of partitioning in the transactional store. By default, data in analytical store isn't partitioned. If your analytical queries have frequently used filters, you have the option to partition based on these fields for better query performance. To learn more, see the [introduction to custom partitioning](custom-partitioning-analytical-store.md) and [how to configure custom partitioning](configure-custom-partitioning.md) articles.
+Analytical store partitioning is completely independent of partitioning in the transactional store. By default, data in analytical store isn't partitioned. If your analytical queries have frequently used filters, you have the option to partition based on these fields for better query performance. To learn more, see [introduction to custom partitioning](custom-partitioning-analytical-store.md) and [how to configure custom partitioning](configure-custom-partitioning.md).
## Security
cosmos-db Apache Cassandra Consistency Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/apache-cassandra-consistency-mapping.md
Previously updated : 10/12/2020-- Last updated : 03/24/2022+ # Apache Cassandra and Azure Cosmos DB Cassandra API consistency levels [!INCLUDE[appliesto-cassandra-api](../includes/appliesto-cassandra-api.md)]
-Unlike Azure Cosmos DB, Apache Cassandra does not natively provide precisely defined consistency guarantees. Instead, Apache Cassandra provides a write consistency level and a read consistency level, to enable the high availability, consistency, and latency tradeoffs. When using Azure Cosmos DBΓÇÖs Cassandra API:
+Unlike Azure Cosmos DB, Apache Cassandra does not natively provide precisely defined consistency guarantees. Instead, Apache Cassandra provides a write consistency level and a read consistency level, to enable the high availability, consistency, and latency tradeoffs. When using Azure Cosmos DB's Cassandra API:
* The write consistency level of Apache Cassandra is mapped to the default consistency level configured on your Azure Cosmos account. Consistency for a write operation (CL) can't be changed on a per-request basis.
* Azure Cosmos DB will dynamically map the read consistency level specified by the Cassandra client driver to one of the Azure Cosmos DB consistency levels configured dynamically on a read request.
-## Multi-region writes vs Single-region writes
+## Multi-region writes vs single-region writes
Apache Cassandra database is a multi-master system by default, and does not provide an out-of-box option for single-region writes with multi-region replication for reads. However, Azure Cosmos DB provides turnkey ability to have either single region, or [multi-region](../how-to-multi-master.md) write configurations. One of the advantages of being able to choose a single region write configuration across multiple regions is the avoidance of cross-region conflict scenarios, and the option of maintaining strong consistency across multiple regions.
With single-region writes, you can maintain strong consistency, while still main
## Mapping consistency levels
-The Azure Cosmos DB platform provides a set of five well-defined, business use-case oriented consistency settings with respect to replication and the tradeoffs defined by the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem) and [PACLC theorem](https://en.wikipedia.org/wiki/PACELC_theorem). As this approach differs significantly from Apache Cassandra, we would recommend that you take time to review and understand Azure Cosmos DB consistency settings in our [documentation](../consistency-levels.md), or watch this short [video](https://www.youtube.com/watch?v=t1--kZjrG-o) guide to understanding consistency settings in the Azure Cosmos DB platform.
+The Azure Cosmos DB platform provides a set of five well-defined, business use-case oriented consistency settings with respect to replication and the tradeoffs defined by the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem) and [PACELC theorem](https://en.wikipedia.org/wiki/PACELC_theorem). As this approach differs significantly from Apache Cassandra, we would recommend that you take time to review and understand [Azure Cosmos DB consistency](../consistency-levels.md), or watch this short [video guide to understanding consistency settings](https://aka.ms/docs.consistency-levels) in the Azure Cosmos DB platform.
The following table illustrates the possible mappings between Apache Cassandra and Azure Cosmos DB consistency levels when using Cassandra API. This shows configurations for single region, multi-region reads with single-region writes, and multi-region writes.
cosmos-db Cassandra Adoption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-adoption.md
Previously updated : 11/30/2021- Last updated : 03/24/2022+ # From Apache Cassandra to Cassandra API
For information about offline migration, see [Migrate data from Cassandra to an
### Consistency
-Although the approaches to replication consistency in Apache Cassandra and Azure Cosmos DB are similar, it's important to understand how they are different. A [mapping document](apache-cassandra-consistency-mapping.md) compares Apache Cassandra and Azure Cosmos DB approaches to replication consistency. However, we highly recommend that you specifically review [Azure Cosmos DB consistency settings](../consistency-levels.md) or watch a brief [video guide to understanding consistency settings in the Azure Cosmos DB platform](https://www.youtube.com/watch?v=t1--kZjrG-o).
+Although the approaches to replication consistency in Apache Cassandra and Azure Cosmos DB are similar, it's important to understand how they are different. A [mapping document](apache-cassandra-consistency-mapping.md) compares Apache Cassandra and Azure Cosmos DB approaches to replication consistency. However, we highly recommend that you specifically review [Azure Cosmos DB consistency settings](../consistency-levels.md) or watch a brief [video guide to understanding consistency settings in the Azure Cosmos DB platform](https://aka.ms/docs.consistency-levels).
## Recommended client configurations
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/choose-api.md
# Choose an API in Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-📺 <B><a href="https://aka.ms/cosmos-db-video-explore-cosmos-db-apis" target="_blank">Video: Explore Azure Cosmos DB APIs</a></b>
- Azure Cosmos DB is a fully managed NoSQL database for modern app development. Azure Cosmos DB takes database administration off your hands with automatic management, updates, and patching. It also handles capacity management with cost-effective serverless and automatic scaling options that respond to application needs to match capacity with demand.
+>
+> [!VIDEO https://aka.ms/docs.essential-apis]
+
## APIs in Azure Cosmos DB

Azure Cosmos DB offers multiple database APIs, which include the Core (SQL) API, API for MongoDB, Cassandra API, Gremlin API, and Table API. By using these APIs, you can model real world data using documents, key-value, graph, and column-family data models. These APIs allow your applications to treat Azure Cosmos DB as if it were various other database technologies, without the overhead of management and scaling approaches. Using these APIs, Azure Cosmos DB helps you to use the ecosystems, tools, and skills you already have for data modeling and querying.
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md
The following options create a container with analytical store by using PowerShe
## <a id="update-analytical-ttl"></a> Enable analytical store on an existing container
-> [!NOTE]
-> Due to short-term capacity constraints, you need to register to enable Synapse Link on your existing containers. Depending on the pending requests, approving this request may take anywhere from a day to a week. Instructions to check the request status are provided below. This step is required once per subscription, and all new database accounts will also have this capability enabled. You need **contributor** or **administrator** Azure built-in roles on your subscription to be able to register your request to use the existing containers feature. If you have any issues or questions, please reach out to [cosmosdbsynapselink@microsoft.com](mailto:cosmosdbsynapselink@microsoft.com).
-
> [!NOTE]
> You can turn on analytical store on existing Azure Cosmos DB SQL API containers. This capability is generally available and can be used for production workloads.
- Please note the following details when enabling Synapse Link on your existing containers:
+Please note the following details when enabling Azure Synapse Link on your existing containers:
* The same performance isolation of the analytical store auto-sync process applies to the initial sync and there is no performance impact on your OLTP workload.
The following options create a container with analytical store by using PowerShe
### Azure portal
-1. Sign in to the [Azure portal](https://portal.azure.com/) or the [Azure Cosmos DB Explorer](https://cosmos.azure.com/).
-2. Navigate to your Azure Cosmos DB account and open the **Synapse Link"** tab in the **Integrations** section. In this tab you can:
-3. Click **Register** to request approval for your subscription. To see the status of request, please come back to this same portal pane.
-4. When approved, you will see your accountΓÇÖs containers list and you will be able to select those that will have analytical store enabled.
-5. Optionally, you can go to the **Power BI** tab, in the **Integrations** section, to create Power BI dashboards on your Synapse Link enabled containers.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Navigate to your Azure Cosmos DB account and open the **Azure Synapse Link** tab in the **Integrations** section of the left navigation. In this tab, you can enable Synapse Link on your database account and on your existing containers.
+3. After you click the blue **Enable Synapse Link on your container(s)** button, you can monitor the initial sync progress of your containers.
+4. Optionally, you can go to the **Power BI** tab, also in the **Integrations** section, to create Power BI dashboards on your Synapse Link enabled containers.
### Command-Line Tools
-Set the `analytical TTL` property to the required value to create an analytical store enabled container. For the list of allowed values, see the [analytical TTL supported values](analytical-store-introduction.md#analytical-ttl) article.
+Set the `analytical TTL` property to `-1` for infinite retention, or use a positive integer to specify the number of seconds that the data will be retained in analytical store. For more information, see the [analytical TTL supported values](analytical-store-introduction.md#analytical-ttl) article.
### Azure CLI
-Use the following steps to enable analytical store on an existing container by using Azure CLI. Set the `--analytical-storage-ttl` property to the required value in seconds or use `-1` for infinite retention. This setting can be changed later.
-
-* [Register for approval](/cli/azure/feature/registration) by using `az feature registration create --namespace Microsoft.DocumentDB --name AnalyticalStoreMigration`.
-* [Check the request status](/cli/azure/feature/registration) by using `az feature registration show --namespace Microsoft.DocumentDB --name AnalyticalStoreMigration`.
-* [Update Analytical ttl](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-update) to `-1` after the request approval.
+* Use [az cosmosdb sql container update](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-update) to update `--analytical-storage-ttl`.
* Check the migration status in the Azure portal.

### PowerShell
-Use the following steps to enable analytical store on an existing container by using PowerShell. Set the `-AnalyticalStorageTtl` property to the required value in seconds or use `-1` for infinite retention. This setting can be changed later.
-
-* [Register for approval](/powershell/module/az.resources/register-azproviderfeature) using `Register-AzProviderFeature -ProviderName "Microsoft.DocumentDB" -FeatureName "AnalyticalStoreMigration"`.
-* [Check the request status](/powershell/module/az.resources/get-azproviderfeature).
-* [Update Analytical ttl](/powershell/module/az.cosmosdb/update-azcosmosdbsqlcontainer) to `-1` after the request approval.
+* Use [Update Analytical ttl](/powershell/module/az.cosmosdb/update-azcosmosdbsqlcontainer) to update `-AnalyticalStorageTtl`.
* Check the migration status in the Azure portal.

## <a id="update-analytical-ttl"></a> Optional - Update the analytical store time to live

After the analytical store is enabled with a particular TTL value, you may want to update it to a different valid value. You can update the value by using the Azure portal, Azure CLI, PowerShell, or Cosmos DB SDKs. For information on the various Analytical TTL config options, see the [analytical TTL supported values](analytical-store-introduction.md#analytical-ttl) article.
cosmos-db Consistency Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/consistency-levels.md
# Consistency levels in Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-📺 <B><a href="https://aka.ms/cosmos-db-video-consistency-levels" target="_blank">Video: Explore Consistency levels</a></b>
- Distributed databases that rely on replication for high availability, low latency, or both, must make a fundamental tradeoff between the read consistency, availability, latency, and throughput as defined by the [PACELC theorem](https://en.wikipedia.org/wiki/PACELC_theorem). The linearizability of the strong consistency model is the gold standard of data programmability. But it adds a steep price from higher write latencies due to data having to replicate and commit across large distances. Strong consistency may also suffer from reduced availability (during failures) because data cannot replicate and commit in every region. Eventual consistency offers higher availability and better performance, but it's more difficult to program applications because data may not be completely consistent across all regions.
+>
+> [!VIDEO https://aka.ms/docs.consistency-levels]
+ Most commercially available distributed NoSQL databases available in the market today provide only strong and eventual consistency. Azure Cosmos DB offers five well-defined levels. From strongest to weakest, the levels are: - [*Strong*](#strong-consistency)
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
description: Azure Cosmos DB's point-in-time restore feature helps to recover da
Previously updated : 11/22/2021 Last updated : 03/24/2022
# Continuous backup with point-in-time restore in Azure Cosmos DB [!INCLUDE[appliesto-sql-mongodb-api](includes/appliesto-sql-mongodb-api.md)]
-📺 <B><a href="https://aka.ms/cosmos-db-video-continuous-backup-restore-intro" target="_blank">Video: Learn more about continuous backup and point-in-time restore</a></b>
- Azure Cosmos DB's point-in-time restore feature helps in multiple scenarios such as the following: * To recover from an accidental write or delete operation within a container. * To restore a deleted account, database, or a container. * To restore into any region (where backups existed) at the restore point in time.
+>
+> [!VIDEO https://aka.ms/docs.continuous-backup-restore]
+ Azure Cosmos DB performs data backup in the background without consuming any extra provisioned throughput (RUs) or affecting the performance and availability of your database. Continuous backups are taken in every region where the account exists. The following image shows how a container with a write region in West US and read regions in East US and East US 2 is backed up to a remote Azure Blob Storage account in the respective regions. By default, each region stores the backup in Locally Redundant storage accounts. If the region has [Availability zones](/azure/architecture/reliability/architect) enabled then the backup is stored in Zone-Redundant storage accounts. :::image type="content" source="./media/continuous-backup-restore-introduction/continuous-backup-restore-blob-storage.png" alt-text="Azure Cosmos DB data backup to the Azure Blob Storage." lightbox="./media/continuous-backup-restore-introduction/continuous-backup-restore-blob-storage.png" border="false":::
The following example is based on the price for an Azure Cosmos account deployed
$0.15/GB * Data size in GB.
-For example, if you have 1-TB of data in two regions then:
+For example, if you have 1 TB of data in two regions then:
* Backup storage cost is calculated as (1000 * 0.20 * 2) = $400 per month
cosmos-db How Pricing Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-pricing-works.md
Previously updated : 12/07/2021 Last updated : 03/24/2022 + # Pricing model in Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-📺 <B><a href="https://aka.ms/cosmos-db-video-overview-pricing-options" target="_blank">Video: Overview of Azure Cosmos DB pricing options</a></b>
- The pricing model of Azure Cosmos DB simplifies the cost management and planning. With Azure Cosmos DB, you pay for the operations you perform against the database and for the storage consumed by your data.
+>
+> [!VIDEO https://aka.ms/docs.how-pricing-works]
+ - **Database operations**: The way you get charged for your database operations depends on the type of Azure Cosmos account you are using. - **Provisioned Throughput**: [Provisioned throughput](set-throughput.md) (also called reserved throughput) provides high performance at any scale. You specify the throughput that you need in [Request Units](request-units.md) per second (RU/s), and Azure Cosmos DB dedicates the resources required to provide the configured throughput. You can [provision throughput on either a database or a container](set-throughput.md). Based on your workload needs, you can scale throughput up/down at any time or use [autoscale](provision-throughput-autoscale.md) (although there is a minimum throughput required on a database or a container to guarantee the SLAs). You are billed hourly for the maximum provisioned throughput for a given hour.
Azure Cosmos DB offers many options for developers to use it for free. These options
* **Try Azure Cosmos DB for free**: Azure Cosmos DB offers a time-limited experience by using try Azure Cosmos DB for free accounts. You can create an Azure Cosmos DB account, create database and collections and run a sample application by using the Quickstarts and tutorials. You can run the sample application without subscribing to an Azure account or using your credit card. [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) offers Azure Cosmos DB for one month, with the ability to renew your account any number of times.
-* **Azure Cosmos DB emulator**: Azure Cosmos DB emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. Emulator is offered at no cost and with high fidelity to the cloud service. Using Azure Cosmos DB emulator, you can develop and test your applications locally, without creating an Azure subscription or incurring any costs. You can develop your applications by using the emulator locally before going into production. After you are satisfied with the functionality of the application against the emulator, you can switch to using the Azure Cosmos DB account in the cloud and significantly save on cost. For more information about emulator, see [Using Azure Cosmos DB for development and testing](local-emulator.md) article for more details.
+* **Azure Cosmos DB emulator**: Azure Cosmos DB emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. The emulator is offered at no cost and with high fidelity to the cloud service. Using the Azure Cosmos DB emulator, you can develop and test your applications locally, without creating an Azure subscription or incurring any costs. You can develop your applications by using the emulator locally before going into production. After you are satisfied with the functionality of the application against the emulator, you can switch to using the Azure Cosmos DB account in the cloud and significantly save on cost. For more information, see [Using Azure Cosmos DB for development and testing](local-emulator.md).
## Pricing with reserved capacity
Reserved capacity provides a billing discount and does not affect the runtime st
You can learn more about optimizing the costs for your Azure Cosmos DB resources in the following articles: * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+ * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](./sql/estimate-ru-with-capacity-planner.md) * Learn about [Optimizing for development and testing](optimize-dev-test.md) * Learn more about [Understanding your Azure Cosmos DB bill](understand-your-bill.md)
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
Last updated 08/26/2021+ # Welcome to Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-📺 <B><a href="https://aka.ms/cosmos-db-video-what-is-cosmos-db" target="_blank">Video: What is Cosmos DB?</a></b>
- Today's applications are required to be highly responsive and always online. To achieve low latency and high availability, instances of these applications need to be deployed in datacenters that are close to their users. Applications need to respond in real time to large changes in usage at peak hours, store ever increasing volumes of data, and make this data available to users in milliseconds. Azure Cosmos DB is a fully managed NoSQL database for modern app development. Single-digit millisecond response times, and automatic and instant scalability, guarantee speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security. App development is faster and more productive thanks to turnkey multi region data distribution anywhere in the world, open source APIs and SDKs for popular languages. As a fully managed service, Azure Cosmos DB takes database administration off your hands with automatic management, updates and patching. It also handles capacity management with cost-effective serverless and automatic scaling options that respond to application needs to match capacity with demand.
+>
+> [!VIDEO https://aka.ms/docs.essential-introduction]
+ You can [Try Azure Cosmos DB for Free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments or use the [Azure Cosmos DB free tier](free-tier.md) to get an account with the first 1000 RU/s and 25 GB of storage free. > [!div class="nextstepaction"]
cosmos-db Mongodb Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/mongodb-indexing.md
# Manage indexing in Azure Cosmos DB API for MongoDB [!INCLUDE[appliesto-mongodb-api](../includes/appliesto-mongodb-api.md)]
-📺 <B><a href="https://aka.ms/cosmos-db-video-indexing-best-practices-mongodb-api" target="_blank">Video: Explore indexing best practices for the Azure Cosmos DB API for MongoDB</a></b>
- Azure Cosmos DB API for MongoDB takes advantage of the core index-management capabilities of Azure Cosmos DB. This article focuses on how to add indexes using Azure Cosmos DB API for MongoDB. Indexes are specialized data structures that make querying your data roughly an order of magnitude faster.
+>
+> [!VIDEO https://aka.ms/docs.mongo-indexing]
+ ## Indexing for MongoDB server version 3.6 and higher Azure Cosmos DB API for MongoDB server version 3.6+ automatically indexes the `_id` field and the shard key (only in sharded collections). The API automatically enforces the uniqueness of the `_id` field per shard key.
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partitioning-overview.md
Previously updated : 02/08/2022 Last updated : 03/24/2022 + # Partitioning and horizontal scaling in Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-📺 <B><a href="https://aka.ms/cosmos-db-video-data-partitioning-best-practices" target="_blank">Video: Data partitioning best practices</a></b>
- Azure Cosmos DB uses partitioning to scale individual containers in a database to meet the performance needs of your application. In partitioning, the items in a container are divided into distinct subsets called *logical partitions*. Logical partitions are formed based on the value of a *partition key* that is associated with each item in a container. All the items in a logical partition have the same partition key value. For example, a container holds items. Each item has a unique value for the `UserID` property. If `UserID` serves as the partition key for the items in the container and there are 1,000 unique `UserID` values, 1,000 logical partitions are created for the container. In addition to a partition key that determines the item's logical partition, each item in a container has an *item ID* (unique within a logical partition). Combining the partition key and the *item ID* creates the item's *index*, which uniquely identifies the item. [Choosing a partition key](#choose-partitionkey) is an important decision that will affect your application's performance.
+>
+> [!VIDEO https://aka.ms/docs.partitioning-overview]
+ This article explains the relationship between logical and physical partitions. It also discusses best practices for partitioning and gives an in-depth view at how horizontal scaling works in Azure Cosmos DB. It's not necessary to understand these internal details to select your partition key but we have covered them so you have clarity on how Azure Cosmos DB works. + ## Logical partitions A logical partition consists of a set of items that have the same partition key. For example, in a container that contains data about food nutrition, all items contain a `foodGroup` property. You can use `foodGroup` as the partition key for the container. Groups of items that have specific values for `foodGroup`, such as `Beef Products`, `Baked Products`, and `Sausages and Luncheon Meats`, form distinct logical partitions.
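The grouping of items into logical partitions can be sketched in plain Java; the `FoodItem` record and its values below are hypothetical stand-ins for container items:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LogicalPartitionSketch {
    // A minimal item: only the partition key property (foodGroup) matters here.
    record FoodItem(String id, String foodGroup) {}

    // Items that share a partition key value land in the same logical partition.
    static Map<String, List<FoodItem>> groupByPartitionKey(List<FoodItem> items) {
        return items.stream().collect(Collectors.groupingBy(FoodItem::foodGroup));
    }

    public static void main(String[] args) {
        List<FoodItem> items = List.of(
                new FoodItem("1", "Beef Products"),
                new FoodItem("2", "Baked Products"),
                new FoodItem("3", "Beef Products"));
        // Two distinct foodGroup values, so two logical partitions.
        System.out.println(groupByPartitionKey(items).size()); // prints 2
    }
}
```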
The following image shows how logical partitions are mapped to physical partitio
:::image type="content" source="./media/partitioning-overview/logical-partitions.png" alt-text="An image that demonstrates Azure Cosmos DB partitioning" border="false":::
-## <a id="choose-partitionkey"></a>Choosing a partition key
+## <a id="choose-partitionkey"></a>Choose a partition key
A partition key has two components: **partition key path** and the **partition key value**. For example, consider an item `{ "userId" : "Andrew", "worksFor": "Microsoft" }` if you choose "userId" as the partition key, the following are the two partition key components:
If your container could grow to more than a few physical partitions, then you sh
* Your container will store over 100 GB of data
-## Using item ID as the partition key
+## Use item ID as the partition key
If your container has a property that has a wide range of possible values, it is likely a great partition key choice. One possible example of such a property is the *item ID*. For small read-heavy containers or write-heavy containers of any size, the *item ID* is naturally a great choice for the partition key.
cosmos-db Request Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/request-units.md
Previously updated : 08/26/2021 Last updated : 03/24/2022 + # Request Units in Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-📺 <B><a href="https://aka.ms/cosmos-db-video-what-is-request-unit" target="_blank">Video: What is a Request Unit?</a></b>
- Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation. The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). A request unit is a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB.
-The cost to do a point read (i.e. fetching a single item by its ID and partition key value) for a 1 KB item is 1 Request Unit (or 1 RU). All other database operations are similarly assigned a cost using RUs. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs.
+The cost to do a point read (fetching a single item by its ID and partition key value) for a 1-KB item is 1 Request Unit (or 1 RU). All other database operations are similarly assigned a cost using RUs. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs.
+
+>
+> [!VIDEO https://aka.ms/docs.essential-request-units]
The following image shows the high-level idea of RUs:
The following image shows the high-level idea of RUs:
To manage and plan capacity, Azure Cosmos DB ensures that the number of RUs for a given database operation over a given dataset is deterministic. You can examine the response header to track the number of RUs that are consumed by any database operation. When you understand the [factors that affect RU charges](request-units.md#request-unit-considerations) and your application's throughput requirements, you can run your application cost effectively.
-The type of Azure Cosmos account you're using determines the way consumed RUs get charged. There are 3 modes in which you can create an account:
+The type of Azure Cosmos account you're using determines the way consumed RUs get charged. There are three modes in which you can create an account:
1. **Provisioned throughput mode**: In this mode, you provision the number of RUs for your application on a per-second basis in increments of 100 RUs per second. To scale the provisioned throughput for your application, you can increase or decrease the number of RUs at any time in increments or decrements of 100 RUs. You can make your changes either programmatically or by using the Azure portal. You are billed on an hourly basis for the number of RUs per second you have provisioned. To learn more, see the [Provisioned throughput](set-throughput.md) article.
If you provision *'R'* RUs on a Cosmos container (or database), Cosmos DB ensure
Assuming that a Cosmos container is configured with *'R'* RUs and there are *'N'* regions associated with the Cosmos account, the total RUs available globally on the container = *R* x *N*.
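This formula can be checked with a tiny worked example (the numbers are illustrative):

```java
public class GlobalThroughput {
    // Total RUs available globally = provisioned RUs (R) x number of regions (N).
    static long totalGlobalRus(long provisionedRus, int regions) {
        return provisionedRus * regions;
    }

    public static void main(String[] args) {
        // A container provisioned with 10,000 RU/s in a three-region account.
        System.out.println(totalGlobalRus(10_000, 3)); // prints 30000
    }
}
```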
-Your choice of [consistency model](consistency-levels.md) also affects the throughput. You can get approximately 2x read throughput for the more relaxed consistency levels (e.g., *session*, *consistent prefix* and *eventual* consistency) compared to stronger consistency levels (e.g., *bounded staleness* or *strong* consistency).
+Your choice of [consistency model](consistency-levels.md) also affects the throughput. You can get approximately 2x read throughput for the more relaxed consistency levels (*session*, *consistent prefix* and *eventual* consistency) compared to stronger consistency levels (*bounded staleness* or *strong* consistency).
## Next steps
cosmos-db Best Practice Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/best-practice-dotnet.md
Previously updated : 01/25/2022 Last updated : 03/24/2022 -++ # Best practices for Azure Cosmos DB .NET SDK
This article walks through the best practices for using the Azure Cosmos DB .NET
Watch the video below to learn more about using the .NET SDK from a Cosmos DB engineer! -
-> [!VIDEO https://www.youtube.com/embed/McZIQhZpvew?start=118]
>
+> [!VIDEO https://aka.ms/docs.dotnet-best-practices]
## Checklist |Checked | Topic |Details/Links |
cosmos-db Best Practice Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/best-practice-java.md
+
+ Title: Azure Cosmos DB best practices for Java SDK v4
+description: Learn the best practices for using the Azure Cosmos DB Java SDK v4
++++ Last updated : 03/28/2022++++
+# Best practices for Azure Cosmos DB Java SDK
+
+This article walks through the best practices for using the Azure Cosmos DB Java SDK. Following these practices will help improve your latency and availability, and boost overall performance.
+
+## Checklist
+|Checked | Topic |Details/Links |
+||||
+| <input type="checkbox"/> | SDK Version | Always use the [latest version](sql-api-sdk-java-v4.md) of the Cosmos DB SDK available for optimal performance. |
+| <input type="checkbox"/> | Singleton Client | Use a [single instance](/jav#sdk-usage) of the client for the lifetime of your application. |
+| <input type="checkbox"/> | Regions | Make sure to run your application in the same [Azure region](../distribute-data-globally.md) as your Azure Cosmos DB account whenever possible to reduce latency. Enable 2-4 regions and replicate your accounts in multiple regions for [best availability](../distribute-data-globally.md). For production workloads, enable [automatic failover](../how-to-manage-database-account.md#configure-multiple-write-regions). In the absence of this configuration, the account will experience loss of write availability for the entire duration of the write region outage, as manual failover won't succeed due to lack of region connectivity. To learn how to add multiple regions using the Java SDK, [visit here](tutorial-global-distribution-sql-api.md) |
+| <input type="checkbox"/> | Availability and Failovers | Set the [preferredRegions](/jav). |
+| <input type="checkbox"/> | CPU | You may run into connectivity/availability issues due to lack of resources on your client machine. Monitor your CPU utilization on nodes running the Azure Cosmos DB client, and scale up/out if usage is very high. |
+| <input type="checkbox"/> | Hosting | For most common cases of production workloads, we highly recommend using VMs with at least 4 cores and 8 GB of memory whenever possible. |
+| <input type="checkbox"/> | Connectivity Modes | Use [Direct mode](sql-sdk-connection-modes.md) for the best performance. For instructions on how to do this, see the [V4 SDK documentation](performance-tips-java-sdk-v4-sql.md#networking).|
+| <input type="checkbox"/> | Networking | If using a virtual machine to run your application, enable [Accelerated Networking](../../virtual-network/create-vm-accelerated-networking-powershell.md) on your VM to help with bottlenecks due to high traffic and reduce latency or CPU jitter. You might also want to consider using a higher end Virtual Machine where the max CPU usage is under 70%. |
+| <input type="checkbox"/> | Ephemeral Port Exhaustion | For sparse or sporadic connections, we recommend setting the [`idleEndpointTimeout`](/java/api/com.azure.cosmos.directconnectionconfig.setidleendpointtimeout?view=azure-java-stable#com-azure-cosmos-directconnectionconfig-setidleendpointtimeout(java-time-duration)&preserve-view=true) to a higher value. The `idleEndpointTimeout` property in `DirectConnectionConfig` controls how long unused connections are kept open before being closed, which reduces the number of unused connections. By default, idle connections to an endpoint are kept open for 1 hour. If there aren't requests to a specific endpoint for the idle endpoint timeout duration, the direct client closes all connections to that endpoint to save resources and I/O cost. |
+| <input type="checkbox"/> | Use Appropriate Scheduler (Avoid stealing Event loop IO Netty threads) | Avoid blocking calls: `.block()`. The entire call stack is asynchronous in order to benefit from [async API](https://projectreactor.io/docs/core/release/reference/#intro-reactive) patterns and use of appropriate [threading and schedulers](https://projectreactor.io/docs/core/release/reference/#schedulers) |
+| <input type="checkbox"/> | End-to-End Timeouts | To get end-to-end timeouts, you'll need to use [project reactor's timeout API](https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#timeout-java.time.Duration-). For more details on timeouts with Cosmos DB [visit here](troubleshoot-request-timeout-java-sdk-v4-sql.md) |
+| <input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK won't retry on writes for transient failures as writes aren't idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit here](conceptual-resilient-sdk-applications.md#should-my-application-retry-on-errors) |
+| <input type="checkbox"/> | Caching database/collection names | Retrieve the names of your databases and containers from configuration or cache them on start. Calls like `CosmosAsyncDatabase#read()` or `CosmosAsyncContainer#read()` will result in metadata calls to the service, which consume from the system-reserved RU limit. `createDatabaseIfNotExists()` should also only be used once for setting up the database. Overall, these operations should be performed infrequently. |
+| <input type="checkbox"/> | Parallel Queries | The Cosmos DB SDK supports [running queries in parallel](performance-tips-java-sdk-v4-sql.md#sdk-usage) for better latency and throughput on your queries. We recommend setting the `maxDegreeOfParallelism` property within `CosmosQueryRequestOptions` to the number of partitions you have. If you aren't aware of the number of partitions, set the value to `-1`, which will give you the best latency. Also, set the `maxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. |
+| <input type="checkbox"/> | Performance Testing Backoffs | When performing testing on your application, you should implement backoffs at [`RetryAfter`](performance-tips-java-sdk-v4-sql.md#sdk-usage) intervals. Respecting the backoff helps ensure that you'll spend a minimal amount of time waiting between retries. |
+| <input type="checkbox"/> | Indexing | The Azure Cosmos DB indexing policy also allows you to specify which document paths to include or exclude from indexing by using indexing paths `IndexingPolicy#getIncludedPaths()` and `IndexingPolicy#getExcludedPaths()`. Ensure that you exclude unused paths from indexing for faster writes. For a sample on how to create indexes using the SDK [visit here](performance-tips-java-sdk-v4-sql.md#indexing-policy) |
+| <input type="checkbox"/> | Document Size | The request charge of a specified operation correlates directly to the size of the document. We recommend reducing the size of your documents as operations on large documents cost more than operations on smaller documents. |
+| <input type="checkbox"/> | Enabling Query Metrics | For additional logging of your backend query executions, follow the instructions on how to capture SQL query metrics using the [Java SDK](troubleshoot-java-sdk-v4-sql.md#query-operations) |
+| <input type="checkbox"/> | SDK Logging | Use SDK logging to capture additional diagnostics information and troubleshoot latency issues. Log the [CosmosDiagnostics](/jav#capture-the-diagnostics) |
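Several of the rows above come together when the client is built. The following sketch uses the Java SDK v4 types the table references (`CosmosClientBuilder`, `DirectConnectionConfig`, `CosmosQueryRequestOptions`) and shows one possible configuration; the endpoint, key, and region values are placeholders:

```java
import com.azure.cosmos.CosmosAsyncClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.DirectConnectionConfig;
import com.azure.cosmos.models.CosmosQueryRequestOptions;

import java.time.Duration;
import java.util.List;

public class ClientConfigSketch {
    // Build once and reuse for the lifetime of the application (singleton client).
    public static CosmosAsyncClient buildClient() {
        // Keep idle connections open longer for sparse or sporadic traffic.
        DirectConnectionConfig directConfig = DirectConnectionConfig.getDefaultConfig()
                .setIdleEndpointTimeout(Duration.ofHours(2));

        return new CosmosClientBuilder()
                .endpoint("<your-account-endpoint>")             // placeholder
                .key("<your-account-key>")                       // placeholder
                .preferredRegions(List.of("West US", "East US")) // match your account's regions
                .directMode(directConfig)                        // Direct mode for best performance
                .buildAsyncClient();
    }

    // Options for a parallel cross-partition query.
    public static CosmosQueryRequestOptions parallelQueryOptions(int partitionCount) {
        return new CosmosQueryRequestOptions()
                .setMaxDegreeOfParallelism(partitionCount) // or -1 if the partition count is unknown
                .setMaxBufferedItemCount(100);             // cap pre-fetched results
    }
}
```

This is a sketch, not a definitive configuration; tune the timeout, regions, and query options to your workload.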
+
+## Best practices when using Gateway mode
+Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to set [maxConnectionPoolSize](/java/api/com.azure.cosmos.gatewayconnectionconfig.setmaxconnectionpoolsize?view=azure-java-stable#com-azure-cosmos-gatewayconnectionconfig-setmaxconnectionpoolsize(int)&preserve-view=true) to a different value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In the Java v4 SDK, the default value for `GatewayConnectionConfig#maxConnectionPoolSize` is 1000.
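As an illustration, a sketch of tuning the pool size with the `GatewayConnectionConfig` API referenced above; the endpoint, key, and pool size are placeholders:

```java
import com.azure.cosmos.CosmosAsyncClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.GatewayConnectionConfig;

public class GatewayConfigSketch {
    public static CosmosAsyncClient buildGatewayClient() {
        // Tune the per-client connection pool (default is 1000 in the Java v4 SDK).
        GatewayConnectionConfig gatewayConfig = new GatewayConnectionConfig()
                .setMaxConnectionPoolSize(500);

        return new CosmosClientBuilder()
                .endpoint("<your-account-endpoint>") // placeholder
                .key("<your-account-key>")           // placeholder
                .gatewayMode(gatewayConfig)
                .buildAsyncClient();
    }
}
```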
+
+## Best practices for write-heavy workloads
+For workloads that have heavy create payloads, set the `CosmosClientBuilder#contentResponseOnWriteEnabled()` request option to `false`. The service will no longer return the created or updated resource to the SDK. Normally, because the application has the object that's being created, it doesn't need the service to return it. Header values, such as the request charge, are still accessible. Disabling the content response can help improve performance, because the SDK no longer needs to allocate memory or serialize the body of the response. It also reduces the network bandwidth usage to further help performance.
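A sketch of setting this option on the client builder (endpoint and key are placeholders):

```java
import com.azure.cosmos.CosmosAsyncClient;
import com.azure.cosmos.CosmosClientBuilder;

public class WriteHeavyClientSketch {
    public static CosmosAsyncClient buildClient() {
        return new CosmosClientBuilder()
                .endpoint("<your-account-endpoint>") // placeholder
                .key("<your-account-key>")           // placeholder
                // Skip returning the resource body on writes; response headers
                // (for example, the request charge) remain available.
                .contentResponseOnWriteEnabled(false)
                .buildAsyncClient();
    }
}
```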
+
+## Next steps
+To learn more about performance tips for Java SDK, see [Performance tips for Azure Cosmos DB Java SDK v4](performance-tips-java-sdk-v4-sql.md).
+
+To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Change Feed Design Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/change-feed-design-patterns.md
Previously updated : 08/26/2021 Last updated : 03/24/2022+ # Change feed design patterns in Azure Cosmos DB [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-📺 <B><a href="https://aka.ms/cosmos-db-video-deploy-event-sourcing-solution-with-azure-functions-dotnet" target="_blank">Video: Deploy an event sourcing solution with Azure Functions + .NET in 7 minutes</a></b>
- The Azure Cosmos DB change feed enables efficient processing of large datasets with a high volume of writes. Change feed also offers an alternative to querying an entire dataset to identify what has changed. This document focuses on common change feed design patterns, design tradeoffs, and change feed limitations.
+>
+> [!VIDEO https://aka.ms/docs.change-feed-azure-functions]
++ Azure Cosmos DB is well-suited for IoT, gaming, retail, and operational logging applications. A common design pattern in these applications is to use changes to the data to trigger additional actions. Examples of additional actions include: * Triggering a notification or a call to an API, when an item is inserted or updated.
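One common way to react to such changes from Java is the change feed processor. The following is a minimal sketch, assuming existing monitored and lease containers; the host name is a placeholder:

```java
import com.azure.cosmos.ChangeFeedProcessor;
import com.azure.cosmos.ChangeFeedProcessorBuilder;
import com.azure.cosmos.CosmosAsyncContainer;

public class ChangeFeedSketch {
    // feedContainer: the monitored container; leaseContainer: stores checkpoints.
    public static ChangeFeedProcessor build(CosmosAsyncContainer feedContainer,
                                            CosmosAsyncContainer leaseContainer) {
        return new ChangeFeedProcessorBuilder()
                .hostName("host-1") // placeholder; unique per compute instance
                .feedContainer(feedContainer)
                .leaseContainer(leaseContainer)
                .handleChanges(changes -> {
                    // Trigger a notification, call an API, etc. for each changed item.
                    changes.forEach(item -> System.out.println(item));
                })
                .buildChangeFeedProcessor();
        // Call .start().subscribe() on the returned processor to begin listening.
    }
}
```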
For example, you might receive and store event data from devices, sensors, infra
:::image type="content" source="../media/change-feed/lambda.png" alt-text="Azure Cosmos DB-based lambda pipeline for ingestion and query" border="false":::
-In many cases, stream processing implementations first receive a high volume of incoming data into a temporary message queue such as Azure Event Hub or Apache Kafka. The change feed is a great alternative due to Azure Cosmos DB's ability to support a sustained high rate of data ingestion with guaranteed low read and write latency. The advantages of the Azure Cosmos DB change feed over a message queue include:
+In many cases, stream processing implementations first receive a high volume of incoming data into a temporary message queue such as Azure Event Hubs or Apache Kafka. The change feed is a great alternative due to Azure Cosmos DB's ability to support a sustained high rate of data ingestion with guaranteed low read and write latency. The advantages of the Azure Cosmos DB change feed over a message queue include:
### Data persistence
-Data written to Azure Cosmos DB will show up in the change feed and be retained until deleted. Message queues typically have a maximum retention period. For example, [Azure Event Hub](https://azure.microsoft.com/services/event-hubs/) offers a maximum data retention of 90 days.
+Data written to Azure Cosmos DB will show up in the change feed and be retained until deleted. Message queues typically have a maximum retention period. For example, [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/) offers a maximum data retention of 90 days.
### Querying ability
Here are some real-world change feed code examples that extend beyond the scope
* [Options to read change feed](read-change-feed.md) * [Using change feed with Azure Functions](change-feed-functions.md) * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Changefeed Ecommerce Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/changefeed-ecommerce-solution.md
ms.devlang: java Previously updated : 05/28/2019-+ Last updated : 03/24/2022+ # Use Azure Cosmos DB change feed to visualize real-time data analytics
The Azure Cosmos DB change feed is a mechanism to get a continuous and increment
This article describes how change feed can be used by an e-commerce company to understand user patterns, perform real-time data analysis and visualization. You will analyze events such as a user viewing an item, adding an item to their cart, or purchasing an item. When one of these events occurs, a new record is created, and the change feed logs that record. Change feed then triggers a series of steps resulting in visualization of metrics that analyze the company's performance and activity. Sample metrics that you can visualize include revenue, unique site visitors, most popular items, and average price of the items that are viewed versus added to a cart versus purchased. These sample metrics can help an e-commerce company evaluate its site popularity, develop its advertising and pricing strategies, and make decisions regarding what inventory to invest in.
-Interested in watching a video about the solution before getting started, see the following video:
-
-> [!VIDEO https://www.youtube.com/embed/AYOiMkvxlzo]
+>
+> [!VIDEO https://aka.ms/docs.ecomm-change-feed]
> ## Solution components
The following diagram represents the data flow and components involved in the so
3. **Change Feed:** The change feed will listen for changes to the Azure Cosmos container. Each time a new document is added into the collection (that is, when an event occurs, such as a user viewing an item, adding an item to their cart, or purchasing an item), the change feed will trigger an [Azure Function](../../azure-functions/functions-overview.md).
-4. **Azure Function:** The Azure Function processes the new data and sends it to an [Azure Event Hub](../../event-hubs/event-hubs-about.md).
+4. **Azure Function:** The Azure Function processes the new data and sends it to [Azure Event Hubs](../../event-hubs/event-hubs-about.md).
-5. **Event Hub:** The Azure Event Hub stores these events and sends them to [Azure Stream Analytics](../../stream-analytics/stream-analytics-introduction.md) to perform further analysis.
+5. **Azure event hub:** The event hub stores these events and sends them to [Azure Stream Analytics](../../stream-analytics/stream-analytics-introduction.md) to perform further analysis.
6. **Azure Stream Analytics:** Azure Stream Analytics defines queries to process the events and perform real-time data analysis. This data is then sent to [Microsoft Power BI](/power-bi/desktop-what-is-desktop).
The following diagram represents the data flow and components involved in the so
## Create Azure resources
-Create the Azure resources - Azure Cosmos DB, Storage account, Event Hub, Stream Analytics required by the solution. You will deploy these resources through an Azure Resource Manager template. Use the following steps to deploy these resources:
+Create the Azure resources: Azure Cosmos DB, storage account, event hub, and Stream Analytics required by the solution. You will deploy these resources through an Azure Resource Manager template. Use the following steps to deploy these resources:
1. Set the Windows PowerShell execution policy to **Unrestricted**. To do so, open **Windows PowerShell as an Administrator** and run the following commands:
Create the Azure resources - Azure Cosmos DB, Storage account, Event Hub, Stream
2. From the GitHub repository you downloaded in the previous step, navigate to the **Azure Resource Manager** folder, and open the file called **parameters.json** file.
-3. Provide values for cosmosdbaccount_name, eventhubnamespace_name, storageaccount_name, parameters as indicated in **parameters.json** file. You'll need to use the names that you give to each of your resources later.
+3. Provide values for the `cosmosdbaccount_name`, `eventhubnamespace_name`, and `storageaccount_name` parameters as indicated in the **parameters.json** file. You'll need to use the names that you give to each of your resources later.
4. From **Windows PowerShell**, navigate to the **Azure Resource Manager** folder and run the following command:
Create the Azure resources - Azure Cosmos DB, Storage account, Event Hub, Stream
You will now create a collection to hold e-commerce site events. When a user views an item, adds an item to their cart, or purchases an item, the collection will receive a record that includes the action ("viewed", "added", or "purchased"), the name of the item involved, the price of the item involved, and the ID number of the user cart involved.
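One such event record can be pictured as a small JSON document. A sketch of the shape described above, with a simple validator (the exact field names are assumptions for illustration; check the lab's data generator for the real schema):

```python
# Sketch of the e-commerce event record described above.
# Field names (Action, Item, Price, CartID) are illustrative
# assumptions, not the lab's guaranteed schema.

REQUIRED_FIELDS = {"Action", "Item", "Price", "CartID"}
VALID_ACTIONS = {"Viewed", "Added", "Purchased"}

def is_valid_event(doc):
    """An event must carry all required fields and a known action."""
    return REQUIRED_FIELDS <= doc.keys() and doc["Action"] in VALID_ACTIONS

event = {"CartID": 2486, "Action": "Viewed", "Item": "Socks", "Price": 3.75}
print(is_valid_event(event))  # True
```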
-1. Go to [Azure portal](https://portal.azure.com/) and find the **Azure Cosmos DB Account** that's created by the template deployment.
+1. Go to [Azure portal](https://portal.azure.com/) and find the **Azure Cosmos DB Account** that's been created by the template deployment.
2. From the **Data Explorer** pane, select **New Collection** and fill the form with the following details:
You will now create a collection to hold e-commerce site events. When a user vie
### Get the Azure Cosmos DB connection string
-1. Go to [Azure portal](https://portal.azure.com/) and find the **Azure Cosmos DB Account** that's created by the template deployment.
+1. Go to [Azure portal](https://portal.azure.com/) and find the **Azure Cosmos DB Account** that's created by the template deployment.
2. Navigate to the **Keys** pane, copy the PRIMARY CONNECTION STRING and copy it to a notepad or another document that you will have access to throughout the lab. You should label it **Cosmos DB Connection String**. You'll need to copy the string into your code later, so take a note and remember where you are storing it.
Azure Storage Accounts allow users to store data. In this lab, you will use a st
### Get the event hub namespace connection string
-An Azure Event Hub receives the event data, stores, processes, and forwards the data. In this lab, the Azure Event Hub will receive a document every time a new event occurs (i.e. an item is viewed by a user, added to a user's cart, or purchased by a user) and then will forward that document to Azure Stream Analytics.
+An Azure event hub receives, stores, processes, and forwards event data. In this lab, the event hub will receive a document every time a new event occurs (whenever an item is viewed by a user, added to a user's cart, or purchased by a user) and then will forward that document to Azure Stream Analytics.
-1. Return to your resource group and open the **Event Hub Namespace** that you created and named in the prelab.
+1. Return to your resource group and open the **Event Hubs Namespace** that you created and named in the prelab.
2. Select **Shared access policies** from the menu on the left-hand side.
An Azure Event Hub receives the event data, stores, processes, and forwards the
## Set up Azure Function to read the change feed
-When a new document is created, or a current document is modified in a Cosmos container, the change feed automatically adds that modified document to its history of collection changes. You will now build and run an Azure Function that processes the change feed. When a document is created or modified in the collection you created, the Azure Function will be triggered by the change feed. Then the Azure Function will send the modified document to the Event Hub.
+When a new document is created, or a current document is modified in a Cosmos container, the change feed automatically adds that modified document to its history of collection changes. You will now build and run an Azure Function that processes the change feed. When a document is created or modified in the collection you created, the Azure Function will be triggered by the change feed. Then the Azure Function will send the modified document to the event hub.
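The core of that function is simple: receive a batch of changed documents and forward each one to the event sink. A hedged sketch with an in-memory stand-in for the Event Hubs client (the lab's actual function is written in C# against the .NET SDK; `FakeEventHub` is purely illustrative):

```python
import json

# Sketch of the Azure Function's core logic: serialize each changed
# document and forward it to an event sink. FakeEventHub stands in
# for the real Event Hubs client, which this sketch does not use.

class FakeEventHub:
    def __init__(self):
        self.messages = []

    def send(self, payload):
        self.messages.append(payload)

def forward_changes(changes, hub):
    """Send every changed document to the hub; return messages sent."""
    for doc in changes:
        hub.send(json.dumps(doc, sort_keys=True))
    return len(hub.messages)

hub = FakeEventHub()
count = forward_changes([{"CartID": 1, "Action": "Purchased"}], hub)
print(count)  # 1
```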
1. Return to the repository that you cloned on your device.
When a new document is created, or a current document is modified in a Cosmos co
## Insert data into Azure Cosmos DB
-To see how change feed processes new actions on an e-commerce site, have to simulate data that represents users viewing items from the product catalog, adding those items to their carts, and purchasing the items in their carts. This data is arbitrary and for the purpose of replicating what data on an Ecommerce site would look like.
+To see how change feed processes new actions on an e-commerce site, you have to simulate data that represents users viewing items from the product catalog, adding those items to their carts, and purchasing the items in their carts. This data is arbitrary and used for replicating what data on an e-commerce site would look like.
1. Navigate back to the repository in File Explorer, and right-click **ChangeFeedFunction.sln** to open it again in a new Visual Studio window.
To see how change feed processes new actions on an e-commerce site, have to simu
## Set up a stream analytics job
-Azure Stream Analytics is a fully managed cloud service for real-time processing of streaming data. In this lab, you will use stream analytics to process new events from the Event Hub (i.e. when an item is viewed, added to a cart, or purchased), incorporate those events into real-time data analysis, and send them into Power BI for visualization.
+Azure Stream Analytics is a fully managed cloud service for real-time processing of streaming data. In this lab, you will use stream analytics to process new events from the event hub (when an item is viewed, added to a cart, or purchased), incorporate those events into real-time data analysis, and send them into Power BI for visualization.
1. From the [Azure portal](https://portal.azure.com/), navigate to your resource group, then to **streamjob1** (the stream analytics job that you created in the prelab).
Azure Stream Analytics is a fully managed cloud service for real-time processing
* In the **Input** alias field, enter **input**. * Select the option for **Select Event Hub from your subscriptions**. * Set the **Subscription** field to your subscription.
- * In the **Event Hub namespace** field, enter the name of your Event Hub Namespace that you created during the prelab.
+ * In the **Event Hubs namespace** field, enter the name of your event hub namespace that you created during the prelab.
* In the **Event Hub name** field, select the option for **Use existing** and choose **event-hub1** from the drop-down menu. * Leave **Event Hub policy** name field set to its default value. * Leave **Event serialization format** as **JSON**.
Azure Stream Analytics is a fully managed cloud service for real-time processing
``` 10. Then select **Save** in the upper left-hand corner.
-11. Now return to **streamjob1** and select the **Start** button at the top of the page. Azure Stream Analytics can take a few minutes to start up, but eventually you will see it change from "Starting" to "Running".
+11. Now return to **streamjob1** and select the **Start** button at the top of the page. Azure Stream Analytics can take a few minutes to start up, but eventually you will see it change from "Starting" to "Running".
## Connect to Power BI
Power BI is a suite of business analytics tools to analyze data and share insigh
GROUP BY TumblingWindow(second, 5) ```
- The TOP 5 query calculates the top 5 items, ranked by the number of times that they have been purchased. This metric can help e-commerce companies evaluate which items are most popular and can influence the company's advertising, pricing, and inventory decisions.
+ The TOP 5 query calculates the top five items, ranked by the number of times that they have been purchased. This metric can help e-commerce companies evaluate which items are most popular and can influence the company's advertising, pricing, and inventory decisions.
The REVENUE query calculates revenue by summing up the prices of all items purchased each minute. This metric can help e-commerce companies evaluate their financial performance and also understand what times of day contribute the most revenue. This can impact the overall company strategy, marketing in particular.
- The UNIQUE VISITORS query calculates how many unique visitors are on the site every 5 seconds by detecting unique cart ID's. This metric can help e-commerce companies evaluate their site activity and strategize how to acquire more customers.
+ The UNIQUE VISITORS query calculates how many unique visitors are on the site every five seconds by detecting unique cart IDs. This metric can help e-commerce companies evaluate their site activity and strategize how to acquire more customers.
8. You can now add tiles for these datasets as well.
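The logic of the three queries above can be prototyped locally to build intuition before wiring them into Stream Analytics. A sketch in plain Python over sample events (in the lab these run as Stream Analytics SQL queries over tumbling windows, not application code; field names are the same illustrative assumptions used throughout):

```python
from collections import Counter

# Local prototypes of the TOP 5, REVENUE, and UNIQUE VISITORS metrics.
# For intuition only; the real versions are Stream Analytics SQL
# queries using TumblingWindow, as shown in the lab.

events = [
    {"CartID": 1, "Action": "Purchased", "Item": "Socks", "Price": 3.75},
    {"CartID": 2, "Action": "Purchased", "Item": "Socks", "Price": 3.75},
    {"CartID": 2, "Action": "Purchased", "Item": "Hat", "Price": 9.00},
    {"CartID": 3, "Action": "Viewed", "Item": "Hat", "Price": 9.00},
]

def top_items(events, n=5):
    """Items ranked by purchase count (the TOP 5 query)."""
    purchases = Counter(e["Item"] for e in events if e["Action"] == "Purchased")
    return purchases.most_common(n)

def revenue(events):
    """Sum of purchased item prices (the REVENUE query)."""
    return sum(e["Price"] for e in events if e["Action"] == "Purchased")

def unique_visitors(events):
    """Count of distinct cart IDs (the UNIQUE VISITORS query)."""
    return len({e["CartID"] for e in events})

print(top_items(events))        # [('Socks', 2), ('Hat', 1)]
print(revenue(events))          # 16.5
print(unique_visitors(events))  # 3
```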
Power BI is a suite of business analytics tools to analyze data and share insigh
## Optional: Visualize with an E-commerce site
-You will now observe how you can use your new data analysis tool to connect with a real e-commerce site. In order to build the e-commerce site, use an Azure Cosmos database to store the list of product categories (Women's, Men's, Unisex), the product catalog, and a list of the most popular items.
+You will now observe how you can use your new data analysis tool to connect with a real e-commerce site. In order to build the e-commerce site, use an Azure Cosmos database to store the list of product categories, the product catalog, and a list of the most popular items.
1. Navigate back to the [Azure portal](https://portal.azure.com/), then to your **Cosmos DB account**, then to **Data Explorer**.
You will now observe how you can use your new data analysis tool to connect with
8. Navigate to and open the **Checkout folder** within **EcommerceWebApp.sln.** Then open the **Web.config** file within that folder.
-9. Within the `<appSettings>` block, add the **URI** and **PRIMARY KEY** that you saved earlier where indicated. Then add in your **database name** and **collection name** as indicated. (These names should be **changefeedlabdatabase** and **changefeedlabcollection** unless you chose to name yours differently.)
+9. Within the `<appSettings>` block, add the **URI** and **PRIMARY KEY** that you saved earlier, where indicated. Then, add in your **database name** and **collection name** as indicated. (These names should be **changefeedlabdatabase** and **changefeedlabcollection** unless you chose to name yours differently.)
-10. Press **Start** at the top of the page to run the program.
+10. Press **Start** at the top of the page to run the program.
-11. Now you can play around on the e-commerce site. When you view an item, add an item to your cart, change the quantity of an item in your cart, or purchase an item, these events will be passed through the Cosmos DB change feed to Event Hub, ASA, and then Power BI. We recommend continuing to run DataGenerator to generate significant web traffic data and provide a realistic set of "Hot Products" on the e-commerce site.
+11. Now you can play around on the e-commerce site. When you view an item, add an item to your cart, change the quantity of an item in your cart, or purchase an item, these events will be passed through the Cosmos DB change feed to the event hub, Stream Analytics, and then Power BI. We recommend continuing to run DataGenerator to generate significant web traffic data and provide a realistic set of "Hot Products" on the e-commerce site.
## Delete the resources
cosmos-db Modeling Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/modeling-data.md
Previously updated : 02/15/2022 Last updated : 03/24/2022 + # Data modeling in Azure Cosmos DB [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-📺 <B><a href="https://aka.ms/cosmos-db-video-data-modeling-best-practices" target="_blank">Video: Data modeling best practices</a></b>
-- While schema-free databases, like Azure Cosmos DB, make it super easy to store and query unstructured and semi-structured data, you should spend some time thinking about your data model to get the most of the service in terms of performance and scalability and lowest cost. How is data going to be stored? How is your application going to retrieve and query data? Is your application read-heavy, or write-heavy?
+>
+> [!VIDEO https://aka.ms/docs.modeling-data]
+ After reading this article, you'll be able to answer the following questions: * What is data modeling and why should I care?
After reading this article, you'll be able to answer the following questions:
* How do I express data relationships in a non-relational database? * When do I embed data and when do I link to data?
-## Embedding data
+## <a id="embedding-data"></a>Embed data
When you start modeling data in Azure Cosmos DB, try to treat your entities as **self-contained items** represented as JSON documents.
This could represent a person's stock portfolio. We have chosen to embed the sto
Stock *zaza* may be traded many hundreds of times in a single day and thousands of users could have *zaza* in their portfolio. With a data model like the above, we would have to update many thousands of portfolio documents many times every day, leading to a system that won't scale well.
-## Referencing data
+## <a id="referencing-data"></a>Reference data
Embedding data works nicely for many cases but there are scenarios when denormalizing your data will cause more problems than it's worth. So what do we do now?
Book documents:
In the above example, we have dropped the unbounded collection on the publisher document. Instead we just have a reference to the publisher on each book document.
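The referenced shape can be sketched as plain documents, with the join resolved in application code. A minimal sketch (property names loosely mirror the article's example and are not exact):

```python
# Sketch of the referenced model: each book points at its publisher
# by id, so the publisher document stays small no matter how many
# books exist. Property names are illustrative.

publisher = {"id": "mspress", "name": "Microsoft Press"}

books = [
    {"id": "b1", "title": "Azure Cosmos DB 101", "pub-id": "mspress"},
    {"id": "b2", "title": "Modeling NoSQL Data", "pub-id": "mspress"},
]

def books_for_publisher(pub_id, books):
    """Application-side join: find the books referencing a publisher."""
    return [b["title"] for b in books if b["pub-id"] == pub_id]

print(books_for_publisher("mspress", books))
# ['Azure Cosmos DB 101', 'Modeling NoSQL Data']
```

The trade-off is visible in the code: reads that need both sides now take an extra lookup, but writes to the publisher no longer touch an unbounded array.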
-### How do I model many:many relationships?
+### How do I model many to many relationships?
In a relational database *many:many* relationships are often modeled with join tables, which just join records from other tables together.
Now, if I had an author, I immediately know which books they've written, and con
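That two-way navigation works because each side embeds an array of ids pointing at the other, instead of a join table. A sketch of the shape (property names are illustrative assumptions):

```python
# Sketch of a many-to-many model without a join table: each author
# document holds the ids of its books, and each book document holds
# the ids of its authors. Names and ids are made up for illustration.

authors = {
    "a1": {"name": "Thomas", "books": ["b1", "b2"]},
    "a2": {"name": "Robin", "books": ["b2"]},
}
books = {
    "b1": {"title": "Book One", "authors": ["a1"]},
    "b2": {"title": "Book Two", "authors": ["a1", "a2"]},
}

def titles_by(author_id):
    """Given an author, resolve their book titles."""
    return [books[b]["title"] for b in authors[author_id]["books"]]

def writers_of(book_id):
    """Given a book, resolve its authors' names."""
    return [authors[a]["name"] for a in books[book_id]["authors"]]

print(titles_by("a1"))   # ['Book One', 'Book Two']
print(writers_of("b2"))  # ['Thomas', 'Robin']
```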
## Hybrid data models
-We've now looked at embedding (or denormalizing) and referencing (or normalizing) data which, each have their upsides and compromises as we've seen.
+We've now looked at embedding (or denormalizing) and referencing (or normalizing) data. Each approach has upsides and compromises.
-It doesn't always have to be either or, don't be scared to mix things up a little.
+It doesn't always have to be either-or; don't be scared to mix things up a little.
Based on your application's specific usage patterns and workloads there may be cases where mixing embedded and referenced data makes sense and could lead to simpler application logic with fewer server round trips while still maintaining a good level of performance.
In the example, there are **pre-calculated aggregates** values to save expensive
The ability to have a model with pre-calculated fields is made possible because Azure Cosmos DB supports **multi-document transactions**. Many NoSQL stores can't do transactions across documents and therefore advocate design decisions, such as "always embed everything", due to this limitation. With Azure Cosmos DB, you can use server-side triggers, or stored procedures, that insert books and update authors all within an ACID transaction. Now you don't **have** to embed everything into one document just to be sure that your data remains consistent.
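The "insert a book and update the author's pre-calculated count" pattern is an all-or-nothing operation. A local simulation of the rollback idea (in Azure Cosmos DB this would be a server-side stored procedure or trigger running in one ACID transaction; this Python sketch only illustrates the concept, and the property names are illustrative):

```python
import copy

# Local simulation of inserting a book and updating the author's
# pre-calculated countOfBooks as one all-or-nothing step. In Azure
# Cosmos DB this runs server-side in a stored procedure; here we
# fake the transaction with a snapshot-and-rollback.

def add_book_transaction(store, book):
    snapshot = copy.deepcopy(store)
    try:
        store["books"].append(book)
        store["authors"][book["author-id"]]["countOfBooks"] += 1
    except KeyError:
        store.clear()           # roll back to the snapshot
        store.update(snapshot)
        raise

store = {"authors": {"a1": {"countOfBooks": 0}}, "books": []}
add_book_transaction(store, {"id": "b1", "author-id": "a1"})
print(store["authors"]["a1"]["countOfBooks"])  # 1
```

If the author lookup fails, the already-appended book is rolled back too, so the aggregate never drifts out of sync with the data it summarizes.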
-## Distinguishing between different document types
+## Distinguish between different document types
In some scenarios, you may want to mix different document types in the same collection; this is usually the case when you want multiple, related documents to sit in the same [partition](../partitioning-overview.md). For example, you could put both books and book reviews in the same collection and partition it by `bookId`. In such a situation, you usually want to add a field that identifies each document's type in order to differentiate them.
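A sketch of such a type discriminator in practice, filtering mixed documents that share a partition (field names loosely follow the article's books-and-reviews example and are not exact):

```python
# Sketch: books and reviews share one collection (partitioned by
# bookId); a "type" field distinguishes the two document shapes.
# Field names are illustrative.

docs = [
    {"id": "b1", "type": "book", "bookId": "b1", "title": "Azure Cosmos DB 101"},
    {"id": "r1", "type": "review", "bookId": "b1", "stars": 5},
    {"id": "r2", "type": "review", "bookId": "b1", "stars": 4},
]

def of_type(docs, doc_type):
    """Filter mixed documents down to one type, as a query WHERE clause would."""
    return [d for d in docs if d["type"] == doc_type]

print(len(of_type(docs, "review")))  # 2
```

In the actual database the same filter would be a `WHERE c.type = "review"` clause in the query, scoped to a single partition.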
With Synapse Link, you can now directly connect to your Azure Cosmos DB containe
### Analytical store automatic schema inference
-While Azure Cosmos DB transactional store is considered row-oriented semi-structured data, analytical store has columnar and structured format. This conversion is automatically made for customers, using the schema inference rules described [here](../analytical-store-introduction.md). There are limits in the conversion process: maximum number of nested levels, maximum number of properties, unsupported data types, and more.
+While Azure Cosmos DB transactional store is considered row-oriented semi-structured data, analytical store has columnar and structured format. This conversion is automatically made for customers, using [the schema inference rules for the analytical store](../analytical-store-introduction.md). There are limits in the conversion process: maximum number of nested levels, maximum number of properties, unsupported data types, and more.
> [!NOTE] > In the context of analytical store, we consider the following structures as property:
Azure Synapse Link allows you to reduce costs from the following perspectives:
* Fewer queries running in your transactional database. * A PK optimized for data ingestion and point reads, reducing data footprint, hot partition scenarios, and partitions splits.
- * Data tiering since analytical ttl (attl) is independent from transactional ttl (tttl). You can keep your transactional data in transactional store for a few days, weeks, months, and keep the data in analytical store for years or for ever. Analytical store columnar format brings a natural data compression, from 50% up to 90%. And its cost per GB is ~10% of transactional store actual price. Please check the [analytical store overview](../analytical-store-introduction.md) to read about the current backup limitations.
+ * Data tiering since [analytical time-to-live (attl)](../analytical-store-introduction.md#analytical-ttl) is independent from transactional time-to-live (tttl). You can keep your transactional data in the transactional store for a few days, weeks, or months, and keep the data in the analytical store for years or forever. The analytical store's columnar format brings natural data compression, from 50% up to 90%, and its cost per GB is ~10% of the transactional store's actual price. For more information about the current backup limitations, see [analytical store overview](../analytical-store-introduction.md).
* No ETL jobs running in your environment, meaning that you don't need to provision RUs for them. ### Controlled redundancy
cosmos-db Powerbi Visualize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/powerbi-visualize.md
Title: Power BI tutorial for Azure Cosmos DB connector
-description: Use this Power BI tutorial to import JSON, create insightful reports, and visualize data using the Azure Cosmos DB and Power BI connector.
+ Title: Power BI tutorial for Azure Cosmos DB
+description: Use this Power BI tutorial to import JSON, create insightful reports, and visualize data using the Azure Cosmos DB.
Previously updated : 10/4/2021 Last updated : 03/28/2022
-# Visualize Azure Cosmos DB data by using the Power BI connector
+# Visualize Azure Cosmos DB data using Power BI
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-[Power BI](https://powerbi.microsoft.com/) is an online service where you can create and share dashboards and reports. Power BI Desktop is a report authoring tool that enables you to retrieve data from various data sources. Azure Cosmos DB is one of the data source that you can use with Power BI Desktop. You can connect Power BI Desktop to Azure Cosmos DB account with the Azure Cosmos DB connector for Power BI. After you import Azure Cosmos DB data to Power BI, you can transform it, create reports, and publish the reports to Power BI.
+This article describes the steps required to connect Azure Cosmos DB data to [Power BI](https://powerbi.microsoft.com/) Desktop.
-Another option is to create near real-time reports using [Azure Synapse Link for Azure Cosmos DB](../synapse-link.md). With Azure Synapse Link, you can connect Power BI to analyze your Azure Cosmos DB data, with no performance or cost impact to your transactional workloads, and no ETL pipelines. You can use either [DirectQuery](/power-bi/connect-dat).
+You can connect to Azure Cosmos DB from Power BI Desktop by using one of these options:
+* Use [Azure Synapse Link](../synapse-link.md) to build Power BI reports with no performance or cost impact to your transactional workloads, and no ETL pipelines.
+
+ You can either use [DirectQuery](/power-bi/connect-data/service-dataset-modes-understand#directquery-mode) or [import](/power-bi/connect-data/service-dataset-modes-understand#import-mode) mode. With [DirectQuery](/power-bi/connect-data/service-dataset-modes-understand#directquery-mode), you can build dashboards/reports using live data from your Azure Cosmos DB accounts, without importing or copying the data into Power BI.
-This article describes the steps required to connect Azure Cosmos DB account to Power BI Desktop. After connecting, you navigate to a collection, extract the data, transform the JSON data into tabular format, and publish a report to Power BI.
+* Connect Power BI Desktop to Azure Cosmos DB account with the Azure Cosmos DB connector for Power BI. This option is only available in import mode and will consume RUs allocated for your transactional workloads.
> [!NOTE]
-> The Power BI connector for Azure Cosmos DB connects to Power BI Desktop. Reports created in Power BI Desktop can be published to PowerBI.com. Direct extraction of Azure Cosmos DB data cannot be performed from PowerBI.com.
-
-> [!NOTE]
-> Connecting to Azure Cosmos DB with the Power BI connector is currently supported for Azure Cosmos DB SQL API and Gremlin API accounts only.
-
-> [!NOTE]
-> Creating near real-time Power BI dashboards using Azure Synapse Link is currently supported for Azure Cosmos DB SQL API and Azure Cosmos DB API for MongoDB.
+> Reports created in Power BI Desktop can be published to PowerBI.com. Direct extraction of Azure Cosmos DB data cannot be performed from PowerBI.com.
## Prerequisites Before following the instructions in this Power BI tutorial, ensure that you have access to the following resources: * [Download the latest version of Power BI Desktop](https://powerbi.microsoft.com/desktop).
-* Download the [sample volcano data](https://github.com/Azure-Samples/azure-cosmos-db-sample-data/blob/main/SampleData/VolcanoData.json) from GitHub.
-
-* [Create an Azure Cosmos database account](create-cosmosdb-resources-portal.md#create-an-azure-cosmos-db-account) and import the volcano data by using the [Azure Cosmos DB data migration tool](../import-data.md). When importing data, consider the following settings for the source and destinations in the data migration tool:
-
- * **Source parameters**
-
- * **Import from:** JSON file(s)
-
- * **Target parameters**
-
- * **Connection string:** `AccountEndpoint=<Your_account_endpoint>;AccountKey=<Your_primary_or_secondary_key>;Database= <Your_database_name>`
-
- * **Partition key:** /Country
-
- * **Collection Throughput:** 1000
+* [Create an Azure Cosmos database account](create-cosmosdb-resources-portal.md#create-an-azure-cosmos-db-account) and add data to your Cosmos containers.
To share your reports in PowerBI.com, you must have an account in PowerBI.com. To learn more about Power BI and Power BI Pro, see [https://powerbi.microsoft.com/pricing](https://powerbi.microsoft.com/pricing). ## Let's get started
-In this tutorial, let's imagine that you are a geologist studying volcanoes around the world. The volcano data is stored in an Azure Cosmos DB account and the JSON document format is as follows:
-
-```json
-{
- "Volcano Name": "Rainier",
- "Country": "United States",
- "Region": "US-Washington",
- "Location": {
- "type": "Point",
- "coordinates": [
- -121.758,
- 46.87
- ]
- },
- "Elevation": 4392,
- "Type": "Stratovolcano",
- "Status": "Dendrochronology",
- "Last Known Eruption": "Last known eruption from 1800-1899, inclusive"
-}
-```
-
-You will retrieve the volcano data from the Azure Cosmos DB account and visualize data in an interactive Power BI report.
+### Building BI reports using Azure Synapse Link
-1. Run Power BI Desktop.
+You can enable Azure Synapse Link on your existing Cosmos DB containers and build BI reports on this data, in just a few clicks, using the Azure Cosmos DB portal. Power BI connects to Cosmos DB using DirectQuery mode, allowing you to query your live Cosmos DB data without impacting your transactional workloads.
-2. You can **Get Data**, see **Recent Sources**, or **Open Other Reports** directly from the welcome screen. Select the "X" at the top right corner to close the screen. The **Report** view of Power BI Desktop is displayed.
-
- :::image type="content" source="./media/powerbi-visualize/power_bi_connector_pbireportview.png" alt-text="Power BI Desktop Report View - Power BI connector":::
+To build a Power BI report/dashboard:
-3. Select the **Home** ribbon, then click on **Get Data**. The **Get Data** window should appear.
+1. Sign into the [Azure portal](https://portal.azure.com/) and navigate to your Azure Cosmos DB account.
-4. Click on **Azure**, select **Azure Cosmos DB (Beta)**, and then click **Connect**.
+1. From the **Integrations** section, open the **Power BI** pane and select **Get started**.
- :::image type="content" source="./media/powerbi-visualize/power_bi_connector_pbigetdata.png" alt-text="Power BI Desktop Get Data - Power BI connector":::
+ > [!NOTE]
+ > Currently, this option is only available for SQL API accounts. You can create T-SQL views directly in Synapse serverless SQL pools and build BI dashboards for Azure Cosmos DB API for MongoDB. See ["Use Power BI and serverless Synapse SQL pool to analyze Azure Cosmos DB data with Synapse"](../synapse-link-power-bi.md) for more information.
-5. On the **Preview Connector** page, click **Continue**. The **Azure Cosmos DB** window appears.
+1. From the **Enable Azure Synapse Link** tab, you can enable Synapse Link on your account from the **Enable Azure Synapse link for this account** section. If Synapse Link is already enabled for your account, you will not see this tab. This step is a prerequisite for enabling Synapse Link on your containers.
-6. Specify the Azure Cosmos DB account endpoint URL you would like to retrieve the data from as shown below, and then click **OK**. To use your own account, you can retrieve the URL from the URI box in the **Keys** blade of the Azure portal. Optionally you can provide the database name, collection name or use the navigator to select the database and collection to identify where the data comes from.
-
-7. If you are connecting to this endpoint for the first time, you are prompted for the account key. For your own account, retrieve the key from the **Primary Key** box in the **Read-only Keys** blade of the Azure portal. Enter the appropriate key and then click **Connect**.
-
- We recommend that you use the read-only key when building reports. This prevents unnecessary exposure of the primary key to potential security risks. The read-only key is available from the **Keys** blade of the Azure portal.
-
-8. When the account is successfully connected, the **Navigator** pane appears. The **Navigator** shows a list of databases under the account.
-
-9. Click and expand on the database where the data for the report comes from, select ``volcanodb`` (your database name can be different).
+ > [!NOTE]
+ > Enabling Azure Synapse Link has cost implications. See the [Azure Synapse Link pricing](../synapse-link.md#pricing) section for more details.
-10. Now, select a collection that contains the data to retrieve, select **volcano1** (your collection name can be different).
-
- The Preview pane shows a list of **Record** items. A Document is represented as a **Record** type in Power BI. Similarly, a nested JSON block inside a document is also a **Record**.
-
- :::image type="content" source="./media/powerbi-visualize/power_bi_connector_pbinavigator.png" alt-text="Power BI tutorial for Azure Cosmos DB Power BI connector - Navigator window":::
+1. Next, from the **Enable Azure Synapse Link for your containers** section, choose the containers on which to enable Synapse Link.
-12. Click **Edit** to launch the Query Editor in a new window to transform the data.
+ * If you already enabled Synapse Link on some containers, the checkbox next to the container name is already selected. You may optionally deselect containers, based on the data you'd like to visualize in Power BI.
-## Flattening and transforming JSON documents
-1. Switch to the Power BI Query Editor window, where the **Document** column is displayed in the center pane.
+ * If Synapse Link isn't enabled, you can enable it on your existing containers.
- :::image type="content" source="./media/powerbi-visualize/power_bi_connector_pbiqueryeditor.png" alt-text="Power BI Desktop Query Editor":::
+ If enabling Synapse Link is in progress on any of the containers, data from those containers won't be included. Return to this tab later and import the data after the containers are enabled.
-1. Click on the expander at the right side of the **Document** column header. The context menu with a list of fields will appear. Select the fields you need for your report, for instance, Volcano Name, Country, Region, Location, Elevation, Type, Status and Last Known Eruption. Uncheck the **Use original column name as prefix** box, and then click **OK**.
-
- :::image type="content" source="./media/powerbi-visualize/power_bi_connector_pbiqueryeditorexpander.png" alt-text="Power BI tutorial for Azure Cosmos DB Power BI connector - Expand documents":::
+ :::image type="content" source="../media/integrated-power-bi-synapse-link/synapse-link-progress-existing-containers.png" alt-text="Progress of Synapse Link enabled on existing containers." border="true" lightbox="../media/integrated-power-bi-synapse-link/synapse-link-progress-existing-containers.png":::
-1. The center pane displays a preview of the result with the fields selected.
-
- :::image type="content" source="./media/powerbi-visualize/power_bi_connector_pbiresultflatten.png" alt-text="Power BI tutorial for Azure Cosmos DB Power BI connector - Flatten results":::
+1. Depending on the amount of data in your containers, enabling Synapse Link may take a while. To learn more, see the [enable Synapse Link on existing containers](../configure-synapse-link.md#update-analytical-ttl) article.
-1. In our example, the Location property is a GeoJSON block in a document. As you can see, Location is represented as a **Record** type in Power BI Desktop.
+ You can check the progress in the portal as shown in the following screen. **Containers are enabled with Synapse Link when the progress reaches 100%.**
-1. Click on the expander at the right side of the Document.Location column header. The context menu with type and coordinates fields appear. Let's select the coordinates field, ensure **Use original column name as prefix** is not selected, and click **OK**.
-
- :::image type="content" source="./media/powerbi-visualize/power_bi_connector_pbilocationrecord.png" alt-text="Power BI tutorial for Azure Cosmos DB Power BI connector - Location record":::
+ :::image type="content" source="../media/integrated-power-bi-synapse-link/synapse-link-existing-containers-registration-complete.png" alt-text="Synapse Link successfully enabled on the selected containers." border="true" lightbox="../media/integrated-power-bi-synapse-link/synapse-link-existing-containers-registration-complete.png":::
-1. The center pane now shows a coordinates column of **List** type. As shown at the beginning of the tutorial, the GeoJSON data in this tutorial is of Point type with Latitude and Longitude values recorded in the coordinates array.
-
- The coordinates[0] element represents Longitude while coordinates[1] represents Latitude.
-
- :::image type="content" source="./media/powerbi-visualize/power_bi_connector_pbiresultflattenlist.png" alt-text="Power BI tutorial for Azure Cosmos DB Power BI connector - Coordinates list":::
-
-1. To flatten the coordinates array, create a **Custom Column** called LatLong. Select the **Add Column** ribbon and click on **Custom Column**. The **Custom Column** window appears.
-
-1. Provide a name for the new column, e.g. LatLong.
-
-1. Next, specify the custom formula for the new column. For our example, we will concatenate the Latitude and Longitude values separated by a comma as shown below using the following formula: `Text.From([coordinates]{1})&","&Text.From([coordinates]{0})`. Click **OK**.
-
- For more information on Data Analysis Expressions (DAX) including DAX functions, please visit [DAX Basics in Power BI Desktop](/power-bi/desktop-quickstart-learn-dax-basics).
-
- :::image type="content" source="./media/powerbi-visualize/power_bi_connector_pbicustomlatlong.png" alt-text="Power BI tutorial for Azure Cosmos DB Power BI connector - Add Custom Column":::
-
-1. Now, the center pane shows the new LatLong columns populated with the values.
-
- :::image type="content" source="./media/powerbi-visualize/power_bi_connector_pbicolumnlatlong.png" alt-text="Power BI tutorial for Azure Cosmos DB Power BI connector - Custom LatLong column":::
-
- If you receive an Error in the new column, make sure that the applied steps under Query Settings match the following figure:
+1. From the **Select workspace** tab, choose the Azure Synapse Analytics workspace and select **Next**. This step automatically creates T-SQL views in Synapse Analytics for the containers selected earlier. For more information on the T-SQL views required to connect Azure Cosmos DB to Power BI, see the [Prepare views](../../synapse-analytics/sql/tutorial-connect-power-bi-desktop.md#3prepare-view) article.
+ > [!NOTE]
+ > Your Cosmos DB container properties will be represented as columns in T-SQL views, including deeply nested JSON data. This is a quick start for your BI dashboards. These views will be available in your Synapse workspace/database; you can also use these same views in your Synapse workspace for data exploration, data science, data engineering, and so on. Advanced scenarios may demand more complex views, or fine-tuning of these views for better performance. For more information, see the [best practices for Synapse Link when using Synapse serverless SQL pools](../../synapse-analytics/sql/resources-self-help-sql-on-demand.md#cosmos-db-performance-issues) article.
- :::image type="content" source="./media/powerbi-visualize/power-bi-applied-steps.png" alt-text="Applied steps should be Source, Navigation, Expanded Document, Expanded Document.Location, Added Custom":::
-
- If your steps are different, delete the extra steps and try adding the custom column again.
+1. You can either choose an existing workspace or create a new one. To select an existing workspace, provide the **Subscription**, **Workspace**, and **Database** details. The Azure portal uses your Azure AD credentials to automatically connect to your Synapse workspace and create T-SQL views. Make sure you have "Synapse administrator" permissions on this workspace.
-1. Click **Close and Apply** to save the data model.
+ :::image type="content" source="../media/integrated-power-bi-synapse-link/synapse-create-views.png" alt-text="Connect to Synapse Link workspace and create views." border="true" lightbox="../media/integrated-power-bi-synapse-link/synapse-create-views.png":::
- :::image type="content" source="./media/powerbi-visualize/power_bi_connector_pbicloseapply.png" alt-text="Power BI tutorial for Azure Cosmos DB Power BI connector - Close & Apply":::
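The custom-column formula shown earlier, `Text.From([coordinates]{1})&","&Text.From([coordinates]{0})`, swaps the GeoJSON element order (longitude first) into the `latitude,longitude` string that the Map visual's **Location** property expects. The same transformation can be mirrored in plain Java (a hypothetical helper, not part of the tutorial code):

```java
import java.util.List;

public class LatLong {
    // GeoJSON Point stores [longitude, latitude]; Power BI's Location
    // property wants "latitude,longitude", so swap the two elements.
    public static String toLatLong(List<Double> coordinates) {
        return coordinates.get(1) + "," + coordinates.get(0);
    }

    public static void main(String[] args) {
        // Approximate coordinates for Mount St. Helens: lon -122.18, lat 46.2
        System.out.println(toLatLong(List.of(-122.18, 46.2))); // prints 46.2,-122.18
    }
}
```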
+1. Next, select **Download .pbids** to download the Power BI data source file. Open the downloaded file; it contains the required connection information and opens Power BI Desktop.
-<a id="build-the-reports"></a>
-## Build the reports
+ :::image type="content" source="../media/integrated-power-bi-synapse-link/download-powerbi-desktop-files.png" alt-text="Download the Power BI desktop files in .pbids format." border="true" lightbox="../media/integrated-power-bi-synapse-link/download-powerbi-desktop-files.png":::
-Power BI Desktop Report view is where you can start creating reports to visualize data. You can create reports by dragging and dropping fields into the **Report** canvas.
+1. You can now connect to Azure Cosmos DB data from Power BI Desktop. A list of T-SQL views corresponding to the data in each container is displayed.
+ For example, the following screen shows vehicle fleet data. You can load this data for further analysis or transform it before loading.
-In the Report view, you should find:
+ :::image type="content" source="../media/integrated-power-bi-synapse-link/powerbi-desktop-select-view.png" alt-text="T-SQL views corresponding to the data in each container." border="true" lightbox="../media/integrated-power-bi-synapse-link/powerbi-desktop-select-view.png":::
-1. The **Fields** pane, this is where you can see a list of data models with fields you can use for your reports.
-1. The **Visualizations** pane. A report can contain a single or multiple visualizations. Pick the visual types fitting your needs from the **Visualizations** pane.
-1. The **Report** canvas, this is where you build the visuals for your report.
-1. The **Report** page. You can add multiple report pages in Power BI Desktop.
+1. You can now start building the report using Azure Cosmos DB's analytical data. Any changes to your data are reflected in the report as soon as the data is replicated to the analytical store, which typically happens within a couple of minutes.
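The note above explains that the auto-created T-SQL views represent container properties, including nested JSON, as flat columns. The idea behind that flattening can be illustrated with a small sketch (this is a toy illustration only; the actual views are generated by the portal, and the sample document fields are hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class Flatten {
    // Recursively flatten nested maps into dot-separated column names,
    // mimicking how a nested JSON property becomes a flat column.
    public static Map<String, Object> flatten(String prefix, Map<String, Object> doc) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : doc.entrySet()) {
            String key = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            if (e.getValue() instanceof Map<?, ?> m) {
                @SuppressWarnings("unchecked")
                Map<String, Object> nested = (Map<String, Object>) m;
                out.putAll(flatten(key, nested));
            } else {
                out.put(key, e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical vehicle-fleet document with a nested "location" object.
        Map<String, Object> doc = Map.of(
            "vehicleId", "V-42",
            "location", Map.of("city", "Seattle", "zip", "98101"));
        // Produces columns vehicleId, location.city, location.zip.
        System.out.println(flatten("", doc));
    }
}
```

A document property `location.city` thus becomes a queryable column, which is what makes the views immediately usable from Power BI.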
-The following shows the basic steps of creating a simple interactive Map view report.
-1. For our example, we will create a map view showing the location of each volcano. In the **Visualizations** pane, click on the Map visual type as highlighted in the screenshot above. You should see the Map visual type painted on the **Report** canvas. The **Visualization** pane should also display a set of properties related to the Map visual type.
-1. Now, drag and drop the LatLong field from the **Fields** pane to the **Location** property in **Visualizations** pane.
-1. Next, drag and drop the Volcano Name field to the **Legend** property.
-1. Then, drag and drop the Elevation field to the **Size** property.
-1. You should now see the Map visual showing a set of bubbles indicating the location of each volcano with the size of the bubble correlating to the elevation of the volcano.
-1. You now have created a basic report. You can further customize the report by adding more visualizations. In our case, we added a Volcano Type slicer to make the report interactive.
-
-1. On the File menu, click **Save** and save the file as PowerBITutorial.pbix.
+### Building BI reports using Power BI connector
+> [!NOTE]
+> Connecting to Azure Cosmos DB with the Power BI connector is currently supported for Azure Cosmos DB SQL API and Gremlin API accounts only.
-## Publish and share your report
-To share your report, you must have an account in PowerBI.com.
+1. Run Power BI Desktop.
-1. In the Power BI Desktop, click on the **Home** ribbon.
-1. Click **Publish**. You are prompted to enter the user name and password for your PowerBI.com account.
-1. Once the credential has been authenticated, the report is published to the destination you selected.
-1. Click **Open 'PowerBITutorial.pbix' in Power BI** to see and share your report on PowerBI.com.
+2. You can **Get Data**, see **Recent Sources**, or **Open Other Reports** directly from the welcome screen. Select the "X" at the top right corner to close the screen. The **Report** view of Power BI Desktop is displayed.
- :::image type="content" source="./media/powerbi-visualize/power_bi_connector_open_in_powerbi.png" alt-text="Publishing to Power BI Success! Open tutorial in Power BI":::
-
-## Create a dashboard in PowerBI.com
-Now that you have a report, let's share it on PowerBI.com.
-
-When you publish your report from Power BI Desktop to PowerBI.com, it generates a **Report** and a **Dataset** in your PowerBI.com tenant. For example, after you published a report called **PowerBITutorial** to PowerBI.com, you will see PowerBITutorial in both the **Reports** and **Datasets** sections on PowerBI.com.
-
- :::image type="content" source="./media/powerbi-visualize/powerbi-reports-datasets.png" alt-text="Screenshot of the new Report and Dataset in PowerBI.com":::
-
-To create a sharable dashboard, click the **Pin Live Page** button on your PowerBI.com report.
-
- :::image type="content" source="./media/powerbi-visualize/power-bi-pin-live-tile.png" alt-text="Screenshot of how to pin a report to PowerBI.com":::
-
-Then follow the instructions in [Pin a tile from a report](https://powerbi.microsoft.com/documentation/powerbi-service-pin-a-tile-to-a-dashboard-from-a-report/#pin-a-tile-from-a-report) to create a new dashboard.
+ :::image type="content" source="./media/powerbi-visualize/power_bi_connector_pbireportview.png" alt-text="Power BI Desktop Report View - Power BI connector":::
-You can also do ad hoc modifications to the report before creating a dashboard. However, it's recommended that you use Power BI Desktop to perform the modifications and republish the report to PowerBI.com.
+3. Select the **Home** ribbon, then click on **Get Data**. The **Get Data** window should appear.
-## Refresh data in PowerBI.com
-There are two ways to refresh data, ad hoc and scheduled.
+4. Click on **Azure**, select **Azure Cosmos DB (Beta)**, and then click **Connect**.
-For an ad hoc refresh, simply click **Refresh Now** to refresh the data.
+ :::image type="content" source="./media/powerbi-visualize/power_bi_connector_pbigetdata.png" alt-text="Power BI Desktop Get Data - Power BI connector":::
-For a scheduled refresh, do the following.
+5. On the **Preview Connector** page, click **Continue**. The **Azure Cosmos DB** window appears.
-1. Go to **Settings** and open the **Datasets** tab.
+6. Specify the Azure Cosmos DB account endpoint URL you would like to retrieve the data from, and then click **OK**. To use your own account, you can retrieve the URL from the URI box in the **Keys** blade of the Azure portal. Optionally, you can provide the database name and collection name, or use the navigator to select the database and collection to identify where the data comes from.
+
+7. If you are connecting to this endpoint for the first time, you are prompted for the account key. For your own account, retrieve the key from the **Primary Key** box in the **Read-only Keys** blade of the Azure portal. Enter the appropriate key and then click **Connect**.
+
+ We recommend that you use the read-only key when building reports. This prevents unnecessary exposure of the primary key to potential security risks. The read-only key is available from the **Keys** blade of the Azure portal.
+
+8. When the account is successfully connected, the **Navigator** pane appears. The **Navigator** shows a list of databases under the account.
-2. Click on **Scheduled Refresh** and set your schedule.
+9. Click to expand the database where the data for the report comes from, and then select a collection that contains the data to retrieve.
+
+ The Preview pane shows a list of **Record** items. A Document is represented as a **Record** type in Power BI. Similarly, a nested JSON block inside a document is also a **Record**.
+10. Power BI Desktop Report view is where you can start creating reports to visualize data. You can create reports by dragging and dropping fields into the **Report** canvas.
+11. There are two ways to refresh data: ad hoc and scheduled. For an ad hoc refresh, simply click **Refresh Now**. For a scheduled refresh, go to **Settings**, open the **Datasets** tab, click **Scheduled Refresh**, and set your schedule.
+
## Next steps * To learn more about Power BI, see [Get started with Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/). * To learn more about Azure Cosmos DB, see the [Azure Cosmos DB documentation landing page](https://azure.microsoft.com/documentation/services/cosmos-db/).
cosmos-db Serverless Computing Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/serverless-computing-database.md
Previously updated : 07/17/2019+ Last updated : 03/25/2022+ # Serverless database computing using Azure Cosmos DB and Azure Functions
The following diagram illustrates each of these three integrations:
The Azure Functions trigger, input binding, and output binding for Azure Cosmos DB can be used in the following combinations:
-* An Azure Functions trigger for Cosmos DB can be used with an output binding to a different Azure Cosmos container. After a function performs an action on an item in the change feed you can write it to another container (writing it to the same container it came from would effectively create a recursive loop). Or, you can use an Azure Functions trigger for Cosmos DB to effectively migrate all changed items from one container to a different container, with the use of an output binding.
+* An Azure Functions trigger for Cosmos DB can be used with an output binding to a different Azure Cosmos container. After a function performs an action on an item in the change feed, you can write it to another container (writing it to the same container it came from would effectively create a recursive loop). Or, you can use an Azure Functions trigger for Cosmos DB to effectively migrate all changed items from one container to a different container, with the use of an output binding.
* Input bindings and output bindings for Azure Cosmos DB can be used in the same Azure Function. This works well in cases when you want to find certain data with the input binding, modify it in the Azure Function, and then save it to the same container or a different container, after the modification. * An input binding to an Azure Cosmos container can be used in the same function as an Azure Functions trigger for Cosmos DB, and can be used with or without an output binding as well. You could use this combination to apply up-to-date currency exchange information (pulled in with an input binding to an exchange container) to the change feed of new orders in your shopping cart service. The updated shopping cart total, with the current currency conversion applied, can be written to a third container using an output binding.
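The currency scenario above reduces to: look up a rate (input binding), apply it to the changed order (trigger), and persist the result (output binding). Setting the binding plumbing aside, the core computation might look like the following sketch (illustrative only; the method and parameter names are hypothetical, not part of the Azure Functions API):

```java
import java.util.Map;

public class CartTotals {
    // Given exchange rates keyed by currency code (as an input binding to
    // an "exchange" container might supply) and a cart total in a source
    // currency, compute the converted total an output binding would write.
    public static double convertTotal(Map<String, Double> usdRates,
                                      String currency, double total) {
        Double rate = usdRates.get(currency);
        if (rate == null) {
            throw new IllegalArgumentException("No rate for " + currency);
        }
        // Round to cents to keep the persisted value tidy.
        return Math.round(total * rate * 100.0) / 100.0;
    }

    public static void main(String[] args) {
        Map<String, Double> rates = Map.of("EUR", 1.1, "GBP", 1.3);
        System.out.println(convertTotal(rates, "EUR", 50.0)); // prints 55.0
    }
}
```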
In gaming, when a new user is created you can search for other users who might k
1. Using an Azure Cosmos DB [graph database](../graph-introduction.md) to store all users, you can create a new function with an Azure Functions trigger for Cosmos DB. 2. Whenever a new user is inserted, the function is invoked, and then the result is stored using an **output binding**. 3. The function queries the graph database to search for all the users that are directly related to the new user and returns that dataset to the function.
-4. This data is then stored in an Azure Cosmos DB which can then be easily retrieved by any front-end application that shows the new user their connected friends.
+4. This data is then stored in Azure Cosmos DB, which can then be easily retrieved by any front-end application that shows the new user their connected friends.
### Retail use case - Multiple functions
Native integration between Azure Cosmos DB and Azure Functions is available in t
* In the Azure Cosmos DB portal, you can add an Azure Functions trigger for Cosmos DB to an existing Azure Function app in the same resource group. * In Visual Studio 2019, you can create the trigger using the [Azure Functions Tools](../../azure-functions/functions-develop-vs.md):
- >[!VIDEO https://www.youtube.com/embed/iprndNsUeeg]
+ >
+ >[!VIDEO https://aka.ms/docs.change-feed-azure-functions]
## Why choose Azure Functions integration for serverless computing?
cosmos-db Sql Api Java Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-java-application.md
ms.devlang: java Previously updated : 08/26/2021 Last updated : 03/29/2022 + # Tutorial: Build a Java web application using Azure Cosmos DB and the SQL API
Before you begin this application development tutorial, you must have the follow
* [Java Development Kit (JDK) 7+](/java/azure/jdk/). * [Eclipse IDE for Java EE Developers.](https://www.eclipse.org/downloads/packages/release/luna/sr1/eclipse-ide-java-ee-developers)
-* [An Azure Web Site with a Java runtime environment (e.g. Tomcat or Jetty) enabled.](../../app-service/quickstart-java.md)
+* [An Azure Web Site with a Java runtime environment (for example, Tomcat or Jetty) enabled.](../../app-service/quickstart-java.md)
If you're installing these tools for the first time, coreservlets.com provides a walk-through of the installation process in the quickstart section of their [Tutorial: Installing TomCat7 and Using it with Eclipse](https://www.youtube.com/watch?v=jOdCfW7-ybI&t=2s) article.
Let's start by creating an Azure Cosmos DB account. If you already have an accou
To create the JSP application:
-1. First, we'll start off by creating a Java project. Start Eclipse, then click **File**, click **New**, and then click **Dynamic Web Project**. If you don't see **Dynamic Web Project** listed as an available project, do the following: click **File**, click **New**, click **Project**…, expand **Web**, click **Dynamic Web Project**, and click **Next**.
+1. First, we'll start off by creating a Java project. Start Eclipse, then select **File**, select **New**, and then select **Dynamic Web Project**. If you don't see **Dynamic Web Project** listed as an available project, do the following: Select **File**, select **New**, select **Project**…, expand **Web**, select **Dynamic Web Project**, and select **Next**.
:::image type="content" source="./media/sql-api-java-application/image10.png" alt-text="JSP Java Application Development":::
+1. Enter a project name in the **Project name** box, and in the **Target Runtime** drop-down menu, optionally select a value (for example, Apache Tomcat v7.0), and then select **Finish**. Selecting a target runtime enables you to run your project locally through Eclipse.
+1. Enter a project name in the **Project name** box, and in the **Target Runtime** drop-down menu, optionally select a value (e.g. Apache Tomcat v7.0), and then select **Finish**. Selecting a target runtime enables you to run your project locally through Eclipse.
-1. In Eclipse, in the Project Explorer view, expand your project. Right-click **WebContent**, click **New**, and then click **JSP File**.
+1. In Eclipse, in the Project Explorer view, expand your project. Right-click **WebContent**, select **New**, and then select **JSP File**.
-1. In the **New JSP File** dialog box, name the file **index.jsp**. Keep the parent folder as **WebContent**, as shown in the following illustration, and then click **Next**.
+1. In the **New JSP File** dialog box, name the file **index.jsp**. Keep the parent folder as **WebContent**, as shown in the following illustration, and then select **Next**.
:::image type="content" source="./media/sql-api-java-application/image11.png" alt-text="Make a New JSP File - Java Web Application Tutorial":::
-1. In the **Select JSP Template** dialog box, for the purpose of this tutorial select **New JSP File (html)**, and then click **Finish**.
+1. In the **Select JSP Template** dialog box, for the purpose of this tutorial select **New JSP File (html)**, and then select **Finish**.
1. When the *index.jsp* file opens in Eclipse, add text to display **Hello World!** within the existing `<body>` element. The updated `<body>` content should look like the following code:
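The code block referenced here is unchanged by this commit and therefore omitted from the diff. A minimal `<body>` matching the description would be:

```html
<body>
    Hello World!
</body>
```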
To create the JSP application:
1. Save the *index.jsp* file.
-1. If you set a target runtime in step 2, you can click **Project** and then **Run** to run your JSP application locally:
+1. If you set a target runtime in step 2, you can select **Project** and then **Run** to run your JSP application locally:
:::image type="content" source="./media/sql-api-java-application/image12.png" alt-text="Hello World ΓÇô Java Application Tutorial":::
To create the JSP application:
The easiest way to pull in the SQL Java SDK and its dependencies is through [Apache Maven](https://maven.apache.org/). To do this, you need to convert your project to a Maven project by using the following steps:
-1. Right-click your project in the Project Explorer, click **Configure**, click **Convert to Maven Project**.
+1. Right-click your project in the Project Explorer, select **Configure**, and then select **Convert to Maven Project**.
-1. In the **Create new POM** window, accept the defaults, and click **Finish**.
+1. In the **Create new POM** window, accept the defaults, and select **Finish**.
1. In **Project Explorer**, open the pom.xml file.
-1. On the **Dependencies** tab, in the **Dependencies** pane, click **Add**.
+1. On the **Dependencies** tab, in the **Dependencies** pane, select **Add**.
1. In the **Select Dependency** window, do the following:
The easiest way to pull in the SQL Java SDK and its dependencies is through [Apa
</dependency> ```
-1. Click **OK** and Maven will install the SQL Java SDK or save the pom.xml file.
+1. Select **OK**, and Maven will install the SQL Java SDK when you save the pom.xml file.
## <a id="UseService"></a>Use the Azure Cosmos DB service in your Java application
First, let's define a model within a new file *TodoItem.java*. The `TodoItem` cl
Create a Data Access Object (DAO) to abstract persisting the ToDo items to Azure Cosmos DB. In order to save ToDo items to a collection, the client needs to know which database and collection to persist to (as referenced by self-links). In general, it is best to cache the database and collection when possible to avoid additional round-trips to the database.
-1. To invoke the Azure Cosmos DB service, you must instantiate a new `cosmosClient` object. In general, it is best to reuse the `cosmosClient` object rather than constructing a new client for each subsequent request. You can reuse the client by defining it within the `cosmosClientFactory` class. Update the HOST and MASTER_KEY values that you saved in [step 1](#CreateDB). Replace the HOST variable with with your URI and replace the MASTER_KEY with your PRIMARY KEY. Use the following code to create the `CosmosClientFactory` class within the *CosmosClientFactory.java* file:
+1. To invoke the Azure Cosmos DB service, you must instantiate a new `cosmosClient` object. In general, it is best to reuse the `cosmosClient` object rather than constructing a new client for each subsequent request. You can reuse the client by defining it within the `cosmosClientFactory` class. Update the HOST and MASTER_KEY values that you saved in [step 1](#CreateDB). Replace the HOST variable with your URI and replace the MASTER_KEY with your PRIMARY KEY. Use the following code to create the `CosmosClientFactory` class within the *CosmosClientFactory.java* file:
:::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/cosmos/sample/dao/CosmosClientFactory.java":::
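The client-reuse advice above is the standard lazy-singleton pattern. It can be sketched with a placeholder type standing in for `CosmosClient`, so the snippet is self-contained (the class and method names here are illustrative, not the tutorial's actual `CosmosClientFactory`):

```java
public class SharedClientFactory {
    // Placeholder for an expensive-to-construct client such as CosmosClient.
    public static class Client { }

    private static Client client;

    // Return the one shared instance, creating it on first use.
    // synchronized keeps initialization safe across servlet threads.
    public static synchronized Client getClient() {
        if (client == null) {
            client = new Client();
        }
        return client;
    }

    public static void main(String[] args) {
        // Every caller sees the same instance, avoiding per-request setup cost.
        System.out.println(getClient() == getClient()); // prints true
    }
}
```

In the real factory, the placeholder construction is replaced by building the client from the HOST and MASTER_KEY values, and every DAO call goes through the shared instance.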
Create a Data Access Object (DAO) to abstract persisting the ToDo items to Azure
### Add a controller
-Add the *TodoItemController* controller to your application. In this project, you are using [Project Lombok](https://projectlombok.org/) to generate the constructor, getters, setters, and a builder. Alternatively, you can write this code manually or have the IDE generate it.:
+Add the *TodoItemController* controller to your application. In this project, you are using [Project Lombok](https://projectlombok.org/) to generate the constructor, getters, setters, and a builder. Alternatively, you can write this code manually or have the IDE generate it:
:::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/cosmos/sample/controller/TodoItemController.java":::
Next, create a servlet to route HTTP requests to the controller. Create the *Api
:::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/cosmos/sample/ApiServlet.java":::
-## <a id="Wire"></a>Wire the rest of the of Java app together
+## <a id="Wire"></a>Wire the rest of the Java app together
Now that we've finished the fun bits, all that's left is to build a quick user interface and wire it up to your DAO.
Now that we've finished the fun bits, all that's left is to build a quick user i
Azure Web Sites makes deploying Java applications as simple as exporting your application as a WAR file and either uploading it via source control (e.g. Git) or FTP.
-1. To export your application as a WAR file, right-click on your project in **Project Explorer**, click **Export**, and then click **WAR File**.
+1. To export your application as a WAR file, right-click on your project in **Project Explorer**, select **Export**, and then select **WAR File**.
1. In the **WAR Export** window, do the following: * In the Web project box, enter azure-cosmos-java-sample. * In the Destination box, choose a destination to save the WAR file.
- * Click **Finish**.
+ * Select **Finish**.
1. Now that you have a WAR file in hand, you can simply upload it to your Azure Web Site's **webapps** directory. For instructions on uploading the file, see [Add a Java application to Azure App Service Web Apps](../../app-service/quickstart-java.md). After the WAR file is uploaded to the webapps directory, the runtime environment will detect that you've added it and will automatically load it.
All the samples in this tutorial are included in the [todo](https://github.com/A
1. If Eclipse is open, close it and restart it to load Lombok.
-1. In Eclipse, on the **File** menu, click **Import**.
+1. In Eclipse, on the **File** menu, select **Import**.
-1. In the **Import** window, click **Git**, click **Projects from Git**, and then click **Next**.
+1. In the **Import** window, select **Git**, select **Projects from Git**, and then select **Next**.
-1. On the **Select Repository Source** screen, click **Clone URI**.
+1. On the **Select Repository Source** screen, select **Clone URI**.
-1. On the **Source Git Repository** screen, in the **URI** box, enter https://github.com/Azure-Samples/azure-cosmos-java-sql-api-todo-app, and then click **Next**.
+1. On the **Source Git Repository** screen, in the **URI** box, enter https://github.com/Azure-Samples/azure-cosmos-java-sql-api-todo-app, and then select **Next**.
-1. On the **Branch Selection** screen, ensure that **main** is selected, and then click **Next**.
+1. On the **Branch Selection** screen, ensure that **main** is selected, and then select **Next**.
-1. On the **Local Destination** screen, click **Browse** to select a folder where the repository can be copied, and then click **Next**.
+1. On the **Local Destination** screen, select **Browse** to select a folder where the repository can be copied, and then select **Next**.
-1. On the **Select a wizard to use for importing projects** screen, ensure that **Import existing projects** is selected, and then click **Next**.
+1. On the **Select a wizard to use for importing projects** screen, ensure that **Import existing projects** is selected, and then select **Next**.
-1. On the **Import Projects** screen, unselect the **DocumentDB** project, and then click **Finish**. The DocumentDB project contains the Azure Cosmos DB Java SDK, which we will add as a dependency instead.
+1. On the **Import Projects** screen, unselect the **DocumentDB** project, and then select **Finish**. The DocumentDB project contains the Azure Cosmos DB Java SDK, which we will add as a dependency instead.
1. In **Project Explorer**, navigate to azure-cosmos-java-sample\src\com.microsoft.azure.cosmos.sample.dao\DocumentClientFactory.java and replace the HOST and MASTER_KEY values with the URI and PRIMARY KEY for your Azure Cosmos DB account, and then save the file. For more information, see [Step 1. Create an Azure Cosmos database account](#CreateDB).
-1. In **Project Explorer**, right-click the **azure-cosmos-java-sample**, click **Build Path**, and then click **Configure Build Path**.
+1. In **Project Explorer**, right-click the **azure-cosmos-java-sample** project, select **Build Path**, and then select **Configure Build Path**.
-1. On the **Java Build Path** screen, in the right pane, select the **Libraries** tab, and then click **Add External JARs**. Navigate to the location of the lombok.jar file, and click **Open**, and then click **OK**.
+1. On the **Java Build Path** screen, in the right pane, select the **Libraries** tab, and then select **Add External JARs**. Navigate to the location of the lombok.jar file, select **Open**, and then select **OK**.
-1. Use step 12 to open the **Properties** window again, and then in the left pane click **Targeted Runtimes**.
+1. Use step 12 to open the **Properties** window again, and then in the left pane select **Targeted Runtimes**.
-1. On the **Targeted Runtimes** screen, click **New**, select **Apache Tomcat v7.0**, and then click **OK**.
+1. On the **Targeted Runtimes** screen, select **New**, select **Apache Tomcat v7.0**, and then select **OK**.
-1. Use step 12 to open the **Properties** window again, and then in the left pane click **Project Facets**.
+1. Use step 12 to open the **Properties** window again, and then in the left pane select **Project Facets**.
-1. On the **Project Facets** screen, select **Dynamic Web Module** and **Java**, and then click **OK**.
+1. On the **Project Facets** screen, select **Dynamic Web Module** and **Java**, and then select **OK**.
-1. On the **Servers** tab at the bottom of the screen, right-click **Tomcat v7.0 Server at localhost** and then click **Add and Remove**.
+1. On the **Servers** tab at the bottom of the screen, right-click **Tomcat v7.0 Server at localhost** and then select **Add and Remove**.
-1. On the **Add and Remove** window, move **azure-cosmos-java-sample** to the **Configured** box, and then click **Finish**.
+1. On the **Add and Remove** window, move **azure-cosmos-java-sample** to the **Configured** box, and then select **Finish**.
-1. In the **Servers** tab, right-click **Tomcat v7.0 Server at localhost**, and then click **Restart**.
+1. In the **Servers** tab, right-click **Tomcat v7.0 Server at localhost**, and then select **Restart**.
1. In a browser, navigate to `http://localhost:8080/azure-cosmos-java-sample/` and start adding to your task list. Note that if you changed your default port values, change 8080 to the value you selected.
All the samples in this tutorial are included in the [todo](https://github.com/A
## Next steps

Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)

> [!div class="nextstepaction"]
cosmos-db Sql Api Sdk Async Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-async-java.md
The SQL API Async Java SDK differs from the SQL API Java SDK by providing asynch
| | Links |
|---|---|
+| **Release Notes** | [Release notes for Async Java SDK](https://github.com/Azure/azure-cosmosdb-jav) |
| **SDK Download** | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb) |
-|**API documentation** |[Java API reference documentation](/java/api/com.microsoft.azure.cosmosdb.rx.asyncdocumentclient) |
-|**Contribute to SDK** | [GitHub](https://github.com/Azure/azure-cosmosdb-java) |
-|**Get started** | [Get started with the Async Java SDK](https://github.com/Azure-Samples/azure-cosmos-db-sql-api-async-java-getting-started) |
-|**Code sample** | [GitHub](https://github.com/Azure/azure-cosmosdb-java#usage-code-sample)|
+| **API documentation** |[Java API reference documentation](/java/api/com.microsoft.azure.cosmosdb.rx.asyncdocumentclient) |
+| **Contribute to SDK** | [GitHub](https://github.com/Azure/azure-cosmosdb-java) |
+| **Get started** | [Get started with the Async Java SDK](https://github.com/Azure-Samples/azure-cosmos-db-sql-api-async-java-getting-started) |
+| **Code sample** | [GitHub](https://github.com/Azure/azure-cosmosdb-java#usage-code-sample)|
| **Performance tips**| [GitHub readme](https://github.com/Azure/azure-cosmosdb-java#guide-for-prod)|
| **Minimum supported runtime**|[JDK 8](/java/azure/jdk/) |
cosmos-db Sql Api Sdk Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-dotnet-core.md
| | Links |
|---|---|
+|**Release notes**| [Release notes](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/changelog.md)|
|**SDK download**| [NuGet](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.Core/)|
|**API documentation**|[.NET API reference documentation](/dotnet/api/overview/azure/cosmosdb)|
|**Samples**|[.NET code samples](sql-api-dotnet-samples.md)|
cosmos-db Sql Api Sdk Dotnet Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-dotnet-standard.md
| | Links |
|---|---|
+|**Release notes**|[Release notes](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/changelog.md)|
|**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/)|
|**API documentation**|[.NET API reference documentation](/dotnet/api/overview/azure/cosmosdb)|
|**Samples**|[.NET code samples](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage)|
|**Get started**|[Get started with the Azure Cosmos DB .NET SDK](sql-api-get-started.md)|
+|**Best Practices**|[Best Practices for Azure Cosmos DB .NET SDK](best-practice-dotnet.md)|
|**Web app tutorial**|[Web application development with Azure Cosmos DB](sql-api-dotnet-application.md)|
|**Entity Framework Core tutorial**|[Entity Framework Core with Azure Cosmos DB Provider](/ef/core/providers/cosmos/#get-started)|
|**Current supported framework**|[Microsoft .NET Standard 2.0](/dotnet/standard/net-standard)|
cosmos-db Sql Api Sdk Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-dotnet.md
| | Links |
|---|---|
+|**Release notes**|[Release notes](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/changelog.md)|
|**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB/)|
|**API documentation**|[.NET API reference documentation](/dotnet/api/overview/azure/cosmosdb)|
|**Samples**|[.NET code samples](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples)|
cosmos-db Sql Api Sdk Java Spring V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-spring-v3.md
You can use Spring Data Azure Cosmos DB in your [Azure Spring Cloud](https://azu
| Content | Link |
|---|---|
-|**SDK download**| [Maven](https://mvnrepository.com/artifact/com.azure/azure-spring-data-cosmos) |
-|**API documentation** | [Java API reference documentation](/java/api/overview/azure/spring-data-cosmos-readme?view=azure-java-stable&preserve-view=true) |
-|**Contribute to SDK** | [Azure SDK for Java Central Repo on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos/azure-spring-data-cosmos) |
-|**Get started** | [Quickstart: Build a Spring Data Azure Cosmos DB app to manage Azure Cosmos DB SQL API data](./create-sql-api-spring-data.md) <br> [GitHub repo with quickstart code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-getting-started) |
-|**Basic code samples** | [Azure Cosmos DB: Spring Data Azure Cosmos DB examples for the SQL API](sql-api-spring-data-sdk-samples.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples)|
+| **Release notes** | [Release notes for Spring Data Cosmos SDK v3](https://github.com/Azure/azure-sdk-for-jav) |
+| **SDK download** | [Maven](https://mvnrepository.com/artifact/com.azure/azure-spring-data-cosmos) |
+| **API documentation** | [Java API reference documentation](/java/api/overview/azure/spring-data-cosmos-readme?view=azure-java-stable&preserve-view=true) |
+| **Contribute to SDK** | [Azure SDK for Java Central Repo on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos/azure-spring-data-cosmos) |
+| **Get started** | [Quickstart: Build a Spring Data Azure Cosmos DB app to manage Azure Cosmos DB SQL API data](./create-sql-api-spring-data.md) <br> [GitHub repo with quickstart code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-getting-started) |
+| **Basic code samples** | [Azure Cosmos DB: Spring Data Azure Cosmos DB examples for the SQL API](sql-api-spring-data-sdk-samples.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples)|
| **Performance tips**| [Performance tips for Java SDK v4 (applicable to Spring Data)](performance-tips-java-sdk-v4-sql.md)|
| **Troubleshooting** | [Troubleshoot Java SDK v4 (applicable to Spring Data)](troubleshoot-java-sdk-v4-sql.md) |
| **Azure Cosmos DB workshops and labs** |[Cosmos DB workshops home page](https://aka.ms/cosmosworkshop)
cosmos-db Sql Api Sdk Java V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-v4.md
The Azure Cosmos DB Java SDK v4 for Core (SQL) combines an Async API and a Sync
| Content | Link |
|---|---|
-|**SDK download**| [Maven](https://mvnrepository.com/artifact/com.azure/azure-cosmos) |
-|**API documentation** | [Java API reference documentation](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-cosmos/latest/https://docsupdatetracker.net/index.html) |
-|**Contribute to SDK** | [Azure SDK for Java Central Repo on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos/azure-cosmos) |
-|**Get started** | [Quickstart: Build a Java app to manage Azure Cosmos DB SQL API data](./create-sql-api-java.md) <br> [GitHub repo with quickstart code](https://github.com/Azure-Samples/azure-cosmos-java-getting-started) |
-|**Basic code samples** | [Azure Cosmos DB: Java examples for the SQL API](sql-api-java-sdk-samples.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples)|
-|**Console app with Change Feed**| [Change feed - Java SDK v4 sample](create-sql-api-java-changefeed.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-cosmos-java-sql-app-example)|
-|**Web app sample**| [Build a web app with Java SDK v4](sql-api-java-application.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-todo-app)|
+| **Release Notes** | [Release notes for Java SDK v4](https://github.com/Azure/azure-sdk-for-jav) |
+| **SDK download** | [Maven](https://mvnrepository.com/artifact/com.azure/azure-cosmos) |
+| **API documentation** | [Java API reference documentation](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-cosmos/latest/https://docsupdatetracker.net/index.html) |
+| **Contribute to SDK** | [Azure SDK for Java Central Repo on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos/azure-cosmos) |
+| **Get started** | [Quickstart: Build a Java app to manage Azure Cosmos DB SQL API data](./create-sql-api-java.md) <br> [GitHub repo with quickstart code](https://github.com/Azure-Samples/azure-cosmos-java-getting-started) |
+| **Best Practices** | [Best Practices for Java SDK v4](best-practice-java.md) |
+| **Basic code samples** | [Azure Cosmos DB: Java examples for the SQL API](sql-api-java-sdk-samples.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples)|
+| **Console app with Change Feed**| [Change feed - Java SDK v4 sample](create-sql-api-java-changefeed.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-cosmos-java-sql-app-example)|
+| **Web app sample**| [Build a web app with Java SDK v4](sql-api-java-application.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-todo-app)|
| **Performance tips**| [Performance tips for Java SDK v4](performance-tips-java-sdk-v4-sql.md)|
| **Troubleshooting** | [Troubleshoot Java SDK v4](troubleshoot-java-sdk-v4-sql.md) |
| **Migrate to v4 from an older SDK** | [Migrate to Java V4 SDK](migrate-java-v4-sdk.md) |
cosmos-db Tutorial Sql Api Dotnet Bulk Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/tutorial-sql-api-dotnet-bulk-import.md
Previously updated : 08/26/2021- Last updated : 03/25/2022+ ms.devlang: csharp-+ # Bulk import data to Azure Cosmos DB SQL API account by using the .NET SDK [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] This tutorial shows how to build a .NET console application that optimizes provisioned throughput (RU/s) required to import data to Azure Cosmos DB.
-In this article, you will read data from a sample data source and import it into an Azure Cosmos container.
+
+>
+> [!VIDEO https://aka.ms/docs.learn-live-dotnet-bulk]
+
+In this article, you'll read data from a sample data source and import it into an Azure Cosmos container.
This tutorial uses [Version 3.0+](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) of the Azure Cosmos DB .NET SDK, which can be targeted to .NET Framework or .NET Core. This tutorial covers:
Before following the instructions in this article, make sure that you have the f
## Step 2: Set up your .NET project
-Open the Windows command prompt or a Terminal window from your local computer. You will run all the commands in the next sections from the command prompt or terminal. Run the following dotnet new command to create a new app with the name *bulk-import-demo*.
+Open the Windows command prompt or a Terminal window from your local computer. You'll run all the commands in the next sections from the command prompt or terminal. Run the following dotnet new command to create a new app with the name *bulk-import-demo*.
```bash
dotnet new console -n bulk-import-demo
```
While still in the application directory, install the Azure Cosmos DB client lib
The sample application needs to authenticate to your Azure Cosmos account. To authenticate, you should pass the Azure Cosmos account credentials to the application. Get your Azure Cosmos account credentials by following these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Navigate to your Azure Cosmos account.
-1. Open the **Keys** pane and copy the **URI** and **PRIMARY KEY** of your account.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Navigate to your Azure Cosmos account.
+1. Open the **Keys** pane and copy the **URI** and **PRIMARY KEY** of your account.
-If you are using the Azure Cosmos DB Emulator, obtain the [emulator credentials from this article](../local-emulator.md#authenticate-requests).
+If you're using the Azure Cosmos DB Emulator, obtain the [emulator credentials from this article](../local-emulator.md#authenticate-requests).
## Step 5: Initialize the CosmosClient object with bulk execution support
-Open the generated `Program.cs` file in a code editor. You will create a new instance of CosmosClient with bulk execution enabled and use it to do operations against Azure Cosmos DB.
+Open the generated `Program.cs` file in a code editor. You'll create a new instance of CosmosClient with bulk execution enabled and use it to do operations against Azure Cosmos DB.
-Let's start by overwriting the default `Main` method and defining the global variables. These global variables will include the endpoint and authorization keys, the name of the database, container that you will create, and the number of items that you will be inserting in bulk. Make sure to replace the endpointURL and authorization key values according to your environment.
+Let's start by overwriting the default `Main` method and defining the global variables. These global variables will include the endpoint and authorization keys, the name of the database, container that you'll create, and the number of items that you'll be inserting in bulk. Make sure to replace the endpointURL and authorization key values according to your environment.
```csharp
You can then create a container to store all our items. Define `/pk` as the par
## Step 6: Populate a list of concurrent tasks

To take advantage of the bulk execution support, create a list of asynchronous tasks based on the source of data and the operations you want to perform, and use `Task.WhenAll` to execute them concurrently.
-LetΓÇÖs start by using "Bogus" data to generate a list of items from our data model. In a real-world application, the items would come from your desired data source.
+Let's start by using "Bogus" data to generate a list of items from our data model. In a real-world application, the items would come from your desired data source.
First, add the Bogus package to the solution by using the dotnet add package command.
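The fan-out/join shape described above — build one asynchronous task per generated item, then await them all — can be sketched with the Java standard library's `CompletableFuture`. This is only a conceptual stand-in: the tutorial itself uses C# and `Task.WhenAll`, and the `createItemAsync` stub below is hypothetical, taking the place of the real container call.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

public class BulkPattern {
    static final AtomicInteger inserted = new AtomicInteger();

    // Hypothetical stand-in for the container's create-item call:
    // each task pretends to insert one generated item.
    static CompletableFuture<Void> createItemAsync(int id) {
        return CompletableFuture.runAsync(inserted::incrementAndGet);
    }

    public static void main(String[] args) {
        int amountToInsert = 300; // plays the role of the tutorial's item-count variable
        List<CompletableFuture<Void>> tasks = new ArrayList<>();
        for (int i = 0; i < amountToInsert; i++) {
            tasks.add(createItemAsync(i)); // one concurrent task per item
        }
        // Equivalent of awaiting Task.WhenAll: block until every task completes.
        CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])).join();
        System.out.println("Inserted: " + inserted.get()); // Inserted: 300
    }
}
```

The key point is that the client batches whatever concurrent operations it receives; the caller's job is only to hand it many tasks at once rather than awaiting them one by one.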
You can now proceed to the next tutorial:
>[Query Azure Cosmos DB by using the SQL API](tutorial-query-sql-api.md)

Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Throughput Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/throughput-serverless.md
Previously updated : 05/25/2021 Last updated : 03/24/2022++ # How to choose between provisioned throughput and serverless
Last updated 05/25/2021
Azure Cosmos DB is available in two different capacity modes: [provisioned throughput](set-throughput.md) and [serverless](serverless.md). You can perform the exact same database operations in both modes, but the way you get billed for these operations is radically different. The following video explains the core differences between these modes and how they fit different types of workloads:
-> [!VIDEO https://www.youtube.com/embed/CgYQo6uHyt0]
+>
+> [!VIDEO https://aka.ms/docs.throughput-offers]
## Detailed comparison
In some situations, it may be unclear whether provisioned throughput or serverle
- In provisioned throughput mode, you would provision a container with 500 RU/s for a monthly cost of: $0.008 * 5 * 730 = **$29.20**
- In serverless mode, you would pay for the consumed RUs: $0.25 * 250 = **$62.50**
-(these examples are not accounting for the storage cost, which is the same between the two modes)
+(These examples are not accounting for the storage cost, which is the same between the two modes.)
> [!NOTE]
> The costs shown in the previous example are for demonstration purposes only. See the [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for the latest pricing information.
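As a check on the arithmetic in the example above, here is a minimal Java sketch that reproduces both monthly figures. The rates are taken from the example itself, not from current pricing; see the pricing page for up-to-date rates.

```java
public class CostComparison {
    public static void main(String[] args) {
        // Provisioned throughput: $0.008 per 100 RU/s per hour (the example's rate),
        // 500 RU/s = 5 blocks of 100 RU/s, for 730 hours in a month.
        double provisioned = 0.008 * 5 * 730;
        // Serverless: $0.25 per 1 million consumed RUs (the example's rate),
        // for 250 million RUs in a month.
        double serverless = 0.25 * 250;
        System.out.println(Math.round(provisioned * 100) / 100.0); // 29.2
        System.out.println(Math.round(serverless * 100) / 100.0);  // 62.5
    }
}
```

The crossover depends on utilization: steady traffic amortizes the fixed hourly provisioned cost, while spiky or infrequent traffic favors paying only for consumed RUs.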
cosmos-db Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/use-cases.md
# Common Azure Cosmos DB use cases

[!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-📺 <B><a href="https://aka.ms/cosmos-db-video-top-cosmos-db-use-cases" target="_blank">Video: Top Azure Cosmos DB use cases</a></b>
- This article provides an overview of several common use cases for Azure Cosmos DB. The recommendations in this article serve as a starting point as you develop your application with Cosmos DB.
+>
+> [!VIDEO https://aka.ms/docs.modeling-data]
+ After reading this article, you'll be able to answer the following questions: * What are the common use cases for Azure Cosmos DB?
IoT use cases commonly share some patterns in how they ingest, process, and stor
Bursts of data can be ingested by Azure Event Hubs as it offers high-throughput data ingestion with low latency. Ingested data that needs to be processed for real-time insight can be funneled to Azure Stream Analytics for real-time analytics. Data can be loaded into Azure Cosmos DB for ad hoc querying. Once the data is loaded into Azure Cosmos DB, it is ready to be queried. In addition, new data and changes to existing data can be read on the change feed. The change feed is a persistent, append-only log that stores changes to Cosmos containers in sequential order. Then all data, or just changes to data, in Azure Cosmos DB can be used as reference data as part of real-time analytics. In addition, data can be further refined and processed by connecting Azure Cosmos DB data to HDInsight for Pig, Hive, or Map/Reduce jobs. Refined data is then loaded back to Azure Cosmos DB for reporting.
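The change feed described above behaves like an append-only log that each consumer reads from a remembered position, so a first read returns everything so far and later reads return only the changes made since. The following stand-alone Java sketch illustrates only that access pattern; the class and method names are hypothetical, not the real Cosmos DB SDK API.

```java
import java.util.ArrayList;
import java.util.List;

public class ChangeFeedSketch {
    // Append-only, sequential record of changes, like the change feed described above.
    private final List<String> log = new ArrayList<>();

    void recordChange(String change) { log.add(change); }

    // A consumer reads everything after its continuation offset, then remembers
    // the new offset so the next read returns only changes made since.
    List<String> readFrom(int continuation) {
        return new ArrayList<>(log.subList(continuation, log.size()));
    }

    public static void main(String[] args) {
        ChangeFeedSketch feed = new ChangeFeedSketch();
        feed.recordChange("insert device-reading-1");
        feed.recordChange("update device-reading-1");
        List<String> firstRead = feed.readFrom(0); // all changes so far
        feed.recordChange("insert device-reading-2");
        List<String> delta = feed.readFrom(firstRead.size()); // only the new change
        System.out.println(delta); // [insert device-reading-2]
    }
}
```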
-For a sample IoT solution using Azure Cosmos DB, EventHubs and Storm, see the [hdinsight-storm-examples repository on GitHub](https://github.com/hdinsight/hdinsight-storm-examples/).
+For a sample IoT solution using Azure Cosmos DB, Event Hubs and Apache Storm, see the [hdinsight-storm-examples repository on GitHub](https://github.com/hdinsight/hdinsight-storm-examples/).
For more information on Azure offerings for IoT, see [Create the Internet of Your Things](https://www.microsoft.com/en-us/internet-of-things).
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-blob-storage.md
In this case, all files that were sourced under `/data/sales` are moved to `/bac
**Filter by last modified:** You can filter which files you process by specifying a date range of when they were last modified. All datetimes are in UTC.
-**Enable change data capture (Preview):** If true, you will get new or changed files only from the last run. Initial load of full snapshot data will always be gotten in the first run, followed by capturing new or changed files only in next runs. For more details, see [Change data capture (preview)](#change-data-capture-preview).
+**Enable change data capture:** If true, you will get only new or changed files from the last run. A full snapshot of the data is always loaded in the first run, followed by capturing only new or changed files in subsequent runs. For more details, see [Change data capture](#change-data-capture-preview).
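Conceptually, the change-data-capture option behaves like filtering files against a per-run checkpoint: a first run with an empty checkpoint picks up the full snapshot, and later runs return only files modified since the previous run. The following Java sketch illustrates only that semantics; the helper name and the use of modification times are assumptions for illustration, not how the service tracks changes internally.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CdcFilter {
    // Hypothetical helper: keep only files whose modification time is after
    // the checkpoint recorded at the end of the previous run.
    static List<String> changedSince(Map<String, Long> fileModifiedTimes, long checkpoint) {
        return fileModifiedTimes.entrySet().stream()
                .filter(e -> e.getValue() > checkpoint)
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Long> files = Map.of("a.csv", 100L, "b.csv", 250L, "c.csv", 300L);
        // First run, checkpoint 0: the full snapshot is picked up.
        System.out.println(changedSince(files, 0));   // [a.csv, b.csv, c.csv]
        // A later run with checkpoint 200: only files changed since then.
        System.out.println(changedSince(files, 200)); // [b.csv, c.csv]
    }
}
```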
### Sink properties
data-factory Connector Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-storage.md
Previously updated : 03/24/2022 Last updated : 03/29/2022 # Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics
These properties are supported for the linked service:
| url | Endpoint for Data Lake Storage Gen2 with the pattern of `https://<accountname>.dfs.core.windows.net`. | Yes |
| servicePrincipalId | Specify the application's client ID. | Yes |
| servicePrincipalCredentialType | The credential type to use for service principal authentication. Allowed values are **ServicePrincipalKey** and **ServicePrincipalCert**. | Yes |
-| servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault. | Yes |
+| servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault. | Yes |
| servicePrincipalKey | Specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> This property is still supported as-is for `servicePrincipalId` + `servicePrincipalKey`. As ADF adds new service principal certificate authentication, the new model for service principal authentication is `servicePrincipalId` + `servicePrincipalCredentialType` + `servicePrincipalCredential`. | No |
| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes |
| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure Active Directory application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory or Synapse pipeline's cloud environment is used. | No |
In this case, all files that were sourced under /data/sales are moved to /backup
**Enable change data capture:** If true, you will get only new or changed files from the last run. A full snapshot of the data is always loaded in the first run, followed by capturing only new or changed files in subsequent runs. For more details, see [Change data capture](#change-data-capture).

### Sink properties
data-factory Connector Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-store.md
The following properties are supported:
| servicePrincipalId | Specify the application's client ID. | Yes |
| servicePrincipalKey | Specify the application's key. Mark this field as a `SecureString` to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
| tenant | Specify the tenant information, such as domain name or tenant ID, under which your application resides. You can retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes |
-| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure Active Directory application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the the service's cloud environment is used. | No |
+| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure Active Directory application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the service's cloud environment is used. | No |
**Example:**
In this case, all files that were sourced under /data/sales are moved to /backup
**Filter by last modified:** You can filter which files you process by specifying a date range of when they were last modified. All date-times are in UTC.
-**Enable change data capture (Preview):** If true, you will get new or changed files only from the last run. Initial load of full snapshot data will always be gotten in the first run, followed by capturing new or changed files only in next runs. For more details, see [Change data capture (preview)](#change-data-capture-preview).
+**Enable change data capture:** If true, you will get only new or changed files from the last run. A full snapshot of the data is always loaded in the first run, followed by capturing only new or changed files in subsequent runs. For more details, see [Change data capture](#change-data-capture-preview).
### Sink properties
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-overview.md
Previously updated : 03/02/2022 Last updated : 03/25/2022
data-factory Connector Sftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sftp.md
Title: Copy data from and to SFTP server
+ Title: Copy and transform data in SFTP server using Azure Data Factory or Azure Synapse Analytics
-description: Learn how to copy data from and to SFTP server by using Azure Data Factory and Azure Synapse Analytics pipelines.
+description: Learn how to copy data from and to SFTP server, and transform data in SFTP server using Azure Data Factory or Azure Synapse Analytics.
Previously updated : 03/22/2022 Last updated : 03/25/2022
-# Copy data from and to the SFTP server using Azure Data Factory or Azure Synapse Analytics
+# Copy and transform data in SFTP server using Azure Data Factory or Azure Synapse Analytics
> [!div class="op_single_selector" title1="Select the version of the Data Factory service that you are using:"]
> * [Version 1](v1/data-factory-sftp-connector.md)
> * [Current version](connector-sftp.md)

[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to copy data from and to the secure FTP (SFTP) server. To learn more read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+This article outlines how to use Copy Activity to copy data from and to the secure FTP (SFTP) server, and use Data Flow to transform data in SFTP server. To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
## Supported capabilities
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-sink.md
Previously updated : 01/10/2022 Last updated : 03/25/2022 # Sink transformation in mapping data flow
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| [Dataverse](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ | | [Dynamics 365](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ | | [Dynamics CRM](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ |
+| [SFTP](connector-sftp.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties) <br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties) <br>[JSON](format-json.md#mapping-data-flow-properties) <br/>[ORC](format-orc.md#mapping-data-flow-properties)<br>[Parquet](format-parquet.md#mapping-data-flow-properties) | ✓/✓ <br>✓/✓ <br>✓/✓ <br>✓/✓<br>✓/✓|
| [Snowflake](connector-snowflake.md) | | ✓/✓ | | [SQL Server](connector-sql-server.md) | | ✓/✓ |
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-source.md
Previously updated : 03/02/2022 Last updated : 03/25/2022 # Source transformation in mapping data flow
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| [Dynamics CRM](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ | | [Hive](connector-hive.md#mapping-data-flow-properties) | | -/✓ | | [Quickbase (Preview)](connector-quickbase.md#mapping-data-flow-properties) | | -/✓ |
+| [SFTP](connector-sftp.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties) <br>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | ✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br/>✓/✓<br>✓/✓<br/>✓/✓ |
| [Smartsheet (Preview)](connector-smartsheet.md#mapping-data-flow-properties) | | -/✓ | | [Snowflake](connector-snowflake.md) | | ✓/✓ | | [SQL Server](connector-sql-server.md) | | ✓/✓ |
data-factory Format Avro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-avro.md
Previously updated : 10/18/2021 Last updated : 03/25/2022
Supported **Avro write settings** under `formatSettings`:
## Mapping data flow properties
-In mapping data flows, you can read and write to avro format in the following data stores: [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties) and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties), and you can read avro format in [Amazon S3](connector-amazon-simple-storage-service.md#mapping-data-flow-properties).
+In mapping data flows, you can read and write to avro format in the following data stores: [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties) and [SFTP](connector-sftp.md#mapping-data-flow-properties), and you can read avro format in [Amazon S3](connector-amazon-simple-storage-service.md#mapping-data-flow-properties).
### Source properties
data-factory Format Delimited Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-delimited-text.md
Previously updated : 03/16/2022 Last updated : 03/25/2022
Supported **delimited text write settings** under `formatSettings`:
## Mapping data flow properties
-In mapping data flows, you can read and write to delimited text format in the following data stores: [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties) and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties), and you can read delimited text format in [Amazon S3](connector-amazon-simple-storage-service.md#mapping-data-flow-properties).
+In mapping data flows, you can read and write to delimited text format in the following data stores: [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties) and [SFTP](connector-sftp.md#mapping-data-flow-properties), and you can read delimited text format in [Amazon S3](connector-amazon-simple-storage-service.md#mapping-data-flow-properties).
### Source properties
data-factory Format Excel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-excel.md
Previously updated : 10/18/2021 Last updated : 03/25/2022
The following properties are supported in the copy activity ***\*source\**** sec
## Mapping data flow properties
-In mapping data flows, you can read Excel format in the following data stores: [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties), and [Amazon S3](connector-amazon-simple-storage-service.md#mapping-data-flow-properties). You can point to Excel files either using Excel dataset or using an [inline dataset](data-flow-source.md#inline-datasets).
+In mapping data flows, you can read Excel format in the following data stores: [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties), [Amazon S3](connector-amazon-simple-storage-service.md#mapping-data-flow-properties) and [SFTP](connector-sftp.md#mapping-data-flow-properties). You can point to Excel files either using Excel dataset or using an [inline dataset](data-flow-source.md#inline-datasets).
### Source properties
source(allowSchemaDrift: true,
firstRowAsHeader: true) ~> ExcelSourceInlineDataset ```
+## Handling very large Excel files
+
+The Excel connector does not support streaming read for the Copy activity; it must load the entire file into memory before data can be read. To import schema, preview data, or refresh an Excel dataset, the data must be returned before the HTTP request timeout (100 seconds). For large Excel files, these operations may not finish within that window, causing a timeout error. If you want to move large Excel files (>100 MB) into another data store, use one of the following options to work around this limitation:
+
+- Use the self-hosted integration runtime (SHIR), then use the Copy activity to move the large Excel file into another data store with the SHIR.
+- Split the large Excel file into several smaller ones, then use the Copy activity to move the folder containing the files.
+- Use a Data Flow activity to move the large Excel file into another data store. Data Flow supports streaming read for Excel and can move/transfer large files quickly.
+- Manually convert the large Excel file to CSV format, then use a Copy activity to move the file.
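As a local illustration of the split workaround (after converting the workbook to CSV), here is a minimal Python sketch that splits CSV text into smaller chunks, each repeating the header row. The function name and chunk size are illustrative only, not part of the service:

```python
import csv
import io

def split_csv(text, rows_per_chunk):
    """Split CSV text into smaller CSV chunks, repeating the header in each."""
    reader = csv.reader(io.StringIO(text))
    header = next(reader)          # keep the header row separately
    rows = list(reader)
    chunks = []
    for start in range(0, len(rows), rows_per_chunk):
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(header)    # every chunk stays a valid standalone CSV
        writer.writerows(rows[start:start + rows_per_chunk])
        chunks.append(buf.getvalue())
    return chunks
```

Each resulting chunk can then be written to its own file in a folder, and a single Copy activity pointed at the folder moves all of them.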
+ ## Next steps - [Copy activity overview](copy-activity-overview.md)
data-factory Format Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-json.md
Previously updated : 11/26/2021 Last updated : 03/25/2022
When copying data from JSON files, copy activity can automatically detect and pa
## Mapping data flow properties
-In [mapping data flows](concepts-data-flow-overview.md), you can read and write to JSON format in the following data stores: [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties) and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties), and you can read JSON format in [Amazon S3](connector-amazon-simple-storage-service.md#mapping-data-flow-properties).
+In [mapping data flows](concepts-data-flow-overview.md), you can read and write to JSON format in the following data stores: [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties) and [SFTP](connector-sftp.md#mapping-data-flow-properties), and you can read JSON format in [Amazon S3](connector-amazon-simple-storage-service.md#mapping-data-flow-properties).
### Source properties
data-factory Format Orc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-orc.md
Previously updated : 10/18/2021 Last updated : 03/25/2022
Supported **ORC write settings** under `formatSettings`:
## Mapping data flow properties
-In mapping data flows, you can read and write to ORC format in the following data stores: [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties) and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties), and you can read ORC format in [Amazon S3](connector-amazon-simple-storage-service.md#mapping-data-flow-properties).
+In mapping data flows, you can read and write to ORC format in the following data stores: [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties) and [SFTP](connector-sftp.md#mapping-data-flow-properties), and you can read ORC format in [Amazon S3](connector-amazon-simple-storage-service.md#mapping-data-flow-properties).
You can point to ORC files either using ORC dataset or using an [inline dataset](data-flow-source.md#inline-datasets).
data-factory Format Parquet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-parquet.md
Previously updated : 10/18/2021 Last updated : 03/25/2022
Supported **Parquet write settings** under `formatSettings`:
## Mapping data flow properties
-In mapping data flows, you can read and write to parquet format in the following data stores: [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties) and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties), and you can read parquet format in [Amazon S3](connector-amazon-simple-storage-service.md#mapping-data-flow-properties).
+In mapping data flows, you can read and write to parquet format in the following data stores: [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties) and [SFTP](connector-sftp.md#mapping-data-flow-properties), and you can read parquet format in [Amazon S3](connector-amazon-simple-storage-service.md#mapping-data-flow-properties).
### Source properties
data-factory Format Xml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-xml.md
Previously updated : 10/18/2021 Last updated : 03/25/2022
Supported **XML read settings** under `formatSettings`:
## Mapping data flow properties
-In mapping data flows, you can read XML format in the following data stores: [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties), and [Amazon S3](connector-amazon-simple-storage-service.md#mapping-data-flow-properties). You can point to XML files either using XML dataset or using an [inline dataset](data-flow-source.md#inline-datasets).
+In mapping data flows, you can read XML format in the following data stores: [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties), [Amazon S3](connector-amazon-simple-storage-service.md#mapping-data-flow-properties) and [SFTP](connector-sftp.md#mapping-data-flow-properties). You can point to XML files either using XML dataset or using an [inline dataset](data-flow-source.md#inline-datasets).
### Source properties
data-factory How To Create Schedule Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-schedule-trigger.md
You can create a **schedule trigger** to schedule a pipeline to run periodically
:::image type="content" source="./media/how-to-create-schedule-trigger/switch-edit-tab.png" alt-text="Switch to Edit tab"::: # [Azure Synapse](#tab/synapse-analytics)+ :::image type="content" source="./media/how-to-create-schedule-trigger/switch-edit-tab-synapse.png" alt-text="Switch to Edit tab":::
-
-2. Select **Trigger** on the menu, then select **New/Edit**.
+
+2. Select **Trigger** on the menu, then select **New/Edit**.
:::image type="content" source="./media/how-to-create-schedule-trigger/new-trigger-menu.png" alt-text="New trigger menu":::
-1. On the **Add Triggers** page, select **Choose trigger...**, then select **+New**.
+1. On the **Add Triggers** page, select **Choose trigger...**, then select **+New**.
:::image type="content" source="./media/how-to-create-schedule-trigger/add-trigger-new-button.png" alt-text="Add triggers - new trigger":::
-1. On the **New Trigger** page, do the following steps:
+1. On the **New Trigger** page, do the following steps:
1. Confirm that **Schedule** is selected for **Type**. 1. Specify the start datetime of the trigger for **Start Date**. It's set to the current datetime in Coordinated Universal Time (UTC) by default.
- 1. Specify the time zone that the trigger will be created in. The time zone setting will apply to **Start Date**, **End Date**, and **Schedule Execution Times** in Advanced recurrence options. Changing Time Zone setting will not automatically change your start date. Make sure the Start Date is correct in the specified time zone. Please note that Scheduled Execution time of Trigger will be considered post the Start Date (Ensure Start Date is atleast 1minute lesser than the Execution time else it will trigger pipeline in next recurrence).
+ 1. Specify the time zone that the trigger will be created in. The time zone setting applies to **Start Date**, **End Date**, and **Schedule Execution Times** in Advanced recurrence options. Changing the time zone setting will not automatically change your start date, so make sure the start date is correct in the specified time zone. Note that the trigger's scheduled execution times are considered only after the start date (ensure the start date is at least 1 minute earlier than the first execution time, or else the pipeline will first be triggered at the next recurrence).
> [!NOTE]
- > For time zones that observe daylight saving, trigger time will auto-adjust for the twice a year change. To opt out of the daylight saving change, please select a time zone that does not observe daylight saving, for instance UTC
+ > For time zones that observe daylight saving time, the trigger time auto-adjusts for the twice-a-year change if the recurrence is set to _Days_ or above. To opt out of the daylight saving change, select a time zone that does not observe daylight saving time, for instance UTC.
+
+ > [!IMPORTANT]
+ > Daylight saving adjustment happens only for triggers with the recurrence set to _Days_ or above. If the trigger is set to an _Hours_ or _Minutes_ frequency, it continues to fire at regular intervals.
- 1. Specify **Recurrence** for the trigger. Select one of the values from the drop-down list (Every minute, Hourly, Daily, Weekly, and Monthly). Enter the multiplier in the text box. For example, if you want the trigger to run once for every 15 minutes, you select **Every Minute**, and enter **15** in the text box.
+ 1. Specify **Recurrence** for the trigger. Select one of the values from the drop-down list (Every minute, Hourly, Daily, Weekly, and Monthly). Enter the multiplier in the text box. For example, if you want the trigger to run once for every 15 minutes, you select **Every Minute**, and enter **15** in the text box.
1. In the **Recurrence**, if you choose "Day(s), Week(s) or Month(s)" from the drop-down, you can find "Advanced recurrence options". :::image type="content" source="./media/how-to-create-schedule-trigger/advanced.png" alt-text="Advanced recurrence options of Day(s), Week(s) or Month(s)"::: 1. To specify an end date time, select **Specify an End Date**, and specify _Ends On_, then select **OK**. There is a cost associated with each pipeline run. If you are testing, you may want to ensure that the pipeline is triggered only a couple of times. However, ensure that there is enough time for the pipeline to run between the publish time and the end time. The trigger comes into effect only after you publish the solution, not when you save the trigger in the UI.
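The recurrence options described above correspond to the trigger's JSON definition. As a sketch (placeholder trigger and pipeline names; dates and time zone are examples), a schedule trigger that fires every 15 minutes looks roughly like this:

```json
{
    "name": "Every15MinutesTrigger",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Minute",
                "interval": 15,
                "startTime": "2022-04-01T00:00:00",
                "timeZone": "UTC"
            }
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "<pipeline name>",
                    "type": "PipelineReference"
                }
            }
        ]
    }
}
```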
You can create a **schedule trigger** to schedule a pipeline to run periodically
:::image type="content" source="./media/how-to-create-schedule-trigger/trigger-settings-02.png" alt-text="Trigger settings for End Date":::
-1. In the **New Trigger** window, select **Yes** in the **Activated** option, then select **OK**. You can use this checkbox to deactivate the trigger later.
+1. In the **New Trigger** window, select **Yes** in the **Activated** option, then select **OK**. You can use this checkbox to deactivate the trigger later.
:::image type="content" source="./media/how-to-create-schedule-trigger/trigger-settings-next.png" alt-text="Trigger settings - Next button":::
You can create a **schedule trigger** to schedule a pipeline to run periodically
:::image type="content" source="./media/how-to-create-schedule-trigger/publish-2.png" alt-text="Publish button":::
-1. Switch to the **Pipeline runs** tab on the left, then select **Refresh** to refresh the list. You will see the pipeline runs triggered by the scheduled trigger. Notice the values in the **Triggered By** column. If you use the **Trigger Now** option, you will see the manual trigger run in the list.
+1. Switch to the **Pipeline runs** tab on the left, then select **Refresh** to refresh the list. You will see the pipeline runs triggered by the scheduled trigger. Notice the values in the **Triggered By** column. If you use the **Trigger Now** option, you will see the manual trigger run in the list.
# [Azure Data Factory](#tab/data-factory)
You can create a **schedule trigger** to schedule a pipeline to run periodically
# [Azure Synapse](#tab/synapse-analytics) :::image type="content" source="./media/how-to-create-schedule-trigger/monitor-triggered-runs-synapse.png" alt-text="Monitor triggered runs":::
-
+
-9. Switch to the **Trigger Runs** \ **Schedule** view.
+9. Switch to the **Trigger Runs** \ **Schedule** view.
# [Azure Data Factory](#tab/data-factory)
You can create a **schedule trigger** to schedule a pipeline to run periodically
# [Azure Synapse](#tab/synapse-analytics) :::image type="content" source="./media/how-to-create-schedule-trigger/monitor-trigger-runs-synapse.png" alt-text="Monitor trigger runs":::
-
+ ## Azure PowerShell
defender-for-cloud Adaptive Application Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-application-controls.md
Select the recommendation, or open the adaptive application controls page to vie
- It's missing a Log Analytics agent - The Log Analytics agent isn't sending events - It's a Windows machine with a pre-existing [AppLocker](/windows/security/threat-protection/windows-defender-application-control/applocker/applocker-overview) policy enabled by either a GPO or a local security policy
+ - AppLocker is not available (Windows Server Core installations)
> [!TIP] > Defender for Cloud needs at least two weeks of data to define the unique recommendations per group of machines. Machines that have recently been created, or which belong to subscriptions that were only recently protected by Microsoft Defender for servers, will appear under the **No recommendation** tab.
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Title: Reference table for all security alerts in Microsoft Defender for Cloud description: This article lists the security alerts visible in Microsoft Defender for Cloud -- Previously updated : 03/10/2022++ Last updated : 03/30/2022 # Security alerts - a reference guide
Microsoft Defender for Containers provides security alerts on the cluster level
| **Abnormal activity of managed identity associated with Kubernetes (Preview)**<br>(K8S_AbnormalMiAcitivty) | Analysis of Azure Resource Manager operations detected an abnormal behavior of a managed identity used by an AKS addon. The detected activity isn't consistent with the behavior of the associated addon. While this activity can be legitimate, such behavior might indicate that the identity was gained by an attacker, possibly from a compromised container in the Kubernetes cluster. | Lateral Movement | Medium | | **Abnormal Kubernetes service account operation detected**<br>(K8S_ServiceAccountRareOperation) | Kubernetes audit log analysis detected abnormal behavior by a service account in your Kubernetes cluster. The service account was used for an operation which isn't common for this service account. While this activity can be legitimate, such behavior might indicate that the service account is being used for malicious purposes. | Lateral Movement, Credential Access | Medium | | **An uncommon connection attempt detected (Preview)**<br>(K8S.NODE_SuspectConnection) | Analysis of processes running within a container detected an uncommon connection attempt utilizing a socks protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium |
-| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) | Kubernetes audit log analysis detected pod deployment which is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation are in relation to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace to which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium |
+| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected pod deployment which is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation are in relation to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace to which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium |
| **Attempt to stop apt-daily-upgrade.timer service detected (Preview)**<br>(K8S.NODE_TimerServiceDisabled) | Analysis of host/device data detected an attempt to stop apt-daily-upgrade.timer service. Attackers have been observed stopping this service to download malicious files and grant execution privileges for their attacks. This activity can also happen if the service is updated through normal administrative actions. | DefenseEvasion | Informational | | **Behavior similar to common Linux bots detected (Preview)**<br>(K8S.NODE_CommonBot) | Analysis of processes running within a container detected execution of a process normally associated with common Linux botnets. | Execution, Collection, Command And Control | Medium | | **Behavior similar to Fairware ransomware detected (Preview)**<br>(K8S.NODE_FairwareMalware) | Analysis of processes running within a container detected the execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it is normally used on discrete folders. In this case, it is being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Execution | Medium | | **Command within a container running with high privileges (Preview)**<br>(K8S.NODE_PrivilegedExecutionInContainer) | Machine logs indicate that a privileged command was run in a Docker container. A privileged command has extended privileges on the host machine. | PrivilegeEscalation | Low | | **Container running in privileged mode (Preview)**<br>(K8S.NODE_PrivilegedContainerArtifacts) | Machine logs indicate that a privileged Docker container is running. A privileged container has full access to the host's resources. If compromised, an attacker can use the privileged container to gain access to the host machine. | PrivilegeEscalation, Execution | Low |
| **Container with a sensitive volume mount detected**<br>(K8S_SensitiveMount) | Kubernetes audit log analysis detected a new container with a sensitive volume mount. The volume that was detected is a hostPath type which mounts a sensitive file or folder from the node to the container. If the container gets compromised, the attacker can use this mount for gaining access to the node. | Privilege Escalation | Medium |
-| **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) <sup>[1](#footnote1)</sup> | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the cluster's DNS server and poison it. | Lateral Movement | Low |
-| **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) <sup>[2](#footnote2)</sup>| Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate, however attackers can use such webhooks for modifying the requests (in case of MutatingAdmissionWebhook) or inspecting the requests and gain sensitive information (in case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low |
+| **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) <sup>[1](#footnote1)</sup> <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the cluster's DNS server and poison it. | Lateral Movement | Low |
+| **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate, however attackers can use such webhooks for modifying the requests (in case of MutatingAdmissionWebhook) or inspecting the requests and gain sensitive information (in case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low |
| **Detected file download from a known malicious source (Preview)**<br>(K8S.NODE_SuspectDownload) | Analysis of processes running within a container detected download of a file from a source frequently used to distribute malware. | PrivilegeEscalation, Execution, Exfiltration, Command And Control | Medium | | **Detected Persistence Attempt (Preview)**<br>(K8S.NODE_NewSingleUserModeStartupScript) | Analysis of processes running within a container detected installation of a startup script for single-user mode. It is extremely rare that any legitimate process needs to execute in that mode so it may indicate an attacker has added a malicious process to every run-level to guarantee persistence. | Persistence | Medium | | **Detected suspicious file download (Preview)**<br>(K8S.NODE_SuspectDownloadArtifacts) | Analysis of processes running within a container detected suspicious download of a remote file. | Persistence | Low |
Microsoft Defender for Containers provides security alerts on the cluster level
| **Digital currency mining container detected**<br>(K8S_MaliciousContainerImage) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a container that has an image associated with a digital currency mining tool. | Execution | High | | **Digital currency mining related behavior detected (Preview)**<br>(K8S.NODE_DigitalCurrencyMining) | Analysis of host data detected the execution of a process or command normally associated with digital currency mining. | Execution | High | | **Docker build operation detected on a Kubernetes node (Preview)**<br>(K8S.NODE_ImageBuildOnNode) | Analysis of processes running within a container indicates a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | DefenseEvasion | Low |
-| **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) | Analysis of the Kubernetes audit logs detected an excessive permissions role assignment to your cluster. The listed permissions for the assigned roles are uncommon to the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Microsoft Defender for Cloud. | Privilege Escalation | Low |
+| **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) <sup>[3](#footnote3)</sup> | Analysis of the Kubernetes audit logs detected an excessive permissions role assignment to your cluster. The listed permissions for the assigned roles are uncommon to the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Microsoft Defender for Cloud. | Privilege Escalation | Low |
| **Executable found running from a suspicious location (Preview)**<br>(K8S.NODE_SuspectExecutablePath) | Analysis of host data detected an executable file that is running from a location associated with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised host. | Execution | Medium | | **Execution of hidden file (Preview)**<br>(K8S.NODE_ExecuteHiddenFile) | Analysis of host data indicates that a hidden file was executed by the specified user account. | Persistence, DefenseEvasion | Informational | | **Exposed Docker daemon on TCP socket (Preview)**<br>(K8S.NODE_ExposedDocker) | Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, Docker configuration, does not use encryption or authentication when a TCP socket is enabled. This enables full access to the Docker daemon, by anyone with access to the relevant port. | Execution, Exploitation | Medium |
-| **Exposed Kubeflow dashboard detected**<br>(K8S_ExposedKubeflow) <sup>[2](#footnote2)</sup> | The Kubernetes audit log analysis detected exposure of the Istio Ingress by a load balancer in a cluster that runs Kubeflow. This action might expose the Kubeflow dashboard to the internet. If the dashboard is exposed to the internet, attackers can access it and run malicious containers or code on the cluster. Find more details in the following article: https://aka.ms/exposedkubeflow-blog | Initial Access | Medium |
-| **Exposed Kubernetes dashboard detected**<br>(K8S_ExposedDashboard) <sup>[2](#footnote2)</sup>| Kubernetes audit log analysis detected exposure of the Kubernetes Dashboard by a LoadBalancer service. Exposed dashboard allows an unauthenticated access to the cluster management and poses a security threat. | Initial Access | High |
-| **Exposed Kubernetes service detected**<br>(K8S_ExposedService) <sup>[2](#footnote2)</sup> | The Kubernetes audit log analysis detected exposure of a service by a load balancer. This service is related to a sensitive application that allows high impact operations in the cluster such as running processes on the node or creating new containers. In some cases, this service doesn't require authentication. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Medium |
+| **Exposed Kubeflow dashboard detected**<br>(K8S_ExposedKubeflow) | The Kubernetes audit log analysis detected exposure of the Istio Ingress by a load balancer in a cluster that runs Kubeflow. This action might expose the Kubeflow dashboard to the internet. If the dashboard is exposed to the internet, attackers can access it and run malicious containers or code on the cluster. Find more details in the following article: https://aka.ms/exposedkubeflow-blog | Initial Access | Medium |
+| **Exposed Kubernetes dashboard detected**<br>(K8S_ExposedDashboard) | Kubernetes audit log analysis detected exposure of the Kubernetes Dashboard by a LoadBalancer service. Exposed dashboard allows an unauthenticated access to the cluster management and poses a security threat. | Initial Access | High |
+| **Exposed Kubernetes service detected**<br>(K8S_ExposedService) | The Kubernetes audit log analysis detected exposure of a service by a load balancer. This service is related to a sensitive application that allows high impact operations in the cluster such as running processes on the node or creating new containers. In some cases, this service doesn't require authentication. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Medium |
| **Exposed Redis service in AKS detected**<br>(K8S_ExposedRedis) | The Kubernetes audit log analysis detected exposure of a Redis service by a load balancer. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Low |
-| **Indicators associated with DDOS toolkit detected (Preview)**<br>(K8S.NODE_KnownLinuxDDoSToolkit) | Analysis of processes running within a container detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. | Persistence, LateralMovement, Execution, Exploitation | Medium |
-| **K8S API requests from proxy IP address detected**<br>(K8S_TI_Proxy) <sup>[2](#footnote2)</sup>| Kubernetes audit log analysis detected API requests to your cluster from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when attackers try to hide their source IP. | Execution | Low |
-| **Kubernetes events deleted**<br>(K8S_DeleteEvents) <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup> | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes which contain information about changes in the cluster. Attackers might delete those events for hiding their operations in the cluster. | Defense Evasion | Low |
-| **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low |
+| **Indicators associated with DDOS toolkit detected (Preview)**<br>(K8S.NODE_KnownLinuxDDoSToolkit) <sup>[2](#footnote2)</sup> | Analysis of processes running within a container detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. | Persistence, LateralMovement, Execution, Exploitation | Medium |
+| **K8S API requests from proxy IP address detected**<br>(K8S_TI_Proxy) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected API requests to your cluster from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when attackers try to hide their source IP. | Execution | Low |
+| **Kubernetes events deleted**<br>(K8S_DeleteEvents) <sup>[1](#footnote1)</sup> <sup>[3](#footnote3)</sup> | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes that contain information about changes in the cluster. Attackers might delete those events to hide their operations in the cluster. | Defense Evasion | Low |
+| **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) | Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low |
| **Local host reconnaissance detected (Preview)**<br>(K8S.NODE_LinuxReconnaissance) | Analysis of processes running within a container detected the execution of a command normally associated with common Linux bot reconnaissance. | Discovery | Medium | | **Manipulation of host firewall detected (Preview)**<br>(K8S.NODE_FirewallDisabled) | Analysis of processes running within a container detected possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium | | **Microsoft Defender for Cloud test alert (not a threat). (Preview)**<br>(K8S.NODE_EICAR) | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed. | Execution | High | | **MITRE Caldera agent detected (Preview)**<br>(K8S.NODE_MitreCalderaTools) | Analysis of processes running within a container indicate that a suspicious process was running. This is often associated with the MITRE 54ndc47 agent which could be used maliciously to attack other machines. | Persistence, PrivilegeEscalation, DefenseEvasion, CredentialAccess, Discovery, LateralMovement, Execution, Collection, Exfiltration, Command And Control, Probing, Exploitation | Medium |
-| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) <sup>[2](#footnote2)</sup>| Kubernetes audit log analysis detected a new container in the kube-system namespace that isnΓÇÖt among the containers that normally run in this namespace. The kube-system namespaces should not contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Low |
-| **New high privileges role detected**<br>(K8S_HighPrivilegesRole) <sup>[2](#footnote2)</sup>| Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user\group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low |
+| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a new container in the kube-system namespace that isn't among the containers that normally run in this namespace. The kube-system namespace should not contain user resources. Attackers can use this namespace to hide malicious components. | Persistence | Low |
+| **New high privileges role detected**<br>(K8S_HighPrivilegesRole) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user\group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low |
| **Possible attack tool detected (Preview)**<br>(K8S.NODE_KnownLinuxAttackTool) | Analysis of processes running within a container indicates a suspicious tool ran. This tool is often associated with malicious users attacking others. | Execution, Collection, Command And Control, Probing | Medium | | **Possible backdoor detected (Preview)**<br>(K8S.NODE_LinuxBackdoorArtifact) | Analysis of processes running within a container detected a suspicious file being downloaded and run. This activity has previously been associated with installation of a backdoor. | Persistence, DefenseEvasion, Execution, Exploitation | Medium | | **Possible command line exploitation attempt (Preview)**<br>(K8S.NODE_ExploitAttempt) | Analysis of processes running within a container detected a possible exploitation attempt against a known vulnerability. | Exploitation | Medium |
Microsoft Defender for Containers provides security alerts on the cluster level
| **Privileged container detected**<br>(K8S_PrivilegedContainer) | Kubernetes audit log analysis detected a new privileged container. A privileged container has access to the node's resources and breaks the isolation between containers. If compromised, an attacker can use the privileged container to gain access to the node. | Privilege Escalation | Low | | **Process associated with digital currency mining detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerArtifacts) | Analysis of processes running within a container detected the execution of a process normally associated with digital currency mining. | Execution, Exploitation | Medium | | **Process seen accessing the SSH authorized keys file in an unusual way (Preview)**<br>(K8S.NODE_SshKeyAccess) | An SSH authorized_keys file was accessed in a method similar to known malware campaigns. This access could signify that an actor is attempting to gain persistent access to a machine. | Unknown | Low |
-| **Role binding to the cluster-admin role detected**<br>(K8S_ClusterAdminBinding) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a new binding to the cluster-admin role which gives administrator privileges. Unnecessary administrator privileges might cause privilege escalation in the cluster. | Persistence | Low |
+| **Role binding to the cluster-admin role detected**<br>(K8S_ClusterAdminBinding) | Kubernetes audit log analysis detected a new binding to the cluster-admin role which gives administrator privileges. Unnecessary administrator privileges might cause privilege escalation in the cluster. | Persistence | Low |
| **Screenshot taken on host (Preview)**<br>(K8S.NODE_KnownLinuxScreenshotTool) | Analysis of host/device data detected the use of a screen capture tool. Attackers may use these tools to access private data. | Collection | Low | | **Script extension mismatch detected (Preview)**<br>(K8S.NODE_MismatchedScriptFeatures) | Analysis of processes running within a container detected a mismatch between the script interpreter and the extension of the script file provided as input. This has frequently been associated with attacker script executions. | DefenseEvasion | Medium | | **Security-related process termination detected (Preview)**<br>(K8S.NODE_SuspectProcessTermination) | Analysis of processes running within a container detected attempt to terminate processes related to security monitoring on the container. Attackers will often try to terminate such processes using predefined scripts post-compromise. | Persistence | Low |
Microsoft Defender for Containers provides security alerts on the cluster level
<sup><a name="footnote1"></a>1</sup>: **Limitations on GKE clusters**: GKE uses a Kubernetes audit policy that doesn't support all alert types. As a result, security alerts that are based on Kubernetes audit events aren't supported for GKE clusters.
-<sup><a name="footnote2"></a>2</sup>: This alert is supported by Windows.
+<sup><a name="footnote2"></a>2</sup>: This alert is supported on Windows nodes/containers.
+
+<sup><a name="footnote3"></a>3</sup>: Control plane alert (OS agnostic).
## <a name="alerts-sql-db-and-warehouse"></a>Alerts for SQL Database and Azure Synapse Analytics
defender-for-cloud Custom Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-security-policies.md
zone_pivot_groups: manage-asc-initiatives
To help secure your systems and environment, Microsoft Defender for Cloud generates security recommendations. These recommendations are based on industry best practices, which are incorporated into the generic, default security policy supplied to all customers. They can also come from Defender for Cloud's knowledge of industry and regulatory standards.
-With this feature, you can add your own *custom* initiatives. You'll then receive recommendations if your environment doesn't follow the policies you create. Any custom initiatives you create will appear alongside the built-in initiatives in the regulatory compliance dashboard, as described in the tutorial [Improve your regulatory compliance](regulatory-compliance-dashboard.md).
+With this feature, you can add your own *custom* initiatives. Although custom initiatives are not included in the secure score, you'll receive recommendations if your environment doesn't follow the policies you create. Any custom initiatives you create are shown in the list of all recommendations, and you can filter by initiative to see the recommendations for your initiative. They are also shown alongside the built-in initiatives in the regulatory compliance dashboard, as described in the tutorial [Improve your regulatory compliance](regulatory-compliance-dashboard.md).
As discussed in [the Azure Policy documentation](../governance/policy/concepts/definition-structure.md#definition-location), when you specify a location for your custom initiative, it must be a management group or a subscription.
In the following examples, replace these variables:
- **{policyAssignmentName}** enter the name of the relevant policy assignment - **{name}** enter your name, or the name of the administrator who approved the policy change
-This example shows you how to assign the built-in Defender for Cloud initiative on a subscription or management group
+This example shows you how to assign the built-in Defender for Cloud initiative on a subscription or management group:
``` PUT
This example shows you how to assign the built-in Defender for Cloud initiative
} ```
+This example shows you how to assign a custom Defender for Cloud initiative on a subscription or management group:
+
+ ```
+ PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/policySetDefinitions/{policySetDefinitionName}?api-version=2021-06-01
+
+ Request Body (JSON)
+
+ {
+ "properties": {
+ "displayName": "Cost Management",
+ "description": "Policies to enforce low cost storage SKUs",
+ "metadata": {
+ "category": "Cost Management",
+ "ASC": "true"
+ },
+ "parameters": {
+ "namePrefix": {
+ "type": "String",
+ "defaultValue": "myPrefix",
+ "metadata": {
+ "displayName": "Prefix to enforce on resource names"
+ }
+ }
+ },
+ "policyDefinitions": [
+ {
+ "policyDefinitionId": "/subscriptions/ae640e6b-ba3e-4256-9d62-2993eecfa6f2/providers/Microsoft.Authorization/policyDefinitions/7433c107-6db4-4ad1-b57a-a76dce0154a1",
+ "policyDefinitionReferenceId": "Limit_Skus",
+ "parameters": {
+ "listOfAllowedSKUs": {
+ "value": [
+ "Standard_GRS",
+ "Standard_LRS"
+ ]
+ }
+ }
+ },
+ {
+ "policyDefinitionId": "/subscriptions/ae640e6b-ba3e-4256-9d62-2993eecfa6f2/providers/Microsoft.Authorization/policyDefinitions/ResourceNaming",
+ "policyDefinitionReferenceId": "Resource_Naming",
+ "parameters": {
+ "prefix": {
+ "value": "[parameters('namePrefix')]"
+ },
+ "suffix": {
+ "value": "-LC"
+ }
+ }
+ }
+ ]
+ }
+ }
+ ```
 This example shows you how to remove an assignment: ``` DELETE https://management.azure.com/{scope}/providers/Microsoft.Authorization/policyAssignments/{policyAssignmentName}?api-version=2018-05-01
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
Yes. If you have an organizational need to ignore a finding, rather than remedia
### Why is Defender for Cloud alerting me to vulnerabilities about an image that isn't in my registry? Defender for Cloud provides vulnerability assessments for every image pushed or pulled in a registry. Some images may reuse tags from an image that was already scanned. For example, you may reassign the tag "Latest" every time you add an image to a digest. In such cases, the 'old' image does still exist in the registry and may still be pulled by its digest. If the image has security findings and is pulled, it'll expose security vulnerabilities.
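To make the tag-reuse behavior concrete, here's a minimal Python sketch — a hypothetical in-memory model, not a real registry API — of how mutable tags point at immutable, content-addressed digests:

```python
# Hypothetical model of a container registry: tags are mutable pointers,
# digests are immutable content-addressed entries.
registry = {"tags": {}, "digests": {}}

def push(tag: str, digest: str, image: bytes) -> None:
    registry["digests"][digest] = image  # the digest entry is never removed by retagging
    registry["tags"][tag] = digest       # the tag simply moves to the newest digest

push("latest", "sha256:aaa", b"v1")
push("latest", "sha256:bbb", b"v2")

assert registry["tags"]["latest"] == "sha256:bbb"  # tag now points at the new image
assert "sha256:aaa" in registry["digests"]         # the 'old' image is still pullable by digest
```

This is why an image that no tag points to can still be pulled by digest, and why its security findings still matter.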
+## Does Microsoft Defender for Containers support AKS clusters with Virtual Machine Scale Sets (VMSS)?
+Yes.
## Next steps
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 03/20/2022 Last updated : 03/30/2022 # What's new in Microsoft Defender for Cloud?
Updates in March include:
- [Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013](#legacy-implementation-of-iso-27001-is-being-replaced-with-new-iso-270012013) - [Deprecated Microsoft Defender for IoT device recommendations](#deprecated-microsoft-defender-for-iot-device-recommendations) - [Deprecated Microsoft Defender for IoT device alerts](#deprecated-microsoft-defender-for-iot-device-alerts)
+- [Posture management and threat protection for AWS and GCP released for general availability (GA)](#posture-management-and-threat-protection-for-aws-and-gcp-released-for-general-availability-ga)
+ ### Deprecated the recommendations to install the network traffic data collection agent Changes in our roadmap and priorities have removed the need for the network traffic data collection agent. Consequently, the following two recommendations and their related policies were deprecated.
The following recommendations are deprecated:
All Microsoft Defender for IoT device alerts are no longer visible in Microsoft Defender for Cloud. These alerts are still available on Microsoft Defender for IoT's Alert page, and in Microsoft Sentinel.
+### Posture management and threat protection for AWS and GCP released for general availability (GA)
+
+- **Defender for Cloud's CSPM features** extend to your AWS and GCP resources. This agentless plan assesses your multicloud resources according to cloud-specific security recommendations that are included in your secure score. The resources are assessed for compliance using the built-in standards. Defender for Cloud's asset inventory page is a multicloud-enabled feature that allows you to manage your AWS resources alongside your Azure resources.
+
+- **Microsoft Defender for servers** brings threat detection and advanced defenses to your compute instances in AWS and GCP. The Defender for servers plan includes an integrated license for Microsoft Defender for Endpoint, vulnerability assessment scanning, and more. Learn about all of the [supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md). Automatic onboarding capabilities allow you to easily connect any existing or new compute instances discovered in your environment.
+
+Learn how to protect and connect your [AWS environment](quickstart-onboard-aws.md) and [GCP organization](quickstart-onboard-gcp.md) with Microsoft Defender for Cloud.
+ ## February 2022 Updates in February include:
defender-for-iot Concept Supported Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-supported-protocols.md
This article lists the protocols that are supported by default in Microsoft Defe
Defender for IoT can detect the following protocols when identifying assets and devices in your network:
-|Brand |Protocols |
+|Brand / Vendor |Protocols |
||| |**ABB** | ABB 800xA DCS (IEC61850 MMS including ABB extension) | |**ASHRAE** | BACnet<br> BACnet BACapp<br> BACnet BVLC |
Defender for IoT can detect the following protocols when identifying assets and
Defender for IoT can detect the following protocols using active monitoring, such as ping sweeps and queries:
-|Brand |Protocols |
+|Brand / Vendor |Protocols |
||| |**IETF** | Ping Sweep <br>SNMP Network Layout Query<br>SNMP Query | |**Microsoft** | Windows WMI Query (req. WMI/WinRM): hardware, BIOS, version, software, patches |
defender-for-iot How To Set Up High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-high-availability.md
This deployment is implemented with an on-premises management console pair that
When a primary and secondary on-premises management console is paired: -- An on-premises management console SSL certificate is applied to create a secure connection between the primary and secondary appliances. The SLL may be the self-signed certificate installed by default or a certificate installed by the customer.
+- An on-premises management console SSL certificate is applied to create a secure connection between the primary and secondary appliances. The SSL certificate may be the self-signed certificate installed by default or a certificate installed by the customer.
When validation is `ON`, the appliance should be able to establish connection to the CRL server defined by the certificate.
digital-twins Concepts Ontologies Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies-extend.md
description: Learn about the reasons and strategies behind extending an ontology Previously updated : 12/13/2021 Last updated : 3/29/2022
In the DTDL-based RealEstateCore ontology, the Space hierarchy is used to define
A portion of the hierarchy looks like the diagram below. For more information about the RealEstateCore ontology, see [Digital Twins Definition Language-based RealEstateCore ontology for smart buildings](https://github.com/Azure/opendigitaltwins-building) on GitHub.
To extend the industry ontology with this new concept, create a new interface th
After adding the focus room interface, the extended hierarchy shows the new room type. ### Add extra capabilities to existing interfaces
In this section, you'll see two examples:
* If you're building a solution that displays 3D drawings of spaces that you already have in an existing system, you might want to associate each digital twin to its 3D drawing (by ID) so that when the solution displays information about the space, it can also retrieve the 3D drawing from the existing system. * If your solution needs to track the online/offline status of conference rooms, then you might want to track the conference room status for use in display or queries.
-Both examples can be implemented with new properties: a `drawingId` property that associates the 3D drawing with the digital twin and an "online" property that indicates whether the conference room is online or not.
+Both examples can be implemented with new properties: a `drawingId` property that associates the 3D drawing with the digital twin and an `online` property that indicates whether the conference room is online or not.
Typically, you don't want to modify the industry ontology directly because you want to be able to incorporate updates to it in your solution in the future (which would overwrite your additions). Instead, these kinds of additions can be made in your own interface hierarchy that extends from the DTDL-based RealEstateCore ontology. Each interface you create uses multiple interface inheritances to extend its parent RealEstateCore interface and its parent interface from your extended interface hierarchy. This approach enables you to make use of the industry ontology and your additions together.
-To extend the industry ontology, you create your own interfaces that extend from the interfaces in the industry ontology and add the new capabilities to your extended interfaces. For each interface that you want to extend, you create a new interface. The extended interfaces are written in DTDL (see the DTDL for Extended Interfaces section later in this document).
+To extend the industry ontology, create your own interfaces that extend from the interfaces in the industry ontology and add the new capabilities to your extended interfaces. For each interface that you want to extend, create a new interface. The extended interfaces are written in DTDL (see [DTDL for Extended Interfaces](#dtdl-for-extended-interfaces) later in this document).
-After extending the portion of the hierarchy shown above, the extended hierarchy looks like the diagram below. Here the extended Space interface adds the `drawingId` property that will contain an ID that associates the digital twin with the 3D drawing. Additionally, the ConferenceRoom interface adds an "online" property that will contain the online status of the conference room. Through inheritance, the ConferenceRoom interface contains all capabilities from the RealEstateCore ConferenceRoom interface and all capabilities from the extended Space interface.
+After extending the portion of the hierarchy shown above, the extended hierarchy looks like the diagram below. Here the extended Space interface adds the `drawingId` property that will contain an ID that associates the digital twin with the 3D drawing. Additionally, the ConferenceRoom interface adds an `online` property that will contain the online status of the conference room. Through inheritance, the ConferenceRoom interface contains all capabilities from the RealEstateCore ConferenceRoom interface and all capabilities from the extended Space interface.
+
+You don't have to extend every interface in the industry ontology, only those where you need to add new capabilities. For example, if you need to add a new capability, such as an `arterial` property to the Hallway interface, you can extend that interface without extending other interfaces that also extend from Room.
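As an illustration, the extended Hallway interface described above might look like the following DTDL sketch. The `@id` and `extends` identifiers here are hypothetical placeholders, not the actual RealEstateCore DTMIs:

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:com:example:Hallway;1",
  "@type": "Interface",
  "displayName": "Hallway (extended)",
  "extends": "dtmi:com:example:rec:Hallway;1",
  "contents": [
    {
      "@type": "Property",
      "name": "arterial",
      "schema": "boolean"
    }
  ]
}
```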
++
+#### Relationships to extended interfaces
+
+Extended interfaces can also be used as the target for relationships, even if the relationship is originally modeled to target a base interface. As an example, in the DTDL-based RealEstateCore ontology, the Apartment interface contains a Relationship named *includes* that targets a Room interface (shown in the diagram below). This lets you create a graph of rooms to make up the apartment.
+
+Based on the portion of the Room hierarchy from the [previous section](#add-extra-capabilities-to-existing-interfaces), an Apartment digital twin can include Room-type twins, and Hallway is an extension of Room (so an Apartment can include Hallways). This also means that an Apartment can include an extended Hallway with the `arterial` property, because an extended Hallway counts as a Hallway as it's referenced in the original relationships.
+ ## Using the extended space hierarchy
-When you create digital twins using the extended Space hierarchy, each digital twin's model will be one from the extended Space hierarchy (not the original industry ontology) and will include all the capabilities from the industry ontology and the extended interfaces though interface inheritance.
+When you create digital twins using the extended Space hierarchy, each digital twin's model will be one from the extended Space hierarchy (not the original industry ontology) and will include all the capabilities from the industry ontology and the extended interfaces through interface inheritance.
Each digital twin's model will be an interface from the extended hierarchy, shown in the diagram below. When querying for digital twins using the model ID (the `IS_OF_MODEL` operator), the model IDs from the extended hierarchy should be used. For example, `SELECT * FROM DIGITALTWINS WHERE IS_OF_MODEL('dtmi:com:example:Office;1')`.
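For instance, the query string shown above can be built programmatically. The helper below is a hypothetical sketch (not part of any Azure SDK); `IS_OF_MODEL` matches the given model and any model extending it, and accepts an optional `exact` flag to match only the exact model:

```python
def is_of_model_query(model_id: str, exact: bool = False) -> str:
    # Build an Azure Digital Twins query selecting twins of the given model.
    # IS_OF_MODEL also matches models that extend model_id unless exact is set.
    suffix = ", exact" if exact else ""
    return f"SELECT * FROM DIGITALTWINS WHERE IS_OF_MODEL('{model_id}'{suffix})"

print(is_of_model_query("dtmi:com:example:Office;1"))
# SELECT * FROM DIGITALTWINS WHERE IS_OF_MODEL('dtmi:com:example:Office;1')
```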
The DTDL for the extended interfaces, limited to the portion discussed above, wo
## Next steps
-Continue on the path for developing models based on ontologies: [Using ontology strategies in a model development path](concepts-ontologies.md#using-ontology-strategies-in-a-model-development-path).
+Continue on the path for developing models based on ontologies: [Using ontology strategies in a model development path](concepts-ontologies.md#using-ontology-strategies-in-a-model-development-path).
digital-twins How To Set Up Instance Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-set-up-instance-powershell.md
-# Mandatory fields.
-Title: Set up an instance and authentication (PowerShell)
-description: See how to set up an instance of the Azure Digital Twins service using Azure PowerShell
-Previously updated : 02/24/2022
-# Optional fields. Don't forget to remove # if you need a field.
-
-# Set up an Azure Digital Twins instance and authentication (PowerShell)
-
-This article covers the steps to set up a new Azure Digital Twins instance, including creating the instance and setting up authentication. After completing this article, you'll have an Azure Digital Twins instance ready to start programming against.
-
-This version of this article goes through these steps manually, one by one, using [Azure PowerShell](/powershell/azure/new-azureps-module-az).
--
-## Prepare your environment
-
-1. First, choose where to run the commands in this article. You can choose to run Azure PowerShell commands using a local installation of Azure PowerShell, or in a browser window using [Azure Cloud Shell](https://shell.azure.com).
- * If you choose to use Azure PowerShell locally:
- 1. [Install the Az PowerShell module](/powershell/azure/install-az-ps).
- 1. Open a PowerShell window on your machine.
- 1. Connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
- * If you choose to use Azure Cloud Shell:
- 1. See [Overview of Azure Cloud Shell](../cloud-shell/overview.md) for more information about Cloud Shell.
- 1. [Open a Cloud Shell window in your browser](https://shell.azure.com).
- 1. In the Cloud Shell icon bar, make sure your Cloud Shell is set to run the PowerShell version.
-
- :::image type="content" source="media/how-to-set-up-instance/cloud-shell/cloud-shell-powershell.png" alt-text="Screenshot of the Cloud Shell window in the Azure portal showing selection of the PowerShell version.":::
-
-1. If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources should be billed. Select a specific subscription using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
-
- ```azurepowershell-interactive
- Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
- ```
-
-1. If this is your first time using Azure Digital Twins with this subscription, you must register the `Microsoft.DigitalTwins` resource provider. (If you're not sure, it's ok to run it again even if you've done it sometime in the past.)
-
- ```azurepowershell-interactive
- Register-AzResourceProvider -ProviderNamespace Microsoft.DigitalTwins
- ```
-
-1. Use the following command to install the `Az.DigitalTwins` PowerShell module.
-
- ```azurepowershell-interactive
- Install-Module -Name Az.DigitalTwins
- ```
-
-## Create the Azure Digital Twins instance
-
-In this section, you'll create a new instance of Azure Digital Twins using Azure PowerShell. You'll need to provide:
-
-* An [Azure resource group](../azure-resource-manager/management/overview.md) where the instance will be deployed. If you don't already have an existing resource group, you can create one using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet:
-
- ```azurepowershell-interactive
- New-AzResourceGroup -Name <name-for-your-resource-group> -Location <region>
- ```
-
-* A region for the deployment. To see what regions support Azure Digital Twins, visit [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=digital-twins).
-* A name for your instance. The name of the new instance must be unique within the region for your subscription. If your subscription has another Azure Digital Twins instance in the region that's already using the specified name, you'll be asked to pick a different name.
-
-Use your values in the following command to create the instance:
-
-```azurepowershell-interactive
-New-AzDigitalTwinsInstance -ResourceGroupName <your-resource-group> -ResourceName <name-for-your-Azure-Digital-Twins-instance> -Location <region>
-```
-
-### Verify success and collect important values
-
-If the instance was created successfully, the result looks similar to the following output containing information about the resource you've created:
-
-```Output
-Location Name Type
- -
-<region> <name-for-your-Azure-Digital-Twins-instance> Microsoft.DigitalTwins/digitalTwinsInstances
-```
-
-Next, display the properties of your new instance by running `Get-AzDigitalTwinsInstance` and piping to `Select-Object -Property *`, like this:
-
-```azurepowershell-interactive
-Get-AzDigitalTwinsInstance -ResourceGroupName <your-resource-group> -ResourceName <name-for-your-Azure-Digital-Twins-instance> |
- Select-Object -Property *
-```
-
-> [!TIP]
-> You can use this command to see all the properties of your instance at any time.
-
-Note the Azure Digital Twins instance's **host name**, **name**, and **resource group**. These are important values that you may need as you continue working with your Azure Digital Twins instance, to set up authentication and related Azure resources. If other users will be programming against the instance, you should share these values with them.
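If you plan to script against these values, one approach is to capture the instance object in a variable and read its properties. This is an illustrative sketch — it assumes the property names exposed by the Az.DigitalTwins module (such as `HostName`); verify them against the `Select-Object -Property *` output shown earlier:

```azurepowershell-interactive
$instance = Get-AzDigitalTwinsInstance -ResourceGroupName <your-resource-group> -ResourceName <name-for-your-Azure-Digital-Twins-instance>
$hostName = $instance.HostName
Write-Output "Instance host name: $hostName"
```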
-
-You now have an Azure Digital Twins instance ready to go. Next, you'll give the appropriate Azure user permissions to manage it.
-
-## Set up user access permissions
--
-### Prerequisites: Permission requirements
-
-### Assign the role
-
-To give a user permissions to manage an Azure Digital Twins instance, you must assign them the **Azure Digital Twins Data Owner** role within the instance.
-
-First, determine the Object Id for the Azure AD account of the user that should be assigned the role. You can find this value using the [Get-AzAdUser](/powershell/module/az.resources/get-azaduser) cmdlet, by passing in the user principal name on the Azure AD account to retrieve their Object Id (and other user information). In most cases, the user principal name will match the user's email on the Azure AD account.
-
-```azurepowershell-interactive
-Get-AzADUser -UserPrincipalName <Azure-AD-user-principal-name-of-user-to-assign>
-```
-
-Next, use the Object Id in the following command to assign the role. The command also requires you to enter the same subscription ID, resource group name, and Azure Digital Twins instance name that you chose earlier when creating the instance. The command must be run by a user with [sufficient permissions](#prerequisites-permission-requirements) in the Azure subscription.
-
-```azurepowershell-interactive
-$Params = @{
- ObjectId = '<Azure-AD-user-object-ID-of-user-to-assign>'
- RoleDefinitionName = 'Azure Digital Twins Data Owner'
- Scope = '/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DigitalTwins/digitalTwinsInstances/<name-for-your-Azure-Digital-Twins-instance>'
-}
-New-AzRoleAssignment @Params
-```
-
-This command outputs information about the role assignment that's been created.
-
-### Verify success
--
-You now have an Azure Digital Twins instance ready to go, and have assigned permissions to manage it.
-
-## Next steps
-
-See how to connect a client application to your instance with authentication code:
-* [Write app authentication code](how-to-authenticate-client.md)
dms Migration Dms Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-dms-powershell-cli.md
$migOpId = az datamigration sql-managed-instance show --managed-instance-name "m
az datamigration sql-managed-instance cutover --managed-instance-name "mySQLMI" --resource-group "myRG" --target-db-name "AdventureWorks2008" --migration-operation-id $migOpId
```
+> [!TIP]
+> If you receive the error "The subscription is not registered to use namespace 'Microsoft.DataMigration'. See https://aka.ms/rps-not-found for how to register subscriptions.", run the following command:
+>
+> ```azurepowershell
+> Register-AzResourceProvider -ProviderNamespace "Microsoft.DataMigration"
+> ```
+
+## Next steps
+
+- For Azure PowerShell reference documentation for SQL Server database migrations, see [Az.DataMigration](/powershell/module/az.datamigration).
event-grid Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/cli-samples.md
- Title: Azure CLI samples - Event Grid | Microsoft Docs
-description: This article provides a table with links to Azure CLI script samples for Event Grid.
- Previously updated : 07/22/2021 ---
-# Azure CLI samples for Event Grid
-
-The following table includes links to Azure CLI samples for Event Grid.
-
-## Event Grid subscriptions
-
-- [Subscribe to Azure subscription](scripts/event-grid-cli-azure-subscription.md) - Subscribes to events for an Azure subscription.
-- [Subscribe to Blob storage](scripts/event-grid-cli-blob.md) - Subscribes to events for a Blob storage account.
-- [Subscribe to custom topic](scripts/event-grid-cli-subscribe-custom-topic.md) - Subscribes to events for a custom topic.
-- [Subscribe to resource group](scripts/event-grid-cli-resource-group.md) - Subscribes to events for a resource group.
-- [Subscribe to resource group and filter for a resource](scripts/event-grid-cli-resource-group-filter.md) - Subscribes to events for a resource group and filters events for a resource.
-## Event Grid topics
--- [Create custom topic](scripts/event-grid-cli-create-custom-topic.md) - Creates an Event Grid custom topic, and returns the endpoint and key.
event-grid Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/concepts.md
A subscription tells Event Grid which events on a topic you're interested in rec
For examples of creating subscriptions, see:
-* [Azure CLI samples for Event Grid](cli-samples.md)
+* [Azure CLI samples for Event Grid](scripts/event-grid-cli-subscribe-custom-topic.md)
* [Azure PowerShell samples for Event Grid](powershell-samples.md)
* [Azure Resource Manager templates for Event Grid](template-samples.md)
event-grid Event Schema Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-blob-storage.md
The data object has the following properties:
| [Quickstart: route Blob storage events to a custom web endpoint with Azure CLI](../storage/blobs/storage-blob-event-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json) | Shows how to use Azure CLI to send blob storage events to a WebHook. |
| [Quickstart: route Blob storage events to a custom web endpoint with PowerShell](../storage/blobs/storage-blob-event-quickstart-powershell.md?toc=%2fazure%2fevent-grid%2ftoc.json) | Shows how to use Azure PowerShell to send blob storage events to a WebHook. |
| [Quickstart: create and route Blob storage events with the Azure portal](blob-event-quickstart-portal.md) | Shows how to use the portal to send blob storage events to a WebHook. |
-| [Azure CLI: subscribe to events for a Blob storage account](./scripts/event-grid-cli-blob.md) | Sample script that subscribes to event for a Blob storage account. It sends the event to a WebHook. |
+| [Azure CLI: subscribe to events for a Blob storage account](./scripts/event-grid-cli-subscribe-custom-topic.md) | Sample script that subscribes to events for a Blob storage account. It sends the event to a WebHook. |
| [PowerShell: subscribe to events for a Blob storage account](./scripts/event-grid-powershell-blob.md) | Sample script that subscribes to events for a Blob storage account. It sends the event to a WebHook. |
| [Resource Manager template: Create Blob storage and subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.eventgrid/event-grid-subscription-and-storage) | Deploys an Azure Blob storage account and subscribes to events for that storage account. It sends events to a WebHook. |
| [Overview: reacting to Blob storage events](../storage/blobs/storage-blob-event-overview.md) | Overview of integrating Blob storage with Event Grid. |
event-grid Event Grid Cli Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/event-grid-cli-azure-subscription.md
- Title: Azure CLI script sample - Subscribe to Azure subscription | Microsoft Docs
-description: This article provides a sample Azure CLI script that shows how to subscribe to Azure Event Grid events using Azure CLI.
- Previously updated : 09/15/2021 ---
-# Subscribe to events for an Azure subscription with Azure CLI
-
-This script creates an Event Grid subscription to the events for an Azure subscription.
---
-The preview sample script requires the Event Grid extension. To install, run `az extension add --name eventgrid`.
-
-## Sample script - stable
-
-[!code-azurecli[main](../../../cli_scripts/event-grid/subscribe-to-azure-subscription/subscribe-to-azure-subscription.sh "Subscribe to Azure subscription")]
-
-## Sample script - preview extension
-
-[!code-azurecli[main](../../../cli_scripts/event-grid/subscribe-to-azure-subscription-preview/subscribe-to-azure-subscription-preview.sh "Subscribe to Azure subscription")]
-
-## Script explanation
-
-This script uses the following command to create the event subscription. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create) | Create an Event Grid subscription. |
-| [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create) - extension version | Create an Event Grid subscription. |
-
-## Next steps
-
-* For information about querying subscriptions, see [Query Event Grid subscriptions](../query-event-subscriptions.md).
-* For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
event-grid Event Grid Cli Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/event-grid-cli-blob.md
- Title: Azure CLI script sample - Subscribe to Blob storage account | Microsoft Docs
-description: This article provides a sample Azure CLI script that shows how to subscribe to events for a Azure Blob Storage account.
- Previously updated : 09/15/2021 ---
-# Subscribe to events for a Blob storage account with Azure CLI
-
-This script creates an Event Grid subscription to the events for a Blob storage account.
---
-## Sample script
-
-[!code-azurecli[main](../../../cli_scripts/event-grid/subscribe-to-blob-storage/subscribe-to-blob-storage.sh "Subscribe to Blob storage")]
-
-## Script explanation
-
-This script uses the following command to create the event subscription. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create) | Create an Event Grid subscription. |
-| [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create) - extension version | Create an Event Grid subscription. |
-
-## Next steps
-
-* For information about querying subscriptions, see [Query Event Grid subscriptions](../query-event-subscriptions.md).
-* For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
event-grid Event Grid Cli Create Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/event-grid-cli-create-custom-topic.md
- Title: Azure CLI script sample - Create custom topic | Microsoft Docs
-description: This article provides a sample Azure CLI script that shows how to create an Azure Event Grid custom topic.
- Previously updated : 09/15/2021 ---
-# Create Event Grid custom topic with Azure CLI
-
-This script creates an Event Grid custom topic.
---
-## Sample script
-
-[!code-azurecli[main](../../../cli_scripts/event-grid/create-custom-topic/create-custom-topic.sh "Create custom topic")]
-
-## Script explanation
-
-This script uses the following command to create the custom topic. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [az eventgrid topic create](/cli/azure/eventgrid/topic#az-eventgrid-topic-create) | Create an Event Grid custom topic. |
--
-## Next steps
-
-* For information about querying subscriptions, see [Query Event Grid subscriptions](../query-event-subscriptions.md).
-* For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
event-grid Event Grid Cli Resource Group Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/event-grid-cli-resource-group-filter.md
- Title: Azure CLI - subscribe to resource group & filter by resource
-description: This article provides a sample Azure CLI script that shows how to subscribe to Event Grid events for a resource and filter for a resource.
- Previously updated : 09/15/2021---
-# Subscribe to events for a resource group and filter for a resource with Azure CLI
-
-This script creates an Event Grid subscription to the events for a resource group. It uses a filter to get only events for a specified resource in the resource group.
---
-The preview sample script requires the Event Grid extension. To install, run `az extension add --name eventgrid`.
-
-## Sample script - stable
-
-[!code-azurecli[main](../../../cli_scripts/event-grid/filter-events/filter-events.sh "Subscribe to Azure subscription")]
-
-## Sample script - preview extension
-
-[!code-azurecli[main](../../../cli_scripts/event-grid/filter-events-preview/filter-events-preview.sh "Subscribe to Azure subscription")]
-
-## Script explanation
-
-This script uses the following command to create the event subscription. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create) | Create an Event Grid subscription. |
-| [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create) - extension version | Create an Event Grid subscription. |
-
-## Next steps
-
-* For information about querying subscriptions, see [Query Event Grid subscriptions](../query-event-subscriptions.md).
-* For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
event-grid Event Grid Cli Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/event-grid-cli-resource-group.md
- Title: Azure CLI script sample - Subscribe to resource group | Microsoft Docs
-description: This article provides a sample Azure CLI script that shows how to subscribe to Azure Event Grid events for a resource group.
- Previously updated : 09/15/2021 ---
-# Subscribe to events for a resource group with Azure CLI
-
-This script creates an Event Grid subscription to the events for a resource group.
---
-The preview sample script requires the Event Grid extension. To install, run `az extension add --name eventgrid`.
-
-## Sample script - stable
-
-[!code-azurecli[main](../../../cli_scripts/event-grid/subscribe-to-resource-group/subscribe-to-resource-group.sh "Subscribe to resource group")]
-
-## Sample script - preview extension
-
-[!code-azurecli[main](../../../cli_scripts/event-grid/subscribe-to-resource-group-preview/subscribe-to-resource-group-preview.sh "Subscribe to resource group")]
-
-## Script explanation
-
-This script uses the following command to create the event subscription. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create) | Create an Event Grid subscription. |
-| [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create) - extension version | Create an Event Grid subscription. |
-
-## Next steps
-
-* For information about querying subscriptions, see [Query Event Grid subscriptions](../query-event-subscriptions.md).
-* For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
event-grid Event Grid Cli Subscribe Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/event-grid-cli-subscribe-custom-topic.md
Title: Azure CLI script sample - Subscribe to custom topic | Microsoft Docs
-description: This article provides a sample Azure CLI script that shows how to subscribe to Event Grid events for a custom topic.
+ Title: Azure CLI script sample - Create custom topic and send event | Microsoft Docs
+description: This article provides a sample Azure CLI script that shows how to create a custom topic and send an event to the custom topic using Azure CLI.
ms.devlang: azurecli Previously updated : 09/15/2021 Last updated : 03/29/2022
-# Subscribe to events for a custom topic with Azure CLI
+# Create custom topic and send an event with Azure CLI
-This script creates an Event Grid subscription to the events for a custom topic.
-
+This article provides a sample script that shows how to create an Event Grid custom topic and send an event to it using Azure CLI.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-The preview sample script requires the Event Grid extension. To install, run `az extension add --name eventgrid`.
+
+## Sample script
+
-## Sample script - stable
+### Run the script
-[!code-azurecli[main](../../../cli_scripts/event-grid/subscribe-to-custom-topic/subscribe-to-custom-topic.sh "Subscribe to custom topic")]
-## Sample script - preview extension
+## Clean up resources
-[!code-azurecli[main](../../../cli_scripts/event-grid/subscribe-to-custom-topic-preview/subscribe-to-custom-topic-preview.sh "Subscribe to custom topic")]
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following command to create the event subscription. Each command in the table links to command-specific documentation.
event-grid Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/template-samples.md
For the JSON syntax and properties to use in a template, see [Microsoft.EventGri
See the following samples:

- [PowerShell samples](powershell-samples.md)
-- [CLI samples](cli-samples.md)
+- [CLI samples](scripts/event-grid-cli-subscribe-custom-topic.md)
firewall-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/overview.md
Azure Firewall Manager has the following known issues:
|Issue |Description |Mitigation |
|---|---|---|
|Traffic splitting|Microsoft 365 and Azure Public PaaS traffic splitting isn't currently supported. As such, selecting a third-party provider for V2I or B2I also sends all Azure Public PaaS and Microsoft 365 traffic via the partner service.|Investigating traffic splitting at the hub.|
-|One secured virtual hub per region|You can't have more than one secured virtual hub per region.|Create multiple virtual WANs in a region.|
|Base policies must be in same region as local policy|Create all your local policies in the same region as the base policy. You can still apply a policy that was created in one region on a secured hub from another region.|Investigating|
|Filtering inter-hub traffic in secure virtual hub deployments|Secured Virtual Hub to Secured Virtual Hub communication filtering isn't yet supported. However, hub to hub communication still works if private traffic filtering via Azure Firewall isn't enabled.|Investigating|
|Branch to branch traffic with private traffic filtering enabled|Branch to branch traffic isn't supported when private traffic filtering is enabled.|Investigating.<br><br>Don't secure private traffic if branch to branch connectivity is critical.|
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Previously updated : 03/17/2022 Last updated : 03/30/2022 # Customer intent: As an administrator, I want to evaluate Azure Firewall so I can determine if I want to use it.
Azure Firewall Premium has the following known issues:
|Issue |Description |Mitigation |
|---|---|---|
|ESNI support for FQDN resolution in HTTPS|Encrypted SNI isn't supported in HTTPS handshake.|Today only Firefox supports ESNI through custom configuration. Suggested workaround is to disable this feature.|
-|Client Certificates (TLS)|Client certificates are used to build a mutual identity trust between the client and the server. Client certificates are used during a TLS negotiation. Azure firewall renegotiates a connection with the server and has no access to the private key of the client certificates.|None|
+|Client Certificate Authentication is not supported|Client certificates are used to build a mutual identity trust between the client and the server. Client certificates are used during a TLS negotiation. Azure Firewall renegotiates a connection with the server and has no access to the private key of the client certificates.|None|
|QUIC/HTTP3|QUIC is the new major version of HTTP. It's a UDP-based protocol over 80 (PLAIN) and 443 (SSL). FQDN/URL/TLS inspection won't be supported.|Configure passing UDP 80/443 as network rules.|
|Untrusted customer signed certificates|Customer signed certificates are not trusted by the firewall once received from an intranet-based web server.|A fix is being investigated.|
|Wrong source IP address in Alerts with IDPS for HTTP (without TLS inspection).|When plain text HTTP traffic is in use, and IDPS issues a new alert, and the destination is a public IP address, the displayed source IP address is wrong (the internal IP address is displayed instead of the original IP address).|A fix is being investigated.|
firewall Premium Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-certificates.md
Previously updated : 02/03/2022 Last updated : 03/07/2022
There are three types of certificates used in a typical deployment:
A certificate authority can issue multiple certificates in the form of a tree structure. A root certificate is the top-most certificate of the tree.
-Azure Firewall Premium can intercept outbound HTTP/S traffic and auto-generate a server certificate for `www.website.com`. This certificate is generated using the Intermediate CA certificate that you provide. End-user browser and client applications must trust your organization Root CA certificate or intermediate CA certificate for this procedure to work.
+Azure Firewall Premium can intercept outbound HTTP/S traffic and auto-generate a server certificate for `www.website.com`. This certificate is generated using the Intermediate CA certificate that you provide. End-user browsers and client applications (IaaS, PaaS, and other workloads) must trust your organization's Root CA certificate or intermediate CA certificate for this procedure to work.
:::image type="content" source="media/premium-certificates/certificate-process.png" alt-text="Certificate process":::
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
Previously updated : 01/19/2022 Last updated : 03/30/2022
Azure Firewall Premium includes the following features:
The TLS (Transport Layer Security) protocol primarily provides cryptography for privacy, integrity, and authenticity using certificates between two or more communicating applications. It runs in the application layer and is widely used to encrypt the HTTP protocol.
-Encrypted traffic has a possible security risk and can hide illegal user activity and malicious traffic. Azure Firewall Premium terminates and inspects TLS connections to detect, alert, and mitigate malicious activity in HTTPS.
+Encrypted traffic has a possible security risk and can hide illegal user activity and malicious traffic. Azure Firewall without TLS inspection (as shown in the following diagram) has no visibility into the data that flows in the encrypted TLS tunnel, and so can't provide full protection coverage.
+
+The second diagram shows how Azure Firewall Premium terminates and inspects TLS connections to detect, alert, and mitigate malicious activity in HTTPS. The firewall actually creates two dedicated TLS connections: one with the Web Server (contoso.com) and another connection with the client. Using the customer provided CA certificate, it generates an on-the-fly certificate which replaces the Web Server certificate and shares it with the client to establish the TLS connection between the firewall and the client.
+
+Azure Firewall without TLS inspection:
+
+Azure Firewall with TLS inspection:
The following three use cases are supported:

- Outbound TLS Inspection: To protect against malicious traffic that is sent from an internal client hosted in Azure to the Internet.
-- East-West TLS Inspection: To protect your Azure workloads from potential malicious traffic sent from within Azure.
+- East-West TLS Inspection (includes traffic that goes from/to an on-premises network): To protect your Azure workloads from potential malicious traffic sent from within Azure.
- Inbound TLS Inspection: To protect internal servers or applications hosted in Azure from malicious requests that arrive from the Internet or an external network. Inbound TLS inspection is supported with [Azure Application Gateway](../web-application-firewall/ag/ag-overview.md), which provides end-to-end encryption.
To learn more about Azure Firewall Premium Intermediate CA certificate requireme
A network intrusion detection and prevention system (IDPS) allows you to monitor your network for malicious activity, log information about this activity, report it, and optionally attempt to block it.
-Azure Firewall Premium provides signature-based IDPS to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. The IDPS signatures are applicable for both application and network level traffic (Layers 4-7), they're fully managed, and continuously updated. IDPS can be applied to inbound, spoke-to-spoke (East-West), and outbound traffic.
+Azure Firewall Premium provides signature-based IDPS to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. The IDPS signatures are applicable for both application and network level traffic (Layers 4-7), they're fully managed, and continuously updated. IDPS can be applied to inbound, spoke-to-spoke (East-West), and outbound traffic. Spoke-to-spoke (East-West) includes traffic that goes from/to an on-premises network.
The Azure Firewall signatures/rulesets include:

- An emphasis on fingerprinting actual malware, Command and Control, exploit kits, and in the wild malicious activity missed by traditional prevention methods.
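To make the idea of signature-based detection concrete, here's a toy sketch of pattern matching over a payload. The signature names and byte patterns are invented for illustration and bear no relation to Azure Firewall's actual managed ruleset:

```python
# Toy signature-based IDPS matcher: scan a payload for known byte patterns.
# Real engines work across Layers 4-7 with far richer rules; these
# signatures are fabricated purely for demonstration.
SIGNATURES = {
    "demo-exploit-kit": b"\x90\x90\x90\x90\xcc",  # fake NOP-sled pattern
    "demo-c2-beacon": b"BEACON/1.0",              # fake C2 handshake marker
}

def match_signatures(payload: bytes) -> list[str]:
    """Return the names of all signatures found in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

alerts = match_signatures(b"GET / HTTP/1.1\r\nUser-Agent: BEACON/1.0\r\n")
print(alerts)  # ['demo-c2-beacon']
```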
firewall Premium Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-migrate.md
Previously updated : 03/28/2022 Last updated : 03/30/2022
The firewall throughput might be lower than 30 Gbps when youΓÇÖve one or more si
## Downtime
-Migrate your firewall during a planned maintenance time, as there will be some downtime during the migration.
+Migrate your firewall during a planned maintenance time, as there will be some downtime when you [Migrate Azure Firewall from Standard to Premium using stop/start](#migrate-azure-firewall-using-stopstart).
## Migrate Classic rules to Standard policy
frontdoor Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/private-link.md
Title: 'Secure your Origin with Private Link in Azure Front Door Standard/Premium (Preview)'
+ Title: 'Secure your Origin with Private Link in Azure Front Door Premium'
description: This page provides information about how to secure connectivity to your origin using Private Link. documentationcenter: '' Previously updated : 02/12/2022 Last updated : 03/30/2022
-# Secure your Origin with Private Link in Azure Front Door Standard/Premium (Preview)
+# Secure your Origin with Private Link in Azure Front Door Premium
-> [!Note]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [Azure Front Door Docs](front-door-overview.md).
+[Azure Private Link](../private-link/private-link-overview.md) enables you to access Azure PaaS services and services hosted in Azure over a private endpoint in your virtual network. Traffic between your virtual network and the service goes over the Microsoft backbone network, eliminating exposure to the public Internet.
-## Overview
+Azure Front Door Premium can connect to your origin using Private Link. Your origin can be hosted in a virtual network or hosted as a PaaS service such as Azure App Service or Azure Storage. Private Link removes the need for your origin to be accessed publicly.
-[Azure Private Link](../private-link/private-link-overview.md) enables you to access Azure PaaS Services and Azure hosted services over a Private Endpoint in your virtual network. Traffic between your virtual network and the service traverses over the Microsoft backbone network, eliminating exposure from the public Internet.
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-Azure Front Door Premium can connect to your origin via Private Link. Your origin can be hosted in your private VNet or by using a PaaS service such as Azure App Service or Azure Storage. Private Link removing the need for your origin to be publically accessible.
+## How Private Link works
+When you enable Private Link to your origin in Azure Front Door Premium, Front Door creates a private endpoint on your behalf from an Azure Front Door managed regional private network. You'll receive an Azure Front Door private endpoint request at the origin pending your approval.
-When you enable Private Link to your origin in Azure Front Door Premium, Front Door creates a private endpoint on your behalf from a regional network managed Front Door's regional private network. This endpoint is managed by Azure Front Door. You'll receive an Azure Front Door private endpoint request for approval message at your origin.
-
-You must approve the private endpoint connection before traffic will flow to the origin. You can approve private endpoint connections by using the Azure portal, the Azure CLI, or Azure PowerShell. For more information, see [Manage a Private Endpoint connection](../private-link/manage-private-endpoint.md).
+You must approve the private endpoint connection before traffic can pass to the origin privately. You can approve private endpoint connections by using the Azure portal, Azure CLI, or Azure PowerShell. For more information, see [Manage a Private Endpoint connection](../private-link/manage-private-endpoint.md).
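As a sketch of the CLI approach, you can list the private endpoint connections on the origin resource and approve the pending one. All resource IDs below are placeholders for your own origin (for example, a storage account):

```shell
# List private endpoint connections on the origin resource (placeholder resource ID).
az network private-endpoint-connection list \
    --id /subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>

# Approve the pending connection by its full connection resource ID (placeholder).
az network private-endpoint-connection approve \
    --id <private-endpoint-connection-resource-id> \
    --description "Approved for Azure Front Door Premium"
```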
> [!IMPORTANT]
> You must approve the private endpoint connection before traffic will flow to your origin.
-After you enable a Private Link origin and approve the private endpoint connection, it takes a few minutes for the connection to be established. During this time, requests to the origin will receive a Front Door error message. The error message will go away once the connection is established.
+After you enable an origin for Private Link and approve the private endpoint connection, it can take a few minutes for the connection to get established. During this time, requests to the origin will receive an Azure Front Door error message. The error message will go away once the connection is established.
-After you approve the request, a private IP address gets assigned from Front Door's virtual network. Traffic between Azure Front Door and your origin traverses the established private link by using Azure's network backbone. Incoming traffic to your origin is now secured when coming from your Azure Front Door.
+Once your request is approved, a private IP address gets assigned from the Azure Front Door managed virtual network. Traffic between your Azure Front Door and your origin will communicate using the established private link over the Microsoft backbone network. Incoming traffic to your origin is now secured when arriving at your Azure Front Door.
-## Limitations
+## Region availability
-Azure Front Door private endpoints are available in the following regions during public preview: East US, West US 2, South Central US, UK South, and Japan East.
+Azure Front Door private endpoints are available in the following regions:
+
+| Americas | Europe | Asia Pacific |
+|--|--|--|
+| Brazil South | France Central | Australia East |
+| Canada Central | Germany West Central | Central India |
+| Central US | Norway East | East Asia |
+| East US | Sweden Central | Japan East |
+| East US 2 | UK South | Korea Central |
+| South Central US | West Europe | |
+| West US 2 | | |
+| West US 3 | | |
+
+## Limitations
-The backends that support direct private end point connectivity are now limited to Storage (Azure Blobs) and App Services. All other backends will have to be put behind an Internal Load Balancer as explained in the Next Steps below.
+Origin support for direct private endpoint connectivity is limited to Storage (Azure Blobs), App Services, and internal load balancers.
-For the best latency, you should always pick an Azure region closest to your origin when choosing to enable Front Door private link endpoint.
+For the best latency, you should always pick an Azure region closest to your origin when choosing to enable Azure Front Door private link endpoint.
## Next steps
-* To connect Azure Front Door Premium to your Web App via Private Link service, see [Connect Azure Front Door Premium to a Web App origin with Private Link](standard-premium/how-to-enable-private-link-web-app.md).
-* To connect Azure Front Door Premium to your Storage Account via private link service, see [Connect Azure Front Door Premium to a storage account origin with Private Link](standard-premium/how-to-enable-private-link-storage-account.md).
-* To connect Azure Front Door Premium to an internal load balancer origin with Private Link service, see [Connect Azure Front Door Premium to an internal load balancer origin with Private Link](standard-premium/how-to-enable-private-link-internal-load-balancer.md).
+* Learn how to [connect Azure Front Door Premium to an App Service origin with Private Link](standard-premium/how-to-enable-private-link-web-app.md).
+* Learn how to [connect Azure Front Door Premium to a storage account origin with Private Link](standard-premium/how-to-enable-private-link-storage-account.md).
+* Learn how to [connect Azure Front Door Premium to an internal load balancer origin with Private Link](standard-premium/how-to-enable-private-link-internal-load-balancer.md).
frontdoor Troubleshoot Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/troubleshoot-issues.md
This article describes how to troubleshoot common routing problems that you might face for your Azure Front Door configuration.
-## Additional debugging HTTP headers
+## Other debugging HTTP headers
-You can request Front Door to return additional debugging HTTP response headers. For more details, refer to [optional response headers](front-door-http-headers-protocol.md#optional-debug-response-headers).
+You can request Azure Front Door to return more debugging HTTP response headers. For more information, see [optional response headers](front-door-http-headers-protocol.md#optional-debug-response-headers).
## 503 response from Azure Front Door after a few seconds

### Symptom

* Regular requests sent to your backend without going through Azure Front Door are succeeding. Going via Azure Front Door results in 503 error responses.
-* The failure from Azure Front Door typically shows after about 30 seconds.
-* Intermittent 503 errors with log `ErrorInfo: OriginInvalidResponse`.
+* The failure from Azure Front Door typically appears after about 30 seconds.
+* Intermittent 503 errors appear with "ErrorInfo: OriginInvalidResponse."
### Cause

The cause of this problem can be one of three things:
-
-* Your origin is taking longer than the timeout configured (default is 30 seconds) to receive the request from Azure Front Door.
+
+* Your origin is taking longer than the timeout configured to receive the request from Azure Front Door. The default is 30 seconds.
* The time it takes to send a response to the request from Azure Front Door is taking longer than the timeout value.
-* Client sent a byte range request with `Accept-Encoding header` (compression enabled).
+* The client sent a byte range request with an **Accept-Encoding** header, which means compression is enabled.
### Troubleshooting steps
-* Send the request to your backend directly (without going through Azure Front Door). See how long your backend usually takes to respond.
+* Send the request to your backend directly without going through Azure Front Door. See how long your backend usually takes to respond.
* Send the request via Azure Front Door and see if you're getting any 503 responses. If not, the problem might not be a timeout issue. Contact support.
-* If requests going through Azure Front Door results in a 503 error response code, configure the **Origin response timeout (in seconds)** for the endpoint. You can extend the default timeout to up to 4 minutes (240 seconds). The setting can be configured by going to the *Endpoint manager* and selecting **Edit endpoint**.
+* If requests going through Azure Front Door result in a 503 error response code, configure **Origin response timeout (in seconds)** for the endpoint. You can extend the default timeout to up to 4 minutes, which is 240 seconds. To configure the setting, go to **Endpoint manager** and select **Edit endpoint**.
+
+ :::image type="content" source="./media/troubleshoot-issues/origin-response-timeout-1.png" alt-text="Screenshot that shows selecting Edit endpoint from Endpoint manager.":::
- :::image type="content" source="./media/troubleshoot-issues/origin-response-timeout-1.png" alt-text="Screenshot of selecting edit endpoint from Endpoint manager.":::
+ Then select **Endpoint properties** to configure **Origin response timeout**.
- Then select **Endpoint properties** to configure the **Origin response timeout**:
+ :::image type="content" source="./media/troubleshoot-issues/origin-response-timeout-2.png" alt-text="Screenshot that shows selecting Endpoint properties and the Origin response timeout field." lightbox="./media/troubleshoot-issues/origin-response-timeout-2-expanded.png":::
- :::image type="content" source="./media/troubleshoot-issues/origin-response-timeout-2.png" alt-text="Screenshot of select endpoint properties and Origin response timeout field." lightbox="./media/troubleshoot-issues/origin-response-timeout-2-expanded.png":::
+* If the timeout doesn't resolve the issue, use a tool like Fiddler or your browser's developer tool to check if the client is sending byte range requests with **Accept-Encoding** headers. Using this option leads to the origin responding with different content lengths.
-* If the timeout doesn't resolve the issue, use a tool like Fiddler or your browser's developer tool to check if the client is sending byte range requests with Accept-Encoding headers, leading to the origin responding with different content lengths. If yes, then you can either disable compression on the Origin/Azure Front Door or create a Rules Set rule to remove `accept-encoding` from the request for byte range requests.
+ If the client is sending byte range requests with **Accept-Encoding** headers, you have two options. You can disable compression on the origin/Azure Front Door. Or you can create a rules set rule to remove **Accept-Encoding** from the request for byte range requests.
- :::image type="content" source="./media/troubleshoot-issues/remove-encoding-rule.png" alt-text="Screenshot of accept-encoding rule in a Rule Set.":::
+ :::image type="content" source="./media/troubleshoot-issues/remove-encoding-rule.png" alt-text="Screenshot that shows the Accept-Encoding rule in a rule set.":::
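To reproduce the byte range scenario from the command line instead of Fiddler, you can send a ranged request with compression enabled and inspect the response headers. The endpoint host name and path below are placeholders for your own Azure Front Door endpoint:

```shell
# Send a byte range request with compression enabled (placeholder host name and path).
# -v prints the response headers so you can compare content lengths.
curl -v \
    -H "Range: bytes=0-1023" \
    -H "Accept-Encoding: gzip" \
    https://<your-endpoint>.azurefd.net/<path>
```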
## 503 responses from Azure Front Door only for HTTPS

### Symptom
-* 503 responses are returned only for AFD HTTPS enabled endpoints
-* Regular requests sent to your backend without going through Azure Front Door are succeeding. Going via Azure Front Door results in 503 error responses.
-* Intermittent 503 errors with log `ErrorInfo: OriginInvalidResponse`
+* Any 503 responses are returned only for Azure Front Door HTTPS-enabled endpoints.
+* Regular requests sent to your backend without going through Azure Front Door are succeeding. Going via Azure Front Door results in 503 error responses.
+* Intermittent 503 errors appear with "ErrorInfo: OriginInvalidResponse."
### Cause
+
The cause of this problem can be one of three things:
-* Backend Pool is an IP address
-* Backend Server is returning a certificate that does not match the FQDN of the AFD backend Pool
-* Backend Pool is an Azure Web Apps server
+
+* The backend pool is an IP address.
+* The backend server returns a certificate that doesn't match the FQDN of the Azure Front Door backend pool.
+* The backend pool is an Azure Web Apps server.
### Troubleshooting steps
-* Backend Pool is an IP address
+* The backend pool is an IP address.
- `EnforceCertificateNameCheck` must be disabled.
+ `EnforceCertificateNameCheck` must be disabled.
- AFD has a switch called "enforceCertificateNameCheck". By default, this setting is enabled. When enabled, AFD checks that the backend pool host name FQDN matches the backend server certificate's Certificate Name (CN) or one of the entries in the Subject Alternative Names (SAN) extension.
+ Azure Front Door has a switch called `EnforceCertificateNameCheck`. By default, this setting is enabled. When enabled, Azure Front Door checks that the backend pool host name FQDN matches the backend server certificate's certificate name or one of the entries in the subject alternative names extension.
- How to disable EnforceCertifiateNameCheck from Portal:
+ - How to disable `EnforceCertificateNameCheck` from the Azure portal:
- In the portal there is a toggle button, that will allow you to turn this on/off in the Azure Front Door Design Blade.
+ In the portal, use a toggle button to turn this setting on or off in the Azure Front Door **Design** pane.
- ![image](https://user-images.githubusercontent.com/63200992/148067710-1b9b6053-efe3-45eb-859f-f747de300653.png)
+ ![Screenshot that shows the toggle button.](https://user-images.githubusercontent.com/63200992/148067710-1b9b6053-efe3-45eb-859f-f747de300653.png)
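For Azure Front Door (classic), the same setting can also be toggled from the Azure CLI. This is a sketch with placeholder profile and resource group names:

```shell
# Disable the certificate name check on a Front Door (classic) profile (placeholder names).
az network front-door update \
    --name <front-door-name> \
    --resource-group <resource-group> \
    --enforce-certificate-name-check Disabled
```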
-* Backend Server is returning a certificate that does not match the FQDN of the AFD backend Pool
+* The backend server returns a certificate that doesn't match the FQDN of the Azure Front Door backend pool. To resolve this issue, you have two options:
- - To resolve we will either need the certificate returned to match the FQDN (or)
-
- - The EnforceCertificateNameCheck must be disabled
+ - The returned certificate must match the FQDN.
+ - `EnforceCertificateNameCheck` must be disabled.
-* Backend Pool is an Azure Web Apps server
+* The backend pool is an Azure Web Apps server:
- Check if Azure web app is configured with Ip Based SSL instead of SNI based. If it's configured as IpBased then this should be changed to SNI.
-
- If the backend is unhealthy due to a certificate failure, we will return a 503. You can verify the health of the backends on port 80 and 443. If only 443 is unhealthy, this is likely an issue with SSL. Since the backend is configured to use the FQDN, we know it's sending SNI.
+ - Check if the Azure web app is configured with IP-based SSL instead of being SNI based. If the web app is configured as IP based, it should be changed to SNI.
+ - If the backend is unhealthy because of a certificate failure, a 503 error message is returned. You can verify the health of the backends on ports 80 and 443. If only 443 is unhealthy, it's likely an issue with SSL. Because the backend is configured to use the FQDN, we know it's sending SNI.
- Using OPENSSL, verify the certificate that is being returned. To do this, connect to the backend using "-servername" and it should return the SNI which needs to match with the FQDN of the backend pool.
+ Use OPENSSL to verify the certificate that's being returned. To do this check, connect to the backend by using `-servername`. It should return the SNI, which needs to match with the FQDN of the backend pool:
+
+ `openssl s_client -connect backendvm.contoso.com:443 -servername backendvm.contoso.com`
- _openssl s_client -connect backendvm.contoso.com:443 -servername backendvm.contoso.com_
-
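If the web app turns out to be using IP-based SSL, one way to switch the binding to SNI is with the Azure CLI. This is a sketch with placeholder names; the certificate with this thumbprint must already be uploaded to the app:

```shell
# Rebind an existing certificate using SNI instead of IP-based SSL (placeholder values).
az webapp config ssl bind \
    --name <app-name> \
    --resource-group <resource-group> \
    --certificate-thumbprint <thumbprint> \
    --ssl-type SNI
```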
## Requests sent to the custom domain return a 400 status code

### Symptom
-* You created an Azure Front Door instance, but a request to the domain or frontend host is returning an HTTP 400 status code.
-* You created a DNS mapping for a custom domain to the frontend host that you configured. However, sending a request to the custom domain host name returns an HTTP 400 status code. It doesn't appear to route to the backend that you configured.
+* You created an Azure Front Door instance. A request to the domain or frontend host returns an HTTP 400 status code.
+* You created a DNS mapping for a custom domain to the frontend host that you configured. Sending a request to the custom domain host name returns an HTTP 400 status code. It doesn't appear to route to the backend that you configured.
### Cause
-The problem occurs if you didn't configure a routing rule for the custom domain that was added as the frontend host. A routing rule needs to be explicitly added for that frontend host. That's true even if a routing rule has already been configured for the frontend host under the Azure Front Door subdomain (*.azurefd.net).
+The problem occurs if you didn't configure a routing rule for the custom domain that was added as the frontend host. A routing rule needs to be explicitly added for that frontend host. That's true even if a routing rule was already configured for the frontend host under the Azure Front Door subdomain, which is `*.azurefd.net`.
-### Troubleshooting steps
+### Troubleshooting step
Add a routing rule for the custom domain to direct traffic to the selected origin group.
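For Azure Front Door Standard/Premium, a minimal sketch of adding such a route with the Azure CLI might look like the following; all names are placeholders for your own profile, endpoint, origin group, and custom domain:

```shell
# Create a route that attaches the custom domain to the origin group (placeholder names).
az afd route create \
    --resource-group <resource-group> \
    --profile-name <profile-name> \
    --endpoint-name <endpoint-name> \
    --route-name <route-name> \
    --origin-group <origin-group-name> \
    --custom-domains <custom-domain-name> \
    --supported-protocols Http Https \
    --forwarding-protocol MatchRequest
```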
Azure Front Door has a routing rule that redirects HTTP to HTTPS, but accessing
### Cause
-This behavior can happen if you didn't configure the routing rules correctly for Azure Front Door. Basically, your current configuration isn't specific and might have conflicting rules.
+This behavior can happen if you didn't configure the routing rules correctly for Azure Front Door. Your current configuration isn't specific and might have conflicting rules.
### Troubleshooting steps
This behavior can happen if you didn't configure the routing rules correctly for
### Symptom
-You created an Azure Front Door Standard/Premium instance and configured a frontend host, an origin group with at least one origin in it, and a routing rule that connects the frontend host to the origin group. Your content doesn't seem to be available when a request goes to the configured frontend host because an HTTP 411 status code gets returned.
+You created an Azure Front Door Standard/Premium instance and configured:
-Responses to these requests might also contain an HTML error page in the response body that includes an explanatory statement. For example: `HTTP Error 411. The request must be chunked or have a content length`.
+- A frontend host.
+- An origin group with at least one origin in it.
+- A routing rule that connects the frontend host to the origin group.
+
+Your content doesn't seem to be available when a request goes to the configured frontend host because an HTTP 411 status code gets returned.
+
+Responses to these requests might also contain an HTML error page in the response body that includes an explanatory statement. An example is "HTTP Error 411. The request must be chunked or have a content length."
### Cause
-There are several possible causes for this symptom. The overall reason is that your HTTP request isn't fully RFC-compliant.
+There are several possible causes for this symptom. The overall reason is that your HTTP request isn't fully RFC-compliant.
-An example of noncompliance is a `POST` request sent without either a `Content-Length` or a `Transfer-Encoding` header (for example, using `curl -X POST https://example-front-door.domain.com`). This request doesn't meet the requirements set out in [RFC 7230](https://tools.ietf.org/html/rfc7230#section-3.3.2). Azure Front Door would block it with an HTTP 411 response.
+An example of noncompliance is a `POST` request sent without either a **Content-Length** or a **Transfer-Encoding** header. An example would be using `curl -X POST https://example-front-door.domain.com`. This request doesn't meet the requirements set out in [RFC 7230](https://tools.ietf.org/html/rfc7230#section-3.3.2). Azure Front Door would block it with an HTTP 411 response.
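As an illustration, sending any explicit body (even an empty one) makes curl add a `Content-Length` header, which satisfies the RFC requirement. The domain below is the same placeholder used above:

```shell
# Noncompliant: no Content-Length or Transfer-Encoding header; Front Door responds with 411.
curl -X POST https://example-front-door.domain.com

# Compliant: an explicit empty body makes curl send "Content-Length: 0".
curl -X POST --data '' https://example-front-door.domain.com
```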
-This behavior is separate from the Web Application Firewall (WAF) functionality of Azure Front Door. Currently, there's no way to disable this behavior. All HTTP requests must meet the requirements, even if the WAF functionality isn't in use.
+This behavior is separate from the web application firewall (WAF) functionality of Azure Front Door. Currently, there's no way to disable this behavior. All HTTP requests must meet the requirements, even if the WAF functionality isn't in use.
### Troubleshooting steps

- Verify that your requests are in compliance with the requirements set out in the necessary RFCs.
- Take note of any HTML message body that's returned in response to your request. A message body often explains exactly *how* your request is noncompliant.

## Next steps
governance Guest Configuration Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/guest-configuration-custom.md
previous implementation of
[DSC for Linux](https://github.com/Microsoft/PowerShell-DSC-for-Linux) or the "nx" providers included in that repository.
-Guest configuration operates in PowerShell 7.1.3 for Windows and PowerShell 7.2
+As of version 1.29.33, guest configuration operates in PowerShell 7.1.2 for Windows and PowerShell 7.2
preview 6 for Linux. Starting with version 7.2, the `PSDesiredStateConfiguration` module moved from being part of the PowerShell installation and is instead installed as a
governance Samples By Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-table.md
Title: List of sample Azure Resource Graph queries by table description: List sample queries for Azure Resource-Graph. Tables include Resources, ResourceContainers, PolicyResources, and more. Previously updated : 03/08/2022 Last updated : 03/23/2022
details, see [Resource Graph tables](../concepts/query-language.md#resource-grap
[!INCLUDE [azure-resource-graph-samples-table-servicehealthresources](../../../../includes/resource-graph/samples/bytable/servicehealthresources.md)]
+## SpotResources
+
## Next steps
- Learn more about the [query language](../concepts/query-language.md).
hdinsight Apache Hadoop Connect Hive Jdbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-connect-hive-jdbc-driver.md
JDBC connections to an HDInsight cluster on Azure are made over port 443. The tr
Replace `CLUSTERNAME` with the name of your HDInsight cluster.
-Or you can get the connection through **Ambari UI > Hive > Configs > Advanced**.
-
### Host name in connection string

Host name 'CLUSTERNAME.azurehdinsight.net' in the connection string is the same as your cluster URL. You can get it through the Azure portal.
hdinsight Hdinsight Go Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-go-sdk-overview.md
az account set -s <name or ID of subscription>
Next, choose a name for your service principal and create it with the following command:

```azurecli-interactive
-az ad sp create-for-rbac --name <Service Principal Name> --sdk-auth --role Contributor
+az ad sp create-for-rbac --name <Service Principal Name> --sdk-auth --role Contributor --scopes /subscriptions/<subscription id>
```

The service principal information is displayed as JSON.
hdinsight Hdinsight Hadoop Provision Linux Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-provision-linux-clusters.md
description: Set up Hadoop, Kafka, Spark, HBase, or Storm clusters for HDInsight
Previously updated : 08/06/2020 Last updated : 03/30/2022 # Set up clusters in HDInsight with Apache Hadoop, Apache Spark, Apache Kafka, and more
hdinsight Apache Kafka Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-get-started.md
description: In this quickstart, you learn how to create an Apache Kafka cluster
Previously updated : 04/29/2020 Last updated : 03/30/2022 #Customer intent: I need to create a Kafka cluster so that I can use it to process streaming data
hdinsight Apache Kafka Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-introduction.md
description: 'Learn about Apache Kafka on HDInsight: What it is, what it does, a
Previously updated : 02/25/2020 Last updated : 03/30/2022 #Customer intent: As a developer, I want to understand how Kafka on HDInsight is different from Kafka on other platforms.
hdinsight Set Up Pyspark Interactive Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/set-up-pyspark-interactive-environment.md
keywords: VScode,Azure HDInsight Tools,Hive,Python,PySpark,Spark,HDInsight,Hadoo
Previously updated : 04/23/2020 Last updated : 03/30/2022 # Set up the PySpark interactive environment for Visual Studio Code
hdinsight Apache Spark Create Standalone Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-create-standalone-application.md
description: Tutorial - Create a Spark application written in Scala with Apache
Previously updated : 08/21/2020 Last updated : 03/30/2022 # Customer intent: As a developer new to Apache Spark and to Apache Spark in Azure HDInsight, I want to learn how to create a Scala Maven application for Spark in HDInsight using IntelliJ.
hdinsight Apache Spark Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-overview.md
description: This article provides an introduction to Spark in HDInsight and the
Previously updated : 11/17/2020 Last updated : 03/30/2022 # Customer intent: As a developer new to Apache Spark and Apache Spark in Azure HDInsight, I want to have a basic understanding of Microsoft's implementation of Apache Spark in Azure HDInsight so I can decide if I want to use it rather than build my own cluster.
iot-central How To Connect Devices X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-devices-x509.md
Make a note of the location of these files. You need it later.
### Create a group enrollment
-1. Open your IoT Central application and navigate to **Administration** in the left pane and select **Device connection**.
+1. Open your IoT Central application and navigate to **Permissions** in the left pane and select **Device connection groups**.
1. Select **+ New**, and create a new enrollment group called _MyX509Group_ with an attestation type of **Certificates (X.509)**.
Obtain new X.509 certificates from your certificate provider. You can create you
To update a group enrollment in response to a security breach, you should use the following approach to update the current certificate immediately. Complete these steps for the primary and secondary certificates if both are compromised:
-1. Navigate to **Administration** in the left pane and select **Device connection**.
+1. Navigate to **Permissions** in the left pane and select **Device connection groups**.
-2. Select **Enrollment Groups**, and select the group name in the list.
+2. Select the group name in the list under **Enrollment groups**.
3. For certificate update, select **Manage primary** or **Manage Secondary**.
If you're rolling certificates in response to a security breach, use the followi
To handle certificate expirations, use the following approach to update the current certificate immediately:
-1. Navigate to **Administration** in the left pane and select **Device connection**.
+1. Navigate to **Permissions** in the left pane and select **Device connection groups**.
-2. Select **Enrollment Groups**, and select the group name in the list.
+2. Select the group name in the list under **Enrollment groups**.
3. For certificate update, select **Manage Primary**.
iot-central Howto Build Iotc Device Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-build-iotc-device-bridge.md
Last updated 12/21/2021
+custom: contperf-fy22q3
# Administrator

# Use the IoT Central device bridge to connect other IoT clouds to IoT Central
-The IoT Central device bridge is an open-source solution that connects other IoT clouds to your IoT Central application. Examples of other IoT clouds include [Sigfox](https://www.sigfox.com/), [Particle Device Cloud](https://www.particle.io/), and [The Things Network](https://www.thethingsnetwork.org/). The device bridge works by forwarding data from devices connected to other IoT clouds through to your IoT Central application. The device bridge only forwards data to IoT Central, it doesn't send commands or property updates from IoT Central back to the devices.
+The IoT Central device bridge is an open-source solution that connects other IoT clouds such as [Sigfox](https://www.sigfox.com/), [Particle Device Cloud](https://www.particle.io/), and [The Things Network](https://www.thethingsnetwork.org/) to your IoT Central application. The device bridge works by forwarding data from devices connected to other IoT clouds through to your IoT Central application. The device bridge only forwards data to IoT Central, it doesn't send commands or property updates from IoT Central back to the devices.
The device bridge lets you combine the power of IoT Central with devices such as asset tracking devices connected to Sigfox's low-power wide area network, air quality monitoring devices on the Particle Device Cloud, or soil moisture monitoring devices on The Things Network. You can use IoT Central application features such as rules and analytics on the data, create workflows in Power Automate and Azure Logic apps, or export the data.
iot-central Howto Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-data.md
You can export data to a publicly available HTTP webhook endpoint. You can creat
1. Open [RequestBin](https://requestbin.net/).
1. Create a new RequestBin and copy the **Bin URL**. You use this URL when you test your data export.
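To verify the RequestBin endpoint is reachable before wiring up the export, you can post a JSON body to it by hand. The URL and the payload below are placeholders for illustration, not the exact IoT Central export schema:

```shell
# Post a sample payload to the RequestBin URL (placeholder URL and body).
curl -X POST \
    -H "Content-Type: application/json" \
    -d '{"deviceId": "sample-device", "telemetry": {"temperature": 21.5}}' \
    https://requestbin.net/r/<your-bin-id>
```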
-To create the Azure Data Explorer destination in IoT Central on the **Create new destination** page:
+To create the Azure Data Explorer destination in IoT Central on the **Data export** page:
-1. Enter a **Destination name**.
+1. Select **+ New destination**.
1. Select **Webhook** as the destination type.
If you don't have an existing Azure Data Explorer database to export to, follow
1. Create a service principal that you can use to connect your IoT Central application to Azure Data Explorer. Use the Azure Cloud Shell to run the following command:

    ```azurecli
- az ad sp create-for-rbac --skip-assignment --name "My SP for IoT Central"
+ az ad sp create-for-rbac --skip-assignment --name "My SP for IoT Central" --scopes /subscriptions/<SubscriptionId>
    ```

    Make a note of the `appId`, `password`, and `tenant` values in the command output; you need them in the following steps.
If you don't have an existing Azure Data Explorer database to export to, follow
.alter table smartvitalspatch policy streamingingestion enable ```
-To create the Azure Data Explorer destination in IoT Central on the **Create new destination** page:
+To create the Azure Data Explorer destination in IoT Central on the **Data export** page:
-1. Enter a **Destination name**.
+1. Select **+ New destination**.
1. Select **Azure Data Explorer** as the destination type.
To configure the managed identity that enables your IoT Central application to s
.alter table smartvitalspatch policy streamingingestion enable ```
-To create the Azure Data Explorer destination in IoT Central on the **Create new destination** page:
+To create the Azure Data Explorer destination in IoT Central on the **Data export** page:
-1. Enter a **Destination name**.
+1. Select **+ New destination**.
1. Select **Azure Data Explorer** as the destination type.
If you don't have an existing Event Hubs namespace to export to, follow these st
3. Create a new key or choose an existing key that has **Send** permissions. 4. Copy either the primary or secondary connection string.
-To create the Event Hubs destination in IoT Central on the **Create new destination** page:
+To create the Event Hubs destination in IoT Central on the **Data export** page:
-1. Enter a **Destination name**.
+1. Select **+ New destination**.
1. Select **Azure Event Hubs** as the destination type.
To further secure your event hub and only allow access from trusted services wit
- [Trusted Microsoft services](../../event-hubs/private-link-service.md#trusted-microsoft-services) - [Allow access to Azure Event Hubs namespaces from specific virtual networks](../../event-hubs/event-hubs-service-endpoints.md)
-To create the Event Hubs destination in IoT Central on the **Create new destination** page:
+To create the Event Hubs destination in IoT Central on the **Data export** page:
-1. Enter a **Destination name**.
+1. Select **+ New destination**.
1. Select **Azure Event Hubs** as the destination type.
If you don't have an existing Service Bus namespace to export to, follow these s
3. Create a new key or choose an existing key that has **Send** permissions. 4. Copy either the primary or secondary connection string.
-To create the Service Bus destination in IoT Central on the **Create new destination** page:
+To create the Service Bus destination in IoT Central on the **Data export** page:
-1. Enter a **Destination name**.
+1. Select **+ New destination**.
1. Select **Azure Service Bus Queue** or **Azure Service Bus Topic** as the destination type.
To further secure your queue or topic and only allow access from trusted service
- [Trusted Microsoft services](../../service-bus-messaging/private-link-service.md#trusted-microsoft-services) - [Allow access to Azure Service Bus namespace from specific virtual networks](../../service-bus-messaging/service-bus-service-endpoints.md)
-To create the Service Bus destination in IoT Central on the **Create new destination** page:
+To create the Service Bus destination in IoT Central on the **Data export** page:
-1. Enter a **Destination name**.
+1. Select **+ New destination**.
1. Select **Azure Service Bus Queue** or **Azure Service Bus Topic** as the destination type.
If you don't have an existing Azure storage account to export to, follow these s
1. Generate a connection string for your storage account by going to **Settings > Access keys**. Copy one of the two connection strings.
-To create the Blob Storage destination in IoT Central on the **Create new destination** page:
+To create the Blob Storage destination in IoT Central on the **Data export** page:
-1. Enter a **Destination name**.
+1. Select **+ New destination**.
1. Select **Azure Blob Storage** as the destination type.
To further secure your blob container and only allow access from trusted service
- [Authorize access to blob data with managed identities for Azure resources](../../storage/blobs/authorize-managed-identity.md) - [Configure Azure Storage firewalls and virtual networks](../../storage/common/storage-network-security.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json)
-To create the Blob Storage destination in IoT Central on the **Create new destination** page:
+To create the Blob Storage destination in IoT Central on the **Data export** page:
-1. Enter a **Destination name**.
+1. Select **+ New destination**.
1. Select **Azure Blob Storage** as the destination type.
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/production-checklist.md
Once your IoT Edge device connects, be sure to continue configuring the Upstream
* Be consistent with upstream protocol * Set up host storage for system modules * Reduce memory space used by the IoT Edge hub
- * Do not use debug versions of module images
+ * Use correct module images in deployment manifests
* Be mindful of twin size limits when using custom modules ### Be consistent with upstream protocol
The IoT Edge hub module stores messages temporarily if they cannot be delivered
The default value of the `timeToLiveSecs` parameter is 7200 seconds, which is two hours.
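For reference, `timeToLiveSecs` lives in the edgeHub module twin's desired properties; a minimal fragment of the standard deployment manifest shape (values illustrative):

```json
{
  "$edgeHub": {
    "properties.desired": {
      "schemaVersion": "1.1",
      "storeAndForwardConfiguration": {
        "timeToLiveSecs": 7200
      }
    }
  }
}
```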
-### Do not use debug versions of module images
+### Use correct module images in deployment manifests
+If an empty or incorrect module image is used in a deployment manifest, the IoT Edge agent repeatedly retries loading the image, which generates unnecessary network traffic. Add the correct images to the deployment manifest to avoid this extra traffic.
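As a quick pre-deployment check, a shell sketch (hypothetical module names and image URIs) that flags modules with a missing image or an image tag still carrying the `.debug` suffix:

```shell
# Hypothetical "<module> <image>" pairs extracted from a deployment manifest;
# the second module has no image, the third still uses a debug image.
cat > /tmp/module-images.txt <<'EOF'
tempSensor mcr.microsoft.com/azureiotedge-simulated-temperature-sensor:1.4
brokenModule
debugModule myregistry.azurecr.io/mymodule:1.0.2-amd64.debug
EOF

# Flag modules with a missing image or an image tag ending in .debug.
awk 'NF < 2          { print $1 " -> missing image" }
     $2 ~ /\.debug$/ { print $1 " -> debug image" }' /tmp/module-images.txt
```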
+#### Don't use debug versions of module images
When moving from test scenarios to production scenarios, remember to remove debug configurations from deployment manifests. Check that none of the module images in the deployment manifests have the **\.debug** suffix. If you added create options to expose ports in the modules for debugging, remove those create options as well. ### Be mindful of twin size limits when using custom modules
iot-edge Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-common-errors.md
When migrating to the new IoT hub (assuming not using DPS), follow these steps i
:::moniker-end <!-- end 1.2 -->
+## Security daemon couldn't start successfully
+
+**Observed behavior:**
+
+The security daemon fails to start and module containers aren't created. The `edgeAgent`, `edgeHub`, and any custom modules aren't started by the IoT Edge service. In the `aziot-edged` logs, you see this error:
+
+> - The daemon could not start up successfully: Could not start management service
+> - caused by: An error occurred for path /var/run/iotedge/mgmt.sock
+> - caused by: Permission denied (os error 13)
++
+**Root cause:**
+
+For all Linux distros except CentOS 7, IoT Edge's default configuration is to use `systemd` socket activation. A permission error happens if you change the configuration file to not use socket activation but leave the URLs as `/var/run/iotedge/*.sock`, since the `iotedge` user can't write to `/var/run/iotedge`, which means it can't unlink and bind the sockets itself.
+
+**Resolution:**
+
+You do not need to disable socket activation on a distro where socket activation is supported. However, if you prefer not to use socket activation at all, put the sockets in `/var/lib/iotedge/`. To do this:
+1. Run `systemctl disable iotedge.socket iotedge.mgmt.socket` to disable the socket units so that systemd doesn't start them unnecessarily.
+1. Change the iotedge config to use `/var/lib/iotedge/*.sock` in both the `connect` and `listen` sections.
+1. If you already have modules, they still mount the old `/var/run/iotedge/*.sock` paths, so remove them with `docker rm -f`.
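Step 2 of the list above can be sketched with `sed` on a copy of the config; the file layout below is only illustrative of a config that mixes socket-activation (`fd://`) and path URLs, so adjust keys and paths to your install:

```shell
# Illustrative config fragment with connect/listen socket URLs.
cat > /tmp/config.yaml <<'EOF'
connect:
  management_uri: "unix:///var/run/iotedge/mgmt.sock"
  workload_uri: "unix:///var/run/iotedge/workload.sock"
listen:
  management_uri: "fd://iotedge.mgmt.socket"
  workload_uri: "fd://iotedge.socket"
EOF

# Point both the connect and listen sections at /var/lib/iotedge instead.
sed -i -e 's|/var/run/iotedge/|/var/lib/iotedge/|g' \
       -e 's|fd://iotedge.mgmt.socket|unix:///var/lib/iotedge/mgmt.sock|' \
       -e 's|fd://iotedge.socket|unix:///var/lib/iotedge/workload.sock|' /tmp/config.yaml

# All four URLs now point at /var/lib/iotedge.
grep -c '/var/lib/iotedge/' /tmp/config.yaml   # prints 4
```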
+ ## Next steps Do you think that you found a bug in the IoT Edge platform? [Submit an issue](https://github.com/Azure/iotedge/issues) so that we can continue to improve.
key-vault Manage With Cli2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/manage-with-cli2.md
For detailed steps on registering an application with Azure Active Directory you
To register an application in Azure Active Directory: ```azurecli
-az ad sp create-for-rbac -n "MyApp" --password "hVFkk965BuUv" --role Contributor
+az ad sp create-for-rbac -n "MyApp" --password "hVFkk965BuUv" --role Contributor --scopes /subscriptions/<subscription id>
# If you don't specify a password, one will be created for you. ```
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
The Azure RBAC model provides the ability to set permissions on different scope
For more information, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
-## Best Practices for individual keys, secrets, and certificates
+## Best Practices for individual keys, secrets, and certificates role assignments
Our recommendation is to use a vault per application per environment (Development, Pre-Production, and Production).
For more Information about how to create custom roles, see:
- Role assignment latency: at current expected performance, it takes up to 10 minutes (600 seconds) after a role assignment is changed for the role to be applied ## Learn more
-1. Assign the [ROLENAME] role to the [USER | GROUP | SERVICEPRINCIPAL | MANAGEDIDENTITY] at [MANAGEMENTGROUP | SUBSCRIPTION | RESOURCEGROUP | RESOURCE] scope.
- - [Azure RBAC Overview](../../role-based-access-control/overview.md) - [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md)
load-balancer Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-overview.md
Gateway Load Balancer has the following benefits:
* Chain applications across regions and subscriptions
-A Standard Public Load balancer or the IP configuration of a virtual machine can be chained to a Gateway Load Balancer. Once chained to a Standard Public Load Balancer frontend or IP configuration on a virtual machine, no additional configuration is needed to ensure traffic to and from the application endpoint is sent to the Gateway Load Balancer.
+A Standard Public Load balancer or a Standard IP configuration of a virtual machine can be chained to a Gateway Load Balancer. Once chained to a Standard Public Load Balancer frontend or Standard IP configuration on a virtual machine, no additional configuration is needed to ensure traffic to and from the application endpoint is sent to the Gateway Load Balancer.
Traffic moves from the consumer virtual network to the provider virtual network. The traffic then returns to the consumer virtual network. The consumer virtual network and provider virtual network can be in different subscriptions, tenants, or regions, removing management overhead.
Gateway Load Balancer consists of the following components:
* **Tunnel interfaces** - Gateway Load Balancer backend pools have another component called tunnel interfaces. The tunnel interface enables the appliances in the backend to ensure network flows are handled as expected. Each backend pool can have up to two tunnel interfaces. Tunnel interfaces can be either internal or external. For traffic coming to your backend pool, you should use the external type. For traffic going from your appliance to the application, you should use the internal type.
-* **Chain** - A Gateway Load Balancer can be referenced by a Standard Public Load Balancer frontend or a Public IP configuration on a virtual machine. The addition of advanced networking capabilities in a specific sequence is known as service chaining. As a result, this reference is called a chain.
+* **Chain** - A Gateway Load Balancer can be referenced by a Standard Public Load Balancer frontend or a Standard Public IP configuration on a virtual machine. The addition of advanced networking capabilities in a specific sequence is known as service chaining. As a result, this reference is called a chain.
## Pricing
machine-learning Azure Machine Learning Release Notes Cli V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes-cli-v2.md
Last updated 11/03/2021+ # Azure Machine Learning CLI (v2) release notes
machine-learning Concept Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-component.md
Last updated 10/21/2021 ++ # What is an Azure Machine Learning component (preview)?
machine-learning Concept Train Model Git Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-model-git-integration.md
Last updated 10/21/2021++ # Git integration for Azure Machine Learning
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
Last updated 01/11/2022 -++ # Customer intent: As a data scientist, I want to securely access Azure resources for my machine learning model deployment with an online endpoint and managed identity.
machine-learning How To Autoscale Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-autoscale-endpoints.md
-++ Last updated 11/03/2021
machine-learning How To Change Storage Access Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-change-storage-access-key.md
Last updated 10/21/2021+
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md
Previously updated : 02/28/2022 Last updated : 03/29/2022 -+ # Install and set up the CLI (v2)
The `ml` extension (preview) to the [Azure CLI](/cli/azure/) is the enhanced int
- To use the CLI, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today. - To use the CLI commands in this document from your **local environment**, you need the [Azure CLI](/cli/azure/install-azure-cli).
-## Limitations
-- ## Installation The new Machine Learning extension **requires Azure CLI version `>=2.15.0`**. Ensure this requirement is met:
You can show your current defaults using `--list-defaults/-l`:
The `ml` CLI extension (sometimes called 'CLI v2') for Azure Machine Learning sends operational data (YAML parameters and metadata) over the public internet. All the `ml` CLI extension commands communicate with the Azure Resource Manager. This communication is secured using HTTPS/TLS 1.2.
+Data in a data store that is secured in a virtual network is _not_ sent over the public internet. For example, if your training data is located in the workspace's default storage account and that storage account is in a virtual network, the data isn't sent over the public internet.
+ > [!NOTE] > With the previous extension (`azure-cli-ml`, sometimes called 'CLI v1'), only some of the commands communicate with the Azure Resource Manager. Specifically, commands that create, update, delete, list, or show Azure resources. Operations such as submitting a training job communicate directly with the Azure Machine Learning workspace. If your workspace is [secured with a private endpoint](how-to-configure-private-link.md), that is enough to secure commands provided by the `azure-cli-ml` extension.
-> [!TIP]
-> Data stored in a data store that is secured in a virtual network is _not_ sent over the public internet. For example, if your training data is secured on the default storage account for the workspace, and the storage account is in the virtual network.
+# [Public workspace](#tab/public)
+
+If your Azure Machine Learning workspace is public (that is, not behind a virtual network), then no additional configuration is required. Communications are secured using HTTPS/TLS 1.2.
+
+# [Private workspace](#tab/private)
-You can increase the security of CLI communications with Azure Resource Manager by using Azure Private Link. The following links provide information on using a Private Link for managing Azure resources:
+If your Azure Machine Learning workspace uses a private endpoint and virtual network, choose one of the following configurations to use:
-1. [Secure your Azure Machine Learning workspace inside a virtual network using a private endpoint](how-to-configure-private-link.md).
-2. [Create a Private Link for managing Azure resources](../azure-resource-manager/management/create-private-link-access-portal.md).
-3. [Create a private endpoint](../azure-resource-manager/management/create-private-link-access-portal.md#create-private-endpoint) for the Private Link created in the previous step.
+* If you are __OK__ with the CLI v2 communication over the public internet, use the `--public-network-access` parameter of the `az ml workspace update` command to enable public network access. For example, the following command enables public network access for a workspace:
-> [!IMPORTANT]
-> To configure the private link for Azure Resource Manager, you must be the _subscription owner_ for the Azure subscription, and an _owner_ or _contributor_ of the root management group. For more information, see [Create a private link for managing Azure resources](../azure-resource-manager/management/create-private-link-access-portal.md).
+ ```azurecli
+ az ml workspace update --name myworkspace --public-network-access enabled
+ ```
+
+* If you are __not OK__ with the CLI v2 communication over the public internet, you can use an Azure Private Link to increase security of the communication. Use the following links to secure communications with Azure Resource Manager by using Azure Private Link.
+
+ 1. [Secure your Azure Machine Learning workspace inside a virtual network using a private endpoint](how-to-configure-private-link.md).
+ 2. [Create a Private Link for managing Azure resources](../azure-resource-manager/management/create-private-link-access-portal.md).
+ 3. [Create a private endpoint](../azure-resource-manager/management/create-private-link-access-portal.md#create-private-endpoint) for the Private Link created in the previous step.
+
+ > [!IMPORTANT]
+ > To configure the private link for Azure Resource Manager, you must be the _subscription owner_ for the Azure subscription, and an _owner_ or _contributor_ of the root management group. For more information, see [Create a private link for managing Azure resources](../azure-resource-manager/management/create-private-link-access-portal.md).
++ ## Next steps
machine-learning How To Consume Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-consume-web-service.md
Last updated 10/21/2021 ms.devlang: csharp, golang, java, python---+ #Customer intent: As a developer, I need to understand how to create a client application that consumes the web service of a deployed ML model.
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
-+
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-kubernetes.md
-+
machine-learning How To Create Component Pipelines Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-cli.md
Last updated 01/07/2022
+ms.devlang: azurecli, cliv2
machine-learning How To Create Image Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-image-labeling-projects.md
For bounding boxes, important questions include:
* How is the bounding box defined for this task? Should it be entirely on the interior of the object, or should it be on the exterior? Should it be cropped as closely as possible, or is some clearance acceptable? * What level of care and consistency do you expect the labelers to apply in defining bounding boxes?
+* What is the visual definition of each label class? Is it possible to provide a list of normal cases, edge cases, and counterexamples for each class?
+* What should the labelers do if the object is tiny? Should it be labeled as an object or ignored as background?
* How to label an object that is partially shown in the image? * How to label an object that is partially covered by another object?
+* How to label an object that has no clear boundary?
+* How to label an object that isn't a class of interest but is visually similar to a class of interest?
>[!NOTE] > Be sure to note that the labelers will be able to select the first 9 labels by using number keys 1-9.
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
-+
machine-learning How To Create Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-workspace-template.md
See the [Azure portal](#use-the-azure-portal) section if you prefer using the gr
# [Azure CLI](#tab/azcli) ```azurecli az group create --name "examplegroup" --location "eastus"
Once your resource group is successfully created, deploy the template with the f
# [Azure CLI](#tab/azcli) - ```azurecli az deployment group create \ --name "exampledeployment" \
machine-learning How To Debug Managed Online Endpoints Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code.md
Last updated 11/03/2021 -+ ms.devlang: azurecli #Customer intent: As a machine learning engineer, I want to test and debug online endpoints locally using Visual Studio Code before deploying them Azure.
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-and-where.md
Last updated 11/12/2021 -+ adobe-target: true
machine-learning How To Deploy Automl Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-automl-endpoint.md
Last updated 12/22/2021 -+ ms.devlang: azurecli
machine-learning How To Deploy Azure Container Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-azure-container-instance.md
-+
machine-learning How To Deploy Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-azure-kubernetes-service.md
-+
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
Last updated 12/22/2021 -+ ms.devlang: azurecli
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
Last updated 12/22/2021 -+ # Deploy and score a machine learning model by using an online endpoint (preview)
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
Last updated 12/21/2021 -+ ms.devlang: azurecli
machine-learning How To Deploy Profile Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-profile-model.md
zone_pivot_groups: aml-control-methods -+ # Profile your model to determine resource utilization
machine-learning How To Deploy Update Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-update-web-service.md
Last updated 10/21/2021-+ # Update a deployed web service
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-triton.md
-+ ms.devlang: azurecli
machine-learning How To Manage Environments V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-v2.md
Last updated 10/21/2021 -+ # Manage Azure Machine Learning environments with the CLI (v2) (preview)
machine-learning How To Manage Optimize Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-optimize-cost.md
description: Learn tips to optimize your cost when building machine learning models in Azure Machine Learning -+
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-cli.md
Last updated 01/05/2022 -+ # Manage Azure Machine Learning workspaces using Azure CLI
In this article, you learn how to create and manage Azure Machine Learning workspaces using the Azure CLI. The Azure CLI provides commands for managing Azure resources and is designed to get you working quickly with Azure, with an emphasis on automation. The machine learning extension to the CLI provides commands for working with Azure Machine Learning resources. > [!NOTE]
-> Examples in this article refer to both 1.0 CLI and 2.0 CLI versions. If no version is specified for a command, it will work with either the 1.0 or 2.0 CLI. The machine learning 2.0 CLI is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads.
+> Examples in this article refer to both CLI v1 and CLI v2. If no version is specified for a command, it works with either CLI v1 or CLI v2. The machine learning CLI v2 is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads.
## Prerequisites
In this article, you learn how to create and manage Azure Machine Learning works
[!INCLUDE [register-namespace](../../includes/machine-learning-register-namespace.md)] - [!INCLUDE [application-insight](../../includes/machine-learning-application-insight.md)]
+### Secure CLI communications
+
+Some of the Azure CLI commands communicate with Azure Resource Manager over the internet. This communication is secured using HTTPS/TLS 1.2.
+
+# [CLI v1](#tab/vnetpleconfigurationsv1cli)
+
+With the Azure Machine Learning CLI extension v1 (`azure-cli-ml`), only some of the commands communicate with the Azure Resource Manager. Specifically, commands that create, update, delete, list, or show Azure resources. Operations such as submitting a training job communicate directly with the Azure Machine Learning workspace. **If your workspace is [secured with a private endpoint](how-to-configure-private-link.md), that is enough to secure commands provided by the `azure-cli-ml` extension**.
+
+# [CLI v2](#tab/vnetpleconfigurationsv2cli)
+
+With the Azure Machine Learning CLI extension v2 (`ml`), all of the commands communicate with the Azure Resource Manager. This includes operational data such as YAML parameters and metadata. If your Azure Machine Learning workspace is public (that is, not behind a virtual network), then no additional configuration is required. Communications are secured using HTTPS/TLS 1.2.
+
+If your Azure Machine Learning workspace uses a private endpoint and virtual network and you are using CLI v2, choose one of the following configurations to use:
+
+* If you are __OK__ with the CLI v2 communication over the public internet, use the `--public-network-access` parameter of the `az ml workspace update` command to enable public network access. For example, the following command enables public network access for a workspace:
+
+ ```azurecli
+ az ml workspace update --name myworkspace --public-network-access enabled
+ ```
+
+* If you are __not OK__ with the CLI v2 communication over the public internet, you can use an Azure Private Link to increase security of the communication. Use the following links to secure communications with Azure Resource Manager by using Azure Private Link.
+
+ 1. [Secure your Azure Machine Learning workspace inside a virtual network using a private endpoint](how-to-configure-private-link.md).
+ 2. [Create a Private Link for managing Azure resources](../azure-resource-manager/management/create-private-link-access-portal.md).
+ 3. [Create a private endpoint](../azure-resource-manager/management/create-private-link-access-portal.md#create-private-endpoint) for the Private Link created in the previous step.
+
+ > [!IMPORTANT]
+ > To configure the private link for Azure Resource Manager, you must be the _subscription owner_ for the Azure subscription, and an _owner_ or _contributor_ of the root management group. For more information, see [Create a private link for managing Azure resources](../azure-resource-manager/management/create-private-link-access-portal.md).
+
+For more information on CLI v2 communication, see [Install and set up the CLI](how-to-configure-cli.md#secure-communications).
+++ ## Connect the CLI to your Azure subscription > [!IMPORTANT]
To create a new workspace where the __services are automatically created__, use
az ml workspace create -w <workspace-name> -g <resource-group-name> ```
-# [Bring existing resources (1.0 CLI)](#tab/bringexistingresources1)
+# [Bring existing resources (CLI v1)](#tab/bringexistingresources1)
[!INCLUDE [cli v1](../../includes/machine-learning-cli-v1.md)]
az ml workspace create -w <workspace-name>
--container-registry "/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerRegistry/registries/<acr-name>" ```
-# [Bring existing resources (2.0 CLI - preview)](#tab/bringexistingresources2)
+# [Bring existing resources (CLI v2 - preview)](#tab/bringexistingresources2)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
The output of the workspace creation command is similar to the following JSON. Y
Dependent on your use case and organizational requirements, you can choose to configure Azure Machine Learning using private network connectivity. You can use the Azure CLI to deploy a workspace and a Private link endpoint for the workspace resource. For more information on using a private endpoint and virtual network (VNet) with your workspace, see [Virtual network isolation and privacy overview](how-to-network-security-overview.md). For complex resource configurations, also refer to template based deployment options including [Azure Resource Manager](how-to-create-workspace-template.md).
-# [1.0 CLI](#tab/vnetpleconfigurationsv1cli)
+# [CLI v1](#tab/vnetpleconfigurationsv1cli)
[!INCLUDE [cli v1](../../includes/machine-learning-cli-v1.md)]
az ml workspace create -w <workspace-name>
For more details on how to use these commands, see the [CLI reference pages](/cli/azure/ml(v1)/workspace).
-# [2.0 CLI - preview](#tab/vnetpleconfigurationsv2cli)
+# [CLI v2 - preview](#tab/vnetpleconfigurationsv2cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-When using private link, your workspace cannot use Azure Container Registry tasks compute for image building. Hence, you must set the image_build_compute property to a CPU compute cluster name to use for Docker image environment building. You can also specify whether the private link workspace should be accessible over the internet using the public_network_access property.
+When using private link, your workspace cannot use Azure Container Registry to build Docker images. Hence, you must set the `image_build_compute` property to the name of a CPU compute cluster to use for Docker image environment building. You can also specify whether the private link workspace should be accessible over the internet using the `public_network_access` property.
:::code language="YAML" source="~/azureml-examples-main/cli/resources/workspace/privatelink.yml":::
By default, metadata for the workspace is stored in an Azure Cosmos DB instance
To learn more about the resources that are created when you bring your own key for encryption, see [Data encryption with Azure Machine Learning](./concept-data-encryption.md#azure-cosmos-db).
-Below CLI commands provide examples for creating a workspace that uses customer-managed keys for encryption using the 1.0 CLI and 2.0 CLI versions.
+The following CLI commands provide examples for creating a workspace that uses customer-managed keys for encryption, using the CLI v1 and CLI v2 versions.
-# [1.0 CLI](#tab/vnetpleconfigurationsv1cli)
+# [CLI v1](#tab/vnetpleconfigurationsv1cli)
[!INCLUDE [cli v1](../../includes/machine-learning-cli-v1.md)]
az ml workspace create -w <workspace-name>
--hbi-workspace ```
-# [2.0 CLI - preview](#tab/vnetpleconfigurationsv2cli)
+# [CLI v2 - preview](#tab/vnetpleconfigurationsv2cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
For more information on customer-managed keys and high business impact workspace
To get information about a workspace, use the following command:
-# [1.0 CLI](#tab/workspaceupdatev1)
+# [CLI v1](#tab/workspaceupdatev1)
[!INCLUDE [cli v1](../../includes/machine-learning-cli-v1.md)]
To get information about a workspace, use the following command:
```azurecli
az ml workspace show -w <workspace-name> -g <resource-group-name>
```
-# [2.0 CLI - preview](#tab/workspaceupdatev2)
+# [CLI v2 - preview](#tab/workspaceupdatev2)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
For more information, see the [az ml workspace show](/cli/azure/ml/workspace#az-
To update a workspace, use the following command:
-# [1.0 CLI](#tab/workspaceupdatev1)
+# [CLI v1](#tab/workspaceupdatev1)
[!INCLUDE [cli v1](../../includes/machine-learning-cli-v1.md)]
To update a workspace, use the following command:
```azurecli
az ml workspace update -w <workspace-name> -g <resource-group-name>
```
-# [2.0 CLI - preview](#tab/workspaceupdatev2)
+# [CLI v2 - preview](#tab/workspaceupdatev2)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
For more information, see the [az ml workspace update](/cli/azure/ml/workspace#a
If you change access keys for one of the resources used by your workspace, it takes around an hour for the workspace to synchronize to the new key. To force the workspace to sync the new keys immediately, use the following command:
-# [1.0 CLI](#tab/workspacesynckeysv1)
+# [CLI v1](#tab/workspacesynckeysv1)
[!INCLUDE [cli v1](../../includes/machine-learning-cli-v1.md)]
If you change access keys for one of the resources used by your workspace, it ta
```azurecli
az ml workspace sync-keys -w <workspace-name> -g <resource-group-name>
```
-# [2.0 CLI - preview](#tab/workspacesynckeysv2)
+# [CLI v2 - preview](#tab/workspacesynckeysv2)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
For more information on the sync-keys command, see [az ml workspace sync-keys](/
To delete a workspace after it is no longer needed, use the following command:
-# [1.0 CLI](#tab/workspacedeletev1)
+# [CLI v1](#tab/workspacedeletev1)
[!INCLUDE [cli v1](../../includes/machine-learning-cli-v1.md)]
To delete a workspace after it is no longer needed, use the following command:
```azurecli
az ml workspace delete -w <workspace-name> -g <resource-group-name>
```
-# [2.0 CLI - preview](#tab/workspacedeletev2)
+# [CLI v2 - preview](#tab/workspacedeletev2)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
machine-learning How To Safely Rollout Managed Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-managed-endpoints.md
Last updated 10/21/2021 -+ # Safe rollout for online endpoints (preview)
machine-learning How To Secure Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-web-service.md
Last updated 10/21/2021 +
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
Last updated 03/09/2022 -+
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-authentication.md
Last updated 02/02/2022 -+ # Set up authentication for Azure Machine Learning resources and workflows
The easiest way to create an SP and grant access to your workspace is by using t
1. Create the service principal. In the following example, an SP named **ml-auth** is created:

   ```azurecli-interactive
- az ad sp create-for-rbac --sdk-auth --name ml-auth --role Contributor
+ az ad sp create-for-rbac --sdk-auth --name ml-auth --role Contributor --scopes /subscriptions/<subscription id>
   ```

   The output will be a JSON similar to the following. Take note of the `clientId`, `clientSecret`, and `tenantId` fields, as you will need them for other steps in this article.
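The returned credentials can be captured programmatically. The sketch below parses a hypothetical `--sdk-auth` payload (the GUIDs and secret are placeholders, and the real output contains additional fields) to pull out the three values the article uses:

```python
import json

# Hypothetical output shape from `az ad sp create-for-rbac --sdk-auth`;
# only the three fields referenced above are shown here.
sp_output = """{
  "clientId": "00000000-0000-0000-0000-000000000001",
  "clientSecret": "<redacted>",
  "tenantId": "00000000-0000-0000-0000-000000000002"
}"""

creds = json.loads(sp_output)
client_id = creds["clientId"]
client_secret = creds["clientSecret"]
tenant_id = creds["tenantId"]
print(client_id, tenant_id)
```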
machine-learning How To Track Monitor Analyze Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-monitor-analyze-runs.md
Last updated 10/21/2021 -+ # Start, monitor, and track run history
machine-learning How To Train Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-cli.md
Last updated 10/21/2021 -+ # Train models with the CLI (v2) (preview)
machine-learning How To Troubleshoot Batch Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-batch-endpoints.md
-+
machine-learning How To Troubleshoot Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-deployment.md
Last updated 10/21/2021
-+ #Customer intent: As a data scientist, I want to figure out why my model deployment fails so that I can fix it.
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
Last updated 11/03/2021 -+ #Customer intent: As a data scientist, I want to figure out why my online endpoint deployment failed so that I can fix it.
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-tune-hyperparameters.md
Specify the following configuration parameters:
* `delay_evaluation`: (optional) delays the first policy evaluation for a specified number of intervals

```Python
from azureml.train.hyperdrive import BanditPolicy
early_termination_policy = BanditPolicy(slack_factor=0.1, evaluation_interval=1, delay_evaluation=5)
```
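As an illustration of what `slack_factor` means, the following sketch applies the documented cutoff rule in plain Python (this is not the HyperDrive implementation, and it assumes the primary metric is being maximized):

```python
def should_terminate(run_metric: float, best_metric: float, slack_factor: float) -> bool:
    """Return True when run_metric trails best_metric by more than the slack.

    With slack_factor, the allowed floor is best_metric / (1 + slack_factor);
    runs below that floor are candidates for early termination.
    """
    return run_metric < best_metric / (1 + slack_factor)

# With slack_factor=0.1 and a best metric of 0.9, the floor is 0.9/1.1 ~= 0.818.
print(should_terminate(0.80, 0.9, 0.1))  # True: 0.80 is below the floor
print(should_terminate(0.85, 0.9, 0.1))  # False: 0.85 is within the slack
```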
This policy takes the following configuration parameters:
* `truncation_percentage`: the percentage of lowest performing runs to terminate at each evaluation interval. An integer value between 1 and 99.
* `evaluation_interval`: (optional) the frequency for applying the policy
* `delay_evaluation`: (optional) delays the first policy evaluation for a specified number of intervals
+* `exclude_finished_jobs`: specifies whether to exclude finished jobs when applying the policy
```Python
from azureml.train.hyperdrive import TruncationSelectionPolicy
-early_termination_policy = TruncationSelectionPolicy(evaluation_interval=1, truncation_percentage=20, delay_evaluation=5)
+early_termination_policy = TruncationSelectionPolicy(evaluation_interval=1, truncation_percentage=20, delay_evaluation=5, exclude_finished_jobs=True)
```
-In this example, the early termination policy is applied at every interval starting at evaluation interval 5. A run terminates at interval 5 if its performance at interval 5 is in the lowest 20% of performance of all runs at interval 5.
+In this example, the early termination policy is applied at every interval starting at evaluation interval 5. A run terminates at interval 5 if its performance at interval 5 is in the lowest 20% of all runs at that interval, and finished jobs are excluded when the policy is applied.
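The selection rule can be sketched in plain Python. This is illustrative only; HyperDrive applies the policy service-side, and the run names and metric values below are made up:

```python
def runs_to_terminate(metrics_by_run: dict, truncation_percentage: int) -> set:
    """Return the runs in the lowest truncation_percentage at this interval
    (maximization assumed): sort worst-first and cut the bottom slice."""
    ranked = sorted(metrics_by_run, key=metrics_by_run.get)  # worst first
    n_cut = int(len(ranked) * truncation_percentage / 100)
    return set(ranked[:n_cut])

metrics = {"run1": 0.91, "run2": 0.55, "run3": 0.72, "run4": 0.60, "run5": 0.88}
# 20% of 5 runs = 1 run, so only the worst performer is terminated.
print(runs_to_terminate(metrics, 20))  # {'run2'}
```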
### No termination policy (default)
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint.md
Last updated 10/21/2021-+ # Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-environments.md
Last updated 10/21/2021 -+ ms.devlang: azurecli ## As a developer, I need to configure my experiment context with the necessary software packages so my machine learning models can be trained and deployed on different compute targets.
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-managed-identities.md
Last updated 10/21/2021++ # Use Managed identities with Azure Machine Learning
machine-learning How To Use Synapsesparkstep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-synapsesparkstep.md
Once your Azure Machine Learning workspace and your Azure Synapse Analytics work
* Azure Resource Manager (ARM) template (see this [Example ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-linkedservice-create/azuredeploy.json)). * You can use the command line to follow the ARM template, add the linked service, and attach the Apache Spark pool with the following code:
- [!INCLUDE [cli v1](../../includes/machine-learning-cli-v1.md)]
- ```azurecli
- az deployment group create --name --resource-group <rg_name> --template-file "azuredeploy.json" --parameters @"azuredeploy.parameters.json"
- ```
machine-learning Reference Azure Machine Learning Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-azure-machine-learning-cli.md
Last updated 04/02/2021-+ # Install & use the CLI extension for Azure Machine Learning
The following commands demonstrate how to work with machine learning pipelines:
For more information, see [az ml pipeline create](/cli/azure/ml(v1)/pipeline#az-ml-pipeline-create).
- For more information on the pipeline YAML file, see [Define machine learning pipelines in YAML](reference-yaml-job-pipeline.md).
+ For more information on the pipeline YAML file, see [Define machine learning pipelines in YAML](reference-pipeline-yaml.md).
+ Run a pipeline:
The following commands demonstrate how to work with machine learning pipelines:
For more information, see [az ml run submit-pipeline](/cli/azure/ml(v1)/run#az-ml-run-submit-pipeline).
- For more information on the pipeline YAML file, see [Define machine learning pipelines in YAML](reference-yaml-job-pipeline.md).
+ For more information on the pipeline YAML file, see [Define machine learning pipelines in YAML](reference-pipeline-yaml.md).
+ Schedule a pipeline:
machine-learning Reference Pipeline Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-pipeline-yaml.md
+
+ Title: Machine Learning pipeline YAML
+
+description: Learn how to define a machine learning pipeline using a YAML file. YAML pipeline definitions are used with the machine learning extension for the Azure CLI.
++++++++ Last updated : 07/31/2020+++
+# Define machine learning pipelines in YAML
+++
+Learn how to define your machine learning pipelines in [YAML](https://yaml.org/). When using the machine learning extension for the [Azure CLI **v1**](reference-azure-machine-learning-cli.md), many of the pipeline-related commands expect a YAML file that defines the pipeline.
+
+The following table lists what is and is not currently supported when defining a pipeline in YAML for use with CLI v1:
+
+| Step type | Supported? |
+| -- | :--: |
+| PythonScriptStep | Yes |
+| ParallelRunStep | Yes |
+| AdlaStep | Yes |
+| AzureBatchStep | Yes |
+| DatabricksStep | Yes |
+| DataTransferStep | Yes |
+| AutoMLStep | No |
+| HyperDriveStep | No |
+| ModuleStep | Yes |
+| MPIStep | No |
+| EstimatorStep | No |
+
+## Pipeline definition
+
+A pipeline definition uses the following keys, which correspond to the [Pipelines](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline.pipeline) class:
+
+| YAML key | Description |
+| -- | -- |
+| `name` | The description of the pipeline. |
+| `parameters` | Parameter(s) to the pipeline. |
+| `data_reference` | Defines how and where data should be made available in a run. |
+| `default_compute` | Default compute target where all steps in the pipeline run. |
+| `steps` | The steps used in the pipeline. |
+
+## Parameters
+
+The `parameters` section uses the following keys, which correspond to the [PipelineParameter](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelineparameter) class:
+
+| YAML key | Description |
+| - | - |
+| `type` | The value type of the parameter. Valid types are `string`, `int`, `float`, `bool`, or `datapath`. |
+| `default` | The default value. |
+
+Each parameter is named. For example, the following YAML snippet defines three parameters named `NumIterationsParameter`, `DataPathParameter`, and `NodeCountParameter`:
+
+```yaml
+pipeline:
+ name: SamplePipelineFromYaml
+ parameters:
+ NumIterationsParameter:
+ type: int
+ default: 40
+ DataPathParameter:
+ type: datapath
+ default:
+ datastore: workspaceblobstore
+ path_on_datastore: sample2.txt
+ NodeCountParameter:
+ type: int
+ default: 4
+```
+
+## Data reference
+
+The `data_references` section uses the following keys, which correspond to the [DataReference](/python/api/azureml-core/azureml.data.data_reference.datareference) class:
+
+| YAML key | Description |
+| -- | -- |
+| `datastore` | The datastore to reference. |
+| `path_on_datastore` | The relative path in the backing storage for the data reference. |
+
+Each data reference is contained in a key. For example, the following YAML snippet defines a data reference stored in the key named `employee_data`:
+
+```yaml
+pipeline:
+ name: SamplePipelineFromYaml
+ parameters:
+ PipelineParam1:
+ type: int
+ default: 3
+ data_references:
+ employee_data:
+ datastore: adftestadla
+ path_on_datastore: "adla_sample/sample_input.csv"
+```
+
+## Steps
+
+Steps define a computational environment, along with the files to run on the environment. To define the type of a step, use the `type` key:
+
+| Step type | Description |
+| -- | -- |
+| `AdlaStep` | Runs a U-SQL script with Azure Data Lake Analytics. Corresponds to the [AdlaStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.adlastep) class. |
+| `AzureBatchStep` | Runs jobs using Azure Batch. Corresponds to the [AzureBatchStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.azurebatchstep) class. |
+| `DatabricksStep` | Adds a Databricks notebook, Python script, or JAR. Corresponds to the [DatabricksStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.databricksstep) class. |
+| `DataTransferStep` | Transfers data between storage options. Corresponds to the [DataTransferStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.datatransferstep) class. |
+| `PythonScriptStep` | Runs a Python script. Corresponds to the [PythonScriptStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.python_script_step.pythonscriptstep) class. |
+| `ParallelRunStep` | Runs a Python script to process large amounts of data asynchronously and in parallel. Corresponds to the [ParallelRunStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallel_run_step.parallelrunstep) class. |
+
+### ADLA step
+
+| YAML key | Description |
+| -- | -- |
+| `script_name` | The name of the U-SQL script (relative to the `source_directory`). |
+| `compute` | The Azure Data Lake compute target to use for this step. |
+| `parameters` | [Parameters](#parameters) to the pipeline. |
+| `inputs` | Inputs can be [InputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.inputportbinding), [DataReference](#data-reference), [PortDataReference](/python/api/azureml-pipeline-core/azureml.pipeline.core.portdatareference), [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata), [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29), [DatasetDefinition](/python/api/azureml-core/azureml.data.dataset_definition.datasetdefinition), or [PipelineDataset](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedataset). |
+| `outputs` | Outputs can be either [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata) or [OutputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.outputportbinding). |
+| `source_directory` | Directory that contains the script, assemblies, etc. |
+| `priority` | The priority value to use for the current job. |
+| `params` | Dictionary of name-value pairs. |
+| `degree_of_parallelism` | The degree of parallelism to use for this job. |
+| `runtime_version` | The runtime version of the Data Lake Analytics engine. |
+| `allow_reuse` | Determines whether the step should reuse previous results when run again with the same settings. |
+
+The following example contains an ADLA Step definition:
+
+```yaml
+pipeline:
+ name: SamplePipelineFromYaml
+ parameters:
+ PipelineParam1:
+ type: int
+ default: 3
+ data_references:
+ employee_data:
+ datastore: adftestadla
+ path_on_datastore: "adla_sample/sample_input.csv"
+ default_compute: adlacomp
+ steps:
+ Step1:
+ runconfig: "D:\\Yaml\\default_runconfig.yml"
+ parameters:
+ NUM_ITERATIONS_2:
+ source: PipelineParam1
+ NUM_ITERATIONS_1: 7
+ type: "AdlaStep"
+ name: "MyAdlaStep"
+ script_name: "sample_script.usql"
+ source_directory: "D:\\scripts\\Adla"
+ inputs:
+ employee_data:
+ source: employee_data
+ outputs:
+ OutputData:
+ destination: Output4
+ datastore: adftestadla
+ bind_mode: mount
+```
+
+### Azure Batch step
+
+| YAML key | Description |
+| -- | -- |
+| `compute` | The Azure Batch compute target to use for this step. |
+| `inputs` | Inputs can be [InputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.inputportbinding), [DataReference](#data-reference), [PortDataReference](/python/api/azureml-pipeline-core/azureml.pipeline.core.portdatareference), [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata), [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29), [DatasetDefinition](/python/api/azureml-core/azureml.data.dataset_definition.datasetdefinition), or [PipelineDataset](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedataset). |
+| `outputs` | Outputs can be either [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata) or [OutputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.outputportbinding). |
+| `source_directory` | Directory that contains the module binaries, executable, assemblies, etc. |
+| `executable` | Name of the command/executable that will be run as part of this job. |
+| `create_pool` | Boolean flag to indicate whether to create the pool before running the job. |
+| `delete_batch_job_after_finish` | Boolean flag to indicate whether to delete the job from the Batch account after it's finished. |
+| `delete_batch_pool_after_finish` | Boolean flag to indicate whether to delete the pool after the job finishes. |
+| `is_positive_exit_code_failure` | Boolean flag to indicate if the job fails if the task exits with a positive code. |
+| `vm_image_urn` | The URN of the VM image to use when `create_pool` is `True` and the VM uses `VirtualMachineConfiguration`. |
+| `pool_id` | The ID of the pool where the job will run. |
+| `allow_reuse` | Determines whether the step should reuse previous results when run again with the same settings. |
+
+The following example contains an Azure Batch step definition:
+
+```yaml
+pipeline:
+ name: SamplePipelineFromYaml
+ parameters:
+ PipelineParam1:
+ type: int
+ default: 3
+ data_references:
+ input:
+ datastore: workspaceblobstore
+ path_on_datastore: "input.txt"
+ default_compute: testbatch
+ steps:
+ Step1:
+ runconfig: "D:\\Yaml\\default_runconfig.yml"
+ parameters:
+ NUM_ITERATIONS_2:
+ source: PipelineParam1
+ NUM_ITERATIONS_1: 7
+ type: "AzureBatchStep"
+ name: "MyAzureBatchStep"
+ pool_id: "MyPoolName"
+ create_pool: true
+ executable: "azurebatch.cmd"
+ source_directory: "D:\\scripts\\AureBatch"
+ allow_reuse: false
+ inputs:
+ input:
+ source: input
+ outputs:
+ output:
+ destination: output
+ datastore: workspaceblobstore
+```
+
+### Databricks step
+
+| YAML key | Description |
+| -- | -- |
+| `compute` | The Azure Databricks compute target to use for this step. |
+| `inputs` | Inputs can be [InputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.inputportbinding), [DataReference](#data-reference), [PortDataReference](/python/api/azureml-pipeline-core/azureml.pipeline.core.portdatareference), [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata), [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29), [DatasetDefinition](/python/api/azureml-core/azureml.data.dataset_definition.datasetdefinition), or [PipelineDataset](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedataset). |
+| `outputs` | Outputs can be either [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata) or [OutputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.outputportbinding). |
+| `run_name` | The name in Databricks for this run. |
+| `source_directory` | Directory that contains the script and other files. |
+| `num_workers` | The static number of workers for the Databricks run cluster. |
+| `runconfig` | The path to a `.runconfig` file. This file is a YAML representation of the [RunConfiguration](/python/api/azureml-core/azureml.core.runconfiguration) class. For more information on the structure of this file, see [runconfigschema.json](https://github.com/microsoft/MLOps/blob/b4bdcf8c369d188e83f40be8b748b49821f71cf2/infra-as-code/runconfigschema.json). |
+| `allow_reuse` | Determines whether the step should reuse previous results when run again with the same settings. |
+
+The following example contains a Databricks step:
+
+```yaml
+pipeline:
+ name: SamplePipelineFromYaml
+ parameters:
+ PipelineParam1:
+ type: int
+ default: 3
+ data_references:
+ adls_test_data:
+ datastore: adftestadla
+ path_on_datastore: "testdata"
+ blob_test_data:
+ datastore: workspaceblobstore
+ path_on_datastore: "dbtest"
+ default_compute: mydatabricks
+ steps:
+ Step1:
+ runconfig: "D:\\Yaml\\default_runconfig.yml"
+ parameters:
+ NUM_ITERATIONS_2:
+ source: PipelineParam1
+ NUM_ITERATIONS_1: 7
+ type: "DatabricksStep"
+ name: "MyDatabrickStep"
+ run_name: "DatabricksRun"
+ python_script_name: "train-db-local.py"
+ source_directory: "D:\\scripts\\Databricks"
+ num_workers: 1
+ allow_reuse: true
+ inputs:
+ blob_test_data:
+ source: blob_test_data
+ outputs:
+ OutputData:
+ destination: Output4
+ datastore: workspaceblobstore
+ bind_mode: mount
+```
+
+### Data transfer step
+
+| YAML key | Description |
+| -- | -- |
+| `compute` | The Azure Data Factory compute target to use for this step. |
+| `source_data_reference` | Input connection that serves as the source of data transfer operations. Supported values are [InputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.inputportbinding), [DataReference](#data-reference), [PortDataReference](/python/api/azureml-pipeline-core/azureml.pipeline.core.portdatareference), [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata), [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29), [DatasetDefinition](/python/api/azureml-core/azureml.data.dataset_definition.datasetdefinition), or [PipelineDataset](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedataset). |
+| `destination_data_reference` | Input connection that serves as the destination of data transfer operations. Supported values are [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata) and [OutputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.outputportbinding). |
+| `allow_reuse` | Determines whether the step should reuse previous results when run again with the same settings. |
+
+The following example contains a data transfer step:
+
+```yaml
+pipeline:
+ name: SamplePipelineFromYaml
+ parameters:
+ PipelineParam1:
+ type: int
+ default: 3
+ data_references:
+ adls_test_data:
+ datastore: adftestadla
+ path_on_datastore: "testdata"
+ blob_test_data:
+ datastore: workspaceblobstore
+ path_on_datastore: "testdata"
+ default_compute: adftest
+ steps:
+ Step1:
+ runconfig: "D:\\Yaml\\default_runconfig.yml"
+ parameters:
+ NUM_ITERATIONS_2:
+ source: PipelineParam1
+ NUM_ITERATIONS_1: 7
+ type: "DataTransferStep"
+ name: "MyDataTransferStep"
+ adla_compute_name: adftest
+ source_data_reference:
+ adls_test_data:
+ source: adls_test_data
+ destination_data_reference:
+ blob_test_data:
+ source: blob_test_data
+```
+
+### Python script step
+
+| YAML key | Description |
+| -- | -- |
+| `inputs` | Inputs can be [InputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.inputportbinding), [DataReference](#data-reference), [PortDataReference](/python/api/azureml-pipeline-core/azureml.pipeline.core.portdatareference), [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata), [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29), [DatasetDefinition](/python/api/azureml-core/azureml.data.dataset_definition.datasetdefinition), or [PipelineDataset](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedataset). |
+| `outputs` | Outputs can be either [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata) or [OutputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.outputportbinding). |
+| `script_name` | The name of the Python script (relative to `source_directory`). |
+| `source_directory` | Directory that contains the script, Conda environment, etc. |
+| `runconfig` | The path to a `.runconfig` file. This file is a YAML representation of the [RunConfiguration](/python/api/azureml-core/azureml.core.runconfiguration) class. For more information on the structure of this file, see [runconfig.json](https://github.com/microsoft/MLOps/blob/b4bdcf8c369d188e83f40be8b748b49821f71cf2/infra-as-code/runconfigschema.json). |
+| `allow_reuse` | Determines whether the step should reuse previous results when run again with the same settings. |
+
+The following example contains a Python script step:
+
+```yaml
+pipeline:
+ name: SamplePipelineFromYaml
+ parameters:
+ PipelineParam1:
+ type: int
+ default: 3
+ data_references:
+ DataReference1:
+ datastore: workspaceblobstore
+ path_on_datastore: testfolder/sample.txt
+ default_compute: cpu-cluster
+ steps:
+ Step1:
+ runconfig: "D:\\Yaml\\default_runconfig.yml"
+ parameters:
+ NUM_ITERATIONS_2:
+ source: PipelineParam1
+ NUM_ITERATIONS_1: 7
+ type: "PythonScriptStep"
+ name: "MyPythonScriptStep"
+ script_name: "train.py"
+ allow_reuse: True
+ source_directory: "D:\\scripts\\PythonScript"
+ inputs:
+ InputData:
+ source: DataReference1
+ outputs:
+ OutputData:
+ destination: Output4
+ datastore: workspaceblobstore
+ bind_mode: mount
+```
+
+### Parallel run step
+
+| YAML key | Description |
+| -- | -- |
+| `inputs` | Inputs can be [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29), [DatasetDefinition](/python/api/azureml-core/azureml.data.dataset_definition.datasetdefinition), or [PipelineDataset](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedataset). |
+| `outputs` | Outputs can be either [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata) or [OutputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.outputportbinding). |
+| `script_name` | The name of the Python script (relative to `source_directory`). |
+| `source_directory` | Directory that contains the script, Conda environment, etc. |
+| `parallel_run_config` | The path to a `parallel_run_config.yml` file. This file is a YAML representation of the [ParallelRunConfig](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunconfig) class. |
+| `allow_reuse` | Determines whether the step should reuse previous results when run again with the same settings. |
+
+The following example contains a Parallel run step:
+
+```yaml
+pipeline:
+ description: SamplePipelineFromYaml
+ default_compute: cpu-cluster
+ data_references:
+ MyMinistInput:
+ dataset_name: mnist_sample_data
+ parameters:
+ PipelineParamTimeout:
+ type: int
+ default: 600
+ steps:
+ Step1:
+ parallel_run_config: "yaml/parallel_run_config.yml"
+ type: "ParallelRunStep"
+ name: "parallel-run-step-1"
+ allow_reuse: True
+ arguments:
+ - "--progress_update_timeout"
+ - parameter:timeout_parameter
+ - "--side_input"
+ - side_input:SideInputData
+ parameters:
+ timeout_parameter:
+ source: PipelineParamTimeout
+ inputs:
+ InputData:
+ source: MyMinistInput
+ side_inputs:
+ SideInputData:
+ source: Output4
+ bind_mode: mount
+ outputs:
+ OutputDataStep2:
+ destination: Output5
+ datastore: workspaceblobstore
+ bind_mode: mount
+```
+
+### Pipeline with multiple steps
+
+| YAML key | Description |
+| -- | -- |
+| `steps` | Sequence of one or more PipelineStep definitions. Note that the `destination` keys of one step's `outputs` become the `source` keys to the `inputs` of the next step.|
+
+```yaml
+pipeline:
+ name: SamplePipelineFromYAML
+ description: Sample multistep YAML pipeline
+ data_references:
+ TitanicDS:
+ dataset_name: 'titanic_ds'
+ bind_mode: download
+ default_compute: cpu-cluster
+ steps:
+ Dataprep:
+ type: "PythonScriptStep"
+ name: "DataPrep Step"
+ compute: cpu-cluster
+ runconfig: ".\\default_runconfig.yml"
+ script_name: "prep.py"
+ arguments:
+ - '--train_path'
+ - output:train_path
+ - '--test_path'
+ - output:test_path
+ allow_reuse: True
+ inputs:
+ titanic_ds:
+ source: TitanicDS
+ bind_mode: download
+ outputs:
+ train_path:
+ destination: train_csv
+ datastore: workspaceblobstore
+ test_path:
+ destination: test_csv
+ Training:
+ type: "PythonScriptStep"
+ name: "Training Step"
+ compute: cpu-cluster
+ runconfig: ".\\default_runconfig.yml"
+ script_name: "train.py"
+ arguments:
+ - "--train_path"
+ - input:train_path
+ - "--test_path"
+ - input:test_path
+ inputs:
+ train_path:
+ source: train_csv
+ bind_mode: download
+ test_path:
+ source: test_csv
+ bind_mode: download
+
+```
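The chaining rule above can be checked mechanically. The following sketch (illustrative only, using a stripped-down dict mirror of the multistep YAML) verifies that every step input's `source` resolves to either a data reference or an earlier step's output `destination`:

```python
# Dict mirror of the multistep pipeline above, reduced to the chaining keys.
pipeline = {
    "data_references": {"TitanicDS": {}},
    "steps": {
        "Dataprep": {
            "inputs": {"titanic_ds": {"source": "TitanicDS"}},
            "outputs": {"train_path": {"destination": "train_csv"},
                        "test_path": {"destination": "test_csv"}},
        },
        "Training": {
            "inputs": {"train_path": {"source": "train_csv"},
                       "test_path": {"source": "test_csv"}},
            "outputs": {},
        },
    },
}

def unresolved_sources(p: dict) -> list:
    """Return input sources that no data reference or earlier output provides."""
    known = set(p["data_references"])
    missing = []
    for step in p["steps"].values():  # dicts preserve insertion order (Py3.7+)
        for spec in step["inputs"].values():
            if spec["source"] not in known:
                missing.append(spec["source"])
        for spec in step["outputs"].values():
            known.add(spec["destination"])
    return missing

print(unresolved_sources(pipeline))  # [] -- every source resolves
```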
+
+## Schedules
+
+The schedule for a pipeline can be either datastore-triggered or recurring based on a time interval. The following keys are used to define a schedule:
+
+| YAML key | Description |
+| -- | -- |
+| `description` | A description of the schedule. |
+| `recurrence` | Contains recurrence settings, if the schedule is recurring. |
+| `pipeline_parameters` | Any parameters that are required by the pipeline. |
+| `wait_for_provisioning` | Whether to wait for provisioning of the schedule to complete. |
+| `wait_timeout` | The number of seconds to wait before timing out. |
+| `datastore_name` | The datastore to monitor for modified/added blobs. |
+| `polling_interval` | How long, in minutes, between polling for modified/added blobs. Default value: 5 minutes. Only supported for datastore schedules. |
+| `data_path_parameter_name` | The name of the data path pipeline parameter to set with the changed blob path. Only supported for datastore schedules. |
+| `continue_on_step_failure` | Whether to continue execution of other steps in the submitted PipelineRun if a step fails. If provided, will override the `continue_on_step_failure` setting of the pipeline. |
+| `path_on_datastore` | Optional. The path on the datastore to monitor for modified/added blobs. The path is under the container for the datastore, so the actual path the schedule monitors is container/`path_on_datastore`. If none, the datastore container is monitored. Additions/modifications made in a subfolder of the `path_on_datastore` are not monitored. Only supported for datastore schedules. |
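The `path_on_datastore` behavior can be sketched as a simple predicate (illustrative only; the actual blob monitoring happens service-side):

```python
def is_monitored(blob_path: str, path_on_datastore: str) -> bool:
    """Sketch of the monitoring rule above: only blobs directly under
    path_on_datastore trigger the schedule; subfolders are not monitored."""
    prefix = path_on_datastore.rstrip("/") + "/"
    if not blob_path.startswith(prefix):
        return False
    remainder = blob_path[len(prefix):]
    return "/" not in remainder  # a '/' means the blob sits in a subfolder

print(is_monitored("file/path/new.csv", "file/path"))      # True
print(is_monitored("file/path/sub/new.csv", "file/path"))  # False: subfolder
```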
+
+The following example contains the definition for a datastore-triggered schedule:
+
+```yaml
+Schedule:
+ description: "Test create with datastore"
+ recurrence: ~
+ pipeline_parameters: {}
+ wait_for_provisioning: True
+ wait_timeout: 3600
+ datastore_name: "workspaceblobstore"
+ polling_interval: 5
+ data_path_parameter_name: "input_data"
+ continue_on_step_failure: None
+ path_on_datastore: "file/path"
+```
+
+When defining a **recurring schedule**, use the following keys under `recurrence`:
+
+| YAML key | Description |
+| -- | -- |
+| `frequency` | How often the schedule recurs. Valid values are `"Minute"`, `"Hour"`, `"Day"`, `"Week"`, or `"Month"`. |
+| `interval` | How often the schedule fires. The integer value is the number of time units to wait until the schedule fires again. |
+| `start_time` | The start time for the schedule. The string format of the value is `YYYY-MM-DDThh:mm:ss`. If no start time is provided, the first workload is run instantly and future workloads are run based on the schedule. If the start time is in the past, the first workload is run at the next calculated run time. |
+| `time_zone` | The time zone for the start time. If no time zone is provided, UTC is used. |
+| `hours` | If `frequency` is `"Day"` or `"Week"`, you can specify one or more integers from 0 to 23, separated by commas, as the hours of the day when the pipeline should run. Use either `time_of_day` or `hours` and `minutes`, not both. |
+| `minutes` | If `frequency` is `"Day"` or `"Week"`, you can specify one or more integers from 0 to 59, separated by commas, as the minutes of the hour when the pipeline should run. Use either `time_of_day` or `hours` and `minutes`, not both. |
+| `time_of_day` | If `frequency` is `"Day"` or `"Week"`, you can specify a time of day for the schedule to run. The string format of the value is `hh:mm`. Use either `time_of_day` or `hours` and `minutes`, not both. |
+| `week_days` | If `frequency` is `"Week"`, you can specify one or more days, separated by commas, when the schedule should run. Valid values are `"Monday"`, `"Tuesday"`, `"Wednesday"`, `"Thursday"`, `"Friday"`, `"Saturday"`, and `"Sunday"`. |
+
+The following example contains the definition for a recurring schedule:
+
+```yaml
+Schedule:
+ description: "Test create with recurrence"
+ recurrence:
+ frequency: Week # Can be "Minute", "Hour", "Day", "Week", or "Month".
+ interval: 1 # Number of time units between runs.
+ start_time: 2019-06-07T10:50:00
+ time_zone: UTC
+ hours:
+ - 1
+ minutes:
+ - 0
+ time_of_day: null
+ week_days:
+ - Friday
+ pipeline_parameters:
+ 'a': 1
+ wait_for_provisioning: True
+ wait_timeout: 3600
+ datastore_name: ~
+ polling_interval: ~
+ data_path_parameter_name: ~
+ continue_on_step_failure: ~
+ path_on_datastore: ~
+```
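The recurrence rules in the table above can be checked programmatically. The following is an illustrative validation sketch (not part of the CLI; the function name is hypothetical) that enforces the documented constraints: valid frequency values, hour and minute ranges, and the mutual exclusivity of `time_of_day` with `hours`/`minutes`:

```python
# Illustrative validation sketch for a recurrence mapping, based on the
# rules in the table above. Not part of the Azure ML CLI.

VALID_FREQUENCIES = {"Minute", "Hour", "Day", "Week", "Month"}
VALID_WEEK_DAYS = {"Monday", "Tuesday", "Wednesday", "Thursday",
                   "Friday", "Saturday", "Sunday"}

def validate_recurrence(rec):
    """Return a list of problems found in a recurrence mapping (empty if OK)."""
    problems = []
    if rec.get("frequency") not in VALID_FREQUENCIES:
        problems.append("frequency must be Minute/Hour/Day/Week/Month")
    if not isinstance(rec.get("interval"), int) or rec["interval"] < 1:
        problems.append("interval must be a positive integer")
    # time_of_day is mutually exclusive with hours/minutes
    if rec.get("time_of_day") and (rec.get("hours") or rec.get("minutes")):
        problems.append("use either time_of_day or hours/minutes, not both")
    if any(h not in range(24) for h in rec.get("hours") or []):
        problems.append("hours must be integers from 0 to 23")
    if any(m not in range(60) for m in rec.get("minutes") or []):
        problems.append("minutes must be integers from 0 to 59")
    if rec.get("week_days") and rec.get("frequency") != "Week":
        problems.append("week_days is only valid when frequency is Week")
    if set(rec.get("week_days") or []) - VALID_WEEK_DAYS:
        problems.append("week_days contains an invalid day name")
    return problems

rec = {"frequency": "Week", "interval": 1, "hours": [1], "minutes": [0],
       "week_days": ["Friday"]}
print(validate_recurrence(rec))  # []
```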
+
+## Next steps
+
+Learn how to [use the CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md).
machine-learning Reference Yaml Component Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-component-command.md
-+ Last updated 10/21/2021
machine-learning Reference Yaml Compute Aml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-compute-aml.md
+
machine-learning Reference Yaml Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-compute-instance.md
+
machine-learning Reference Yaml Compute Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-compute-vm.md
+
machine-learning Reference Yaml Core Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-core-syntax.md
+
machine-learning Reference Yaml Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-dataset.md
+
machine-learning Reference Yaml Datastore Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-blob.md
+
machine-learning Reference Yaml Datastore Data Lake Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-data-lake-gen1.md
+
machine-learning Reference Yaml Datastore Data Lake Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-data-lake-gen2.md
+
machine-learning Reference Yaml Datastore Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-files.md
+
machine-learning Reference Yaml Deployment Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-batch.md
+
machine-learning Reference Yaml Deployment Managed Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-managed-online.md
+
machine-learning Reference Yaml Endpoint Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-endpoint-batch.md
+
machine-learning Reference Yaml Endpoint Managed Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-endpoint-managed-online.md
+
machine-learning Reference Yaml Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-environment.md
+
machine-learning Reference Yaml Job Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-command.md
+
machine-learning Reference Yaml Job Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-component.md
+
machine-learning Reference Yaml Job Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-pipeline.md
+
machine-learning Reference Yaml Job Sweep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-sweep.md
+
machine-learning Reference Yaml Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-model.md
+
machine-learning Reference Yaml Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-overview.md
+
machine-learning Reference Yaml Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-workspace.md
+
machine-learning Tutorial Automated Ml Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-automated-ml-forecast.md
Before you configure your experiment, upload your data file to your workspace in
File format|Defines the layout and type of data stored in a file.| Delimited
 Delimiter|One or more characters for specifying the boundary between separate, independent regions in plain text or other data streams. |Comma
 Encoding|Identifies what bit-to-character schema table to use to read your dataset.| UTF-8
- Column headers| Indicates how the headers of the dataset, if any, will be treated.| Use headers from the first file
+ Column headers| Indicates how the headers of the dataset, if any, will be treated.| Only first file has headers
Skip rows | Indicates how many, if any, rows are skipped in the dataset.| None
1. The **Schema** form allows for further configuration of your data for this experiment.
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace.md
Last updated 02/23/2022 -+ # How to create a secure workspace
machine-learning Tutorial Train Deploy Image Classification Model Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-deploy-image-classification-model-vscode.md
Last updated 05/25/2021-+ #Customer intent: As a professional data scientist, I want to learn how to train an image classification model using TensorFlow and the Azure Machine Learning Visual Studio Code Extension.
marketplace Create New Saas Offer Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-new-saas-offer-plans.md
The actions that are available in the **Action** column of the **Plan overview**
## Before you publish your offer
-If you haven't already done so, create a development and test (DEV) offer to test your offer before publishing your production offer live. To learn more, see [Create a test SaaS offer](create-saas-dev-test-offer.md).
+If you haven't already done so, create a development and test (DEV) offer to test your offer before publishing your production offer live. To learn more, see [Plan a test and development SaaS offer](plan-saas-dev-test-offer.md).
## Next steps
marketplace Create New Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-new-saas-offer.md
As a commercial marketplace publisher, you can create a software as a service (S
If you haven't already done so, read [Plan a SaaS offer](plan-saas-offer.md). It will explain the technical requirements for your SaaS app, and the information and assets you'll need when you create your offer. Unless you plan to publish a simple listing (**Contact me** listing option) in the commercial marketplace, your SaaS application must meet technical requirements around authentication. > [!IMPORTANT]
-> We recommend that you create a separate development/test (DEV) offer and a separate production (PROD) offer. This article describes how to create a PROD offer. For details about creating a DEV offer, see [Create a test SaaS offer](create-saas-dev-test-offer.md).
+> We recommend that you create a separate development/test (DEV) offer and a separate production (PROD) offer. This article describes how to create a PROD offer. For details about creating a DEV offer, see [Plan a test and development SaaS offer](plan-saas-dev-test-offer.md).
## Create a SaaS offer
marketplace Create Saas Dev Test Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-saas-dev-test-offer.md
- Title: Create a test SaaS offer
-description: Create a separate development offer for testing your production offer in Azure Marketplace.
------ Previously updated : 04/20/2021--
-# Create a test SaaS offer
-
-To develop in a separate environment from your production offer, youΓÇÖll create a separate test and development (DEV) offer and a separate production (PROD) offer. For information about the benefits of using a separate DEV offer, see [Plan a SaaS offer](plan-saas-offer.md#test-offer).
-
-YouΓÇÖll configure most settings the same in the DEV and PROD offers. For example, the official marketing language and assets, such as screenshots and logos should be the same. In the cases where the configuration is the same, you can copy-and-paste fields from the plans in the DEV offer to the plans in the PROD offer.
-
-The following sections describe the configuration differences between the DEV and PROD offers.
-
-## Offer setup page
-
-We recommend that you use the same alias in the **Alias** box of both offers but append ΓÇ£_testΓÇ¥ to the alias of the DEV offer. For example, if the alias of your PROD offer is ΓÇ£contososolutionΓÇ¥ then the alias of the DEV offer should be ΓÇ£contososolution_testΓÇ¥. This way, you can easily identify which your DEV offer from your PROD offer.
-
-In the **Customer leads** section, use an Azure table or a test CRM environment for the DEV offer. Use the intended lead management system for the PROD offer.
-
-## Properties page
-
-Configure this page the same in both the DEV and PROD offers.
-
-## Offer listing page
-
-Configure this page the same in both the DEV and PROD offers.
-
-## Preview audience
-
-In the DEV offer, include the Azure Active Directory (AAD) user principal names or Microsoft account (MSA) email addresses of developers and testers, including yourself. The user principal name of a user on AAD can be different than the email of that user. For example, jane.doe@contoso.com will not work but janedoe@contoso.com will. The users you designate will have access to the DEV offer when you share the **Preview** link during the development and testing phase.
-
-In the PROD offer, include the Azure AD user principal name or Microsoft Account email of the users who will validate the offer before selecting the **Go Live button** to publish the offer live.
-
-## Technical configuration page
-
-This table describes the differences between the settings for DEV offers and PROD offers.
-
-***Table 1: Technical configuration differences***
-
-| Setting | DEV offer | PROD offer |
-| | - | - |
-| Landing page URL | Enter your dev/test endpoint. | Enter your production endpoint. |
-| Connection webhook | Enter your dev/test endpoint. | Enter your production endpoint. |
-| Azure Active Directory tenant ID | Enter your test app registration tenant ID (AAD directory ID). | Enter your production app registration tenant ID. |
-| Azure Active Directory application ID | Enter your test app registration application ID (client ID). | Enter your production app registration application ID. |
-||||
-
-## Plan overview page
-
-When you create your plans, we recommend that you use the same _Plan ID_ and _Plan name_ in both the DEV and PROD offers except append the plan ID in the DEV offer with **_test**. For example, if the Plan ID in the PROD offer is ΓÇ£enterpriseΓÇ¥, then the plan ID in the DEV offer should be ΓÇ£enterprise_testΓÇ¥. This way, you can easily identify which your DEV offer from your PROD offer. YouΓÇÖll create plans in the PROD offer with the pricing models and prices that you decide are best for your offer.
-
-### Plan listing
-
-On the **Plan overview** > **Plan listing** tab, enter the same plan description in both the DEV and PROD plans.
-
-### Pricing and availability page
-
-This section provides guidance for completing the **Plan overview** > **Pricing and availability** page.
-
-#### Markets
-
-Select the same markets for the DEV and PROD offers.
-
-#### Pricing
-
-Use the DEV offer to experiment with pricing models. After you verify which pricing model or models work best, youΓÇÖll create the plans in the PROD offer with the pricing models and prices you want.
-
-The DEV offer should have plans with zero or low prices in the plans. The PROD offer will have the prices you want to charge to customers.
-
-> [!IMPORTANT]
-> Purchases made in Preview will be processed for both DEV and PROD offers. If an offer has a $100/mo price, your company will be charged $100. If this happens, you can open a [support ticket](support.md) and we will issue a payout for the full amount (and take no store service fee).
-
-#### Pricing model
-
-Use the same Pricing model in the plans of the DEV and PROD offers. For example, if the plan in the PROD offer is Flat rate, with a monthly billing term, then configure the plan in the DEV offer using the same model.
-
-To reduce your cost for testing the pricing models, including Marketplace custom meter dimensions, we recommend that you configure the **Pricing** section of the **Pricing and availability** tab, in the DEV offer with lower prices than the PROD offer. Here are some guidelines you can follow when setting prices for plans in the DEV offer.
-
-***Table 2: Pricing guidelines***
-
-| Price | Comment |
-| | - |
-| $0.00 | Set a total transaction cost of zero to have no financial impact. Use this price when making calls to the metering APIs, or to test purchasing plans in your offer while developing your solution. |
-| $0.01 - $49.99 | Use this price range to test analytics, reporting, and the purchase process. |
-| $50.00 - $100.00 | Use this price range to test payout. For information about our payment schedule, see [Payout schedules and processes](/partner-center/payout-policy-details). |
-|||
-
-> [!IMPORTANT]
-> To avoid being charged a store service fee on your test, open a [support ticket](support.md) within 7 days of the test purchase.
-
-#### Free trial
-
-DonΓÇÖt enable a free trial for the DEV offer.
-
-## Co-sell with Microsoft page
-
-DonΓÇÖt configure the **Co-sell with Microsoft** tab of the DEV offer.
-
-## Resell through CSPs
-
-DonΓÇÖt configure the **Resell through CSPs** tab of the DEV offer.
-
-## Next steps
--- [Test and publish a SaaS offer](test-publish-saas-offer.md)
marketplace Marketplace Metering Service Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-metering-service-apis.md
TLS version 1.2 version is enforced as the minimal version for HTTPS communicati
The usage event API should be called by the publisher to emit usage events against an active resource (subscribed) for the plan purchased by the specific customer. The usage event is emitted separately for each custom dimension of the plan defined by the publisher when publishing the offer.
-Only one usage event can be emitted for each hour of a calendar day. For example, at 8:15am today, you can emit one usage event. If this event is accepted, the next usage event will be accepted from 9:00 am today. If you send an additional event between 8:15 and 8:59:59 today, it will be rejected as a duplicate. You should accumulate all units consumed in an hour and then emit it in a single event.
 Only one usage event can be emitted for each hour of a calendar day per resource. If more than one unit is consumed in an hour, accumulate all the units consumed in that hour and emit them in a single event. Usage events can only be emitted for the past 24 hours. If you emit a usage event at any time between 8:00 and 8:59:59 (and it is accepted) and send an additional event for the same day between 8:00 and 8:59:59, it will be rejected as a duplicate.

**POST**: `https://marketplaceapi.microsoft.com/api/usageEvent?api-version=<ApiVersion>`
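The accumulate-then-emit pattern above can be sketched in Python. This is an illustrative client-side aggregator, not the marketplace service itself; the class name and the event field names (`resourceId`, `dimension`, `quantity`, `effectiveStartTime`) are shown for illustration:

```python
# Illustrative sketch: accumulate metered units and emit at most one usage
# event per resource, dimension, and clock hour, matching the "one event per
# hour, duplicates rejected" rule described above.
from collections import defaultdict
from datetime import datetime

class UsageAccumulator:
    def __init__(self):
        self.pending = defaultdict(float)  # (resource, dimension, hour) -> units
        self.emitted = set()               # hour buckets already sent

    def record(self, resource_id, dimension, units, at: datetime):
        # Bucket consumption by clock hour so it can be emitted as one event.
        hour = at.replace(minute=0, second=0, microsecond=0)
        self.pending[(resource_id, dimension, hour)] += units

    def flush(self):
        """Emit one event per (resource, dimension, hour); skip duplicates."""
        events = []
        for key, units in sorted(self.pending.items()):
            if key in self.emitted:
                continue  # the API would reject this as a duplicate
            self.emitted.add(key)
            resource_id, dimension, hour = key
            events.append({"resourceId": resource_id, "dimension": dimension,
                           "quantity": units,
                           "effectiveStartTime": hour.isoformat()})
        self.pending.clear()
        return events

acc = UsageAccumulator()
acc.record("res-1", "emails", 3, datetime(2020, 12, 3, 8, 15))
acc.record("res-1", "emails", 2, datetime(2020, 12, 3, 8, 45))  # same hour
events = acc.flush()
print(len(events), events[0]["quantity"])  # 1 5.0
```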
GET: https://marketplaceapi.microsoft.com/api/usageEvents
| Parameter | Recommendation |
| - | - |
-| ApiVersion | Use this format: 2018-08-31 |
+| ApiVersion | Use 2018-08-31. |
| usageStartDate | DateTime in ISO8601 format. For example, 2020-12-03T15:00 or 2020-12-03. |
| usageEndDate (optional) | DateTime in ISO8601 format. Default = current date. |
| offerId (optional) | Default = all available. |
marketplace Plan Saas Dev Test Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-saas-dev-test-offer.md
+
+ Title: Plan a test and development SaaS offer in the Microsoft commercial marketplace
+description: Plan a separate development offer for testing your offer in Azure Marketplace.
++++++ Last updated : 03/29/2022++
+# Plan a test and development SaaS offer
+
+To develop in a separate environment from your production offer, you'll create a separate test and development (DEV) offer and a separate production (PROD) offer. This article describes the benefits of doing your development and testing in a DEV offer and the configuration differences between DEV and production offers.
+
+## Benefits of a DEV offer
+
+Here are some reasons to create a separate DEV offer for the development team to use for development and testing of the PROD offer:
+
+- Avoid accidental customer charges
+- Evaluate pricing models
+- Avoid adding plans that do not target actual customers
+
+### Avoid accidental customer charges
+
+By using a DEV offer instead of the PROD offer and treating them as development and production environments, you can avoid accidental charges to customers.
+
+We recommend that you register two different Azure AD apps for calling the marketplace APIs. Developers will use one Azure AD app with the DEV offer's settings, and the operations team will use the PROD app registration. By doing this, you can isolate the development team from making inadvertent mistakes, such as calling the API to cancel the subscription of a customer who pays $100K per month. You can also avoid charging a customer for metered usage they didn't consume.
+
+### Evaluate pricing models
+
+Testing pricing models in the DEV offer reduces risk when developers experiment with different pricing models.
+
+Publishers can create the plans they need in the DEV offer to determine which pricing model works best for their offer. Developers might want to create multiple plans in the DEV offer to test different pricing combinations. For example, you might create plans with different sets of custom metered dimensions. You might create a different plan with a mix of flat rate and custom metered dimensions.
+
+To test multiple pricing options, you need to create a plan for each unique pricing model. To learn more, see [Plans](plan-saas-offer.md#plans).
+
+### Avoid adding plans that do not target actual customers
+
+By using a DEV offer for development and testing, you can reduce unnecessary clutter in the PROD offer. For example, you can't delete plans you create to test different pricing models or technical configurations (without filing a support ticket). So by creating plans for testing in the DEV offer, you reduce the clutter in the PROD offer.
+
+Clutter in the PROD offer frustrates product and marketing teams, who expect all plans to target actual customers. For large, distributed teams whose members want separate sandboxes to work with, creating two offers provides distinct DEV and PROD environments. In some cases, you might want to create multiple DEV offers to support a larger team with different people running different test scenarios. Letting team members work in the DEV offer, separate from the PROD offer, helps keep production plans as close to production-ready as possible.
+
+Testing in a DEV offer also helps you stay under the limit of 30 custom metered dimensions per offer. Developers can try different meter combinations in the DEV offer without affecting the custom metered dimension limit in the PROD offer.
+
+## Configuration differences between DEV and production offers
+
+You'll configure most settings the same in the test and development (DEV) and production (PROD) offers. For example, the official marketing language and assets, such as screenshots and logos, should be the same. Where the configuration is the same, you can copy and paste fields from the plans in the DEV offer to the plans in the PROD offer.
+
+The following sections describe the configuration differences between the DEV and PROD offers.
+
+### Offer setup page
+
+We recommend that you use the same alias in the **Alias** box of both offers but append "_test" to the alias of the DEV offer. For example, if the alias of your PROD offer is "contososolution", then the alias of the DEV offer should be "contososolution_test". This way, you can easily distinguish your DEV offer from your PROD offer.
+
+In the **Customer leads** section, use an Azure table or a test CRM environment for the DEV offer. Use the intended lead management system for the PROD offer.
+
+### Properties page
+
+Configure this page the same in both the DEV and PROD offers.
+
+### Offer listing page
+
+Configure this page the same in both the DEV and PROD offers.
+
+### Preview audience
+
+In the DEV offer, include the Azure Active Directory (Azure AD) user principal names or Microsoft account (MSA) email addresses of developers and testers, including yourself. The user principal name of a user in Azure AD can be different from that user's email address. For example, jane.doe@contoso.com will not work but janedoe@contoso.com will. The users you designate will have access to the DEV offer when you share the **Preview** link during the development and testing phase.
+
+In the PROD offer, include the Azure AD user principal names or Microsoft account email addresses of the users who will validate the offer before selecting the **Go live** button to publish the offer live.
+
+### Technical configuration page
+
+This table describes the differences between the settings for DEV offers and PROD offers.
+
+***Table 1: Technical configuration differences***
+
+| Setting | DEV offer | PROD offer |
+| | - | - |
+| Landing page URL | Enter your dev/test endpoint. | Enter your production endpoint. |
+| Connection webhook | Enter your dev/test endpoint. | Enter your production endpoint. |
+| Azure Active Directory tenant ID | Enter your test app registration tenant ID (Azure AD directory ID). | Enter your production app registration tenant ID. |
+| Azure Active Directory application ID | Enter your test app registration application ID (client ID). | Enter your production app registration application ID. |
+||||
+
+### Plan visibility
+
+We recommend that you configure your test plan as a private plan, so it's visible only to targeted developers and testers. This provides an extra level of protection against exposing your test plan to customers if you accidentally publish the offer live.
+
+This is especially important if you choose to test your plan in a production offer instead of a DEV offer, so that customers won't be able to purchase the test plan. We recommend that you create a separate private test plan and never publish it live. You'll use your private test plan to do your testing in preview. When you've completed your testing, you'll create a production plan for publishing live. Then, you can stop distribution of the test plan.
+
+### Plan overview page
+
+When you create your plans, we recommend that you use the same _Plan ID_ and _Plan name_ in both the DEV and PROD offers, except append "_test" to the plan ID in the DEV offer. For example, if the plan ID in the PROD offer is "enterprise", then the plan ID in the DEV offer should be "enterprise_test". This way, you can easily distinguish your DEV plans from your PROD plans. You'll create plans in the PROD offer with the pricing models and prices that you decide are best for your offer.
+
+#### Plan listing
+
+On the **Plan overview** > **Plan listing** tab, enter the same plan description in both the DEV and PROD plans.
+
+#### Pricing and availability page
+
+This section provides guidance for completing the **Plan overview** > **Pricing and availability** page.
+
+##### Markets
+
+Select the same markets for the DEV and PROD offers.
+
+##### Pricing
+
+Use the DEV offer to experiment with pricing models. After you verify which pricing model or models work best, you'll create the plans in the PROD offer with the pricing models and prices you want.
+
+When you purchase the plan, you're charged the prices defined in the plan. To minimize your testing costs, the DEV offer should have plans with zero or low prices, for example, $0.01 (one cent). This applies to flat rate, metered billing, and per-user prices. The PROD offer will have the prices you want to charge customers.
+
+> [!IMPORTANT]
+> Purchases made in Preview will be processed for both DEV and PROD offers. If an offer has a $100/mo price, your company will be charged $100. If this happens, you can open a [support ticket](support.md) and we will issue a payout for the full amount (and take no store service fee).
+
+You'll set the prices you want to charge the customer in the separate production plan that you'll publish live.
+
+##### Pricing model
+
+Use the same plan structure in the plans of the DEV and PROD offers. For example, if the plan in the PROD offer is Flat rate, with a monthly billing term, then configure the plan in the DEV offer using the same model.
+
+To reduce your cost for testing the pricing models, including marketplace custom meter dimensions, we recommend that you configure the **Pricing** section of the **Pricing and availability** tab in the DEV offer with lower prices than the PROD offer. Here are some guidelines you can follow when setting prices for plans in the DEV offer.
+
+***Table 2: Pricing guidelines***
+
+| Price | Comment |
+| | - |
+| $0.00 - $0.01 | Set a total transaction cost of zero to have no financial impact, or one cent to have a low cost. Use this price when making calls to the metering APIs, or to test purchasing plans in your offer while developing your solution. |
+| $0.01 | Use this price to test analytics, reporting, and the purchase process. |
+| $50.00 - $100.00 | Use this price range to test payout. For information about our payment schedule, see [Payout schedules and processes](/partner-center/payout-policy-details). |
+|||
+
+> [!IMPORTANT]
+> To avoid being charged a store service fee on your test, open a [support ticket](support.md) within 7 days of the test purchase.
+
+### Co-sell with Microsoft page
+
+Don't configure the **Co-sell with Microsoft** tab of the DEV offer.
+
+### Resell through CSPs
+
+On the **Resell through CSPs** tab of the DEV offer, select **No partners in the CSP program**.
+
+## Next steps
+
+- To learn more about plans, see [Plan a SaaS offer for the commercial marketplace](plan-saas-offer.md#plans).
+- For step-by-step instructions for creating an offer, see [Create a SaaS offer](create-new-saas-offer.md).
+- To test a SaaS plan, see [Test a SaaS plan overview](test-saas-overview.md).
marketplace Plan Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-saas-offer.md
Last updated 10/26/2021
# Plan a SaaS offer for the commercial marketplace
-This article explains the different options and requirements for publishing software as a service (SaaS) offers to the Microsoft commercial marketplace. SaaS offers to let you deliver and license software solutions to your customers via online subscriptions. As a SaaS publisher, you manage and pay for the infrastructure required to support your customers' use of your offer. This article will help you prepare your offer for publishing to the commercial marketplace with Partner Center.
+This article explains the different options and requirements for publishing software as a service (SaaS) offers to the Microsoft commercial marketplace. SaaS offers let you deliver and license software solutions to your customers via online subscriptions. As a SaaS publisher, you manage and pay for the infrastructure required to support your customers' use of your offer. This article will help you prepare your offer for publishing to the commercial marketplace with Partner Center.
> [!TIP] > To see the customer's view of purchasing a SaaS offer in the Azure portal, see [Purchase a SaaS offer in the Azure portal](/marketplace/purchase-saas-offer-in-azure-portal) and [Purchase SaaS apps on Microsoft AppSource](/marketplace/purchase-software-appsource).
Before you publish your offer live, you should use the preview functionality to
To develop and test your SaaS offer with the lowest amount of risk, we recommend that you create a test and development (DEV) offer for experimentation and testing. The DEV offer will be separate from your production (PROD) offer.
-To prevent accidental purchases of the DEV offer, youΓÇÖll never push the **Go live** button to publish the DEV offer live.
+> [!IMPORTANT]
+> To prevent accidental purchases of the DEV offer, you'll never push the **Go live** button to publish the DEV offer live.
![Illustrates the Offer overview page for an offer in Partner Center. The Go live button and preview links are shown. The View validation report link is also shown under Automated validation.](./media/review-publish-offer/publish-status-saas.png)
-Here are some reasons to create a separate DEV offer for the development team to use for development and testing of the PROD offer:
--- Avoid accidental customer charges-- Evaluate pricing models-- Not adding plans that do not target actual customers-
-### Avoid accidental customer charges
-
-By using a DEV offer instead of the PROD offer and treating them as development and production environments, you can avoid accidental charges to customers.
-
-We recommend that you register two different Azure AD apps for calling the marketplace APIs. Developers will use one Azure AD app with the DEV offerΓÇÖs settings, and the operations team will use the PROD app registration. By doing this, you can isolate the development team from making inadvertent mistakes, such as calling the API to cancel a customerΓÇÖs subscription who pays $100K per month. You can also avoid charging a customer for metered usage they didnΓÇÖt consume.
-
-### Evaluate pricing models
-
-Testing pricing models in the DEV offer reduces risk when developers experiment with different pricing models.
-
-Publishers can create the plans they need in the DEV offer to determine which pricing model works best for their offer. Developers might want to create multiple plans in the DEV offer to test different pricing combinations. For example, you might create plans with different sets of custom metered dimensions. You might create a different plan with a mix of flat rate and custom metered dimensions.
-
-To test multiple pricing options, you need to create a plan for each unique pricing model. To learn more, see [Plans](#plans).
-
-### Not adding plans that do not target actual customers
-
-By using a DEV offer for development and testing, you can reduce unnecessary clutter in the PROD offer. For example, you can't delete plans you create to test different pricing models or technical configurations (without filing a support ticket). So by creating plans for testing in the DEV offer, you reduce the clutter in the PROD offer.
-
-Clutter in the PROD offer frustrates product and marketing teams, as they expect all the plans to target actual customers. Especially with large, disjointed teams whose members all want different sandboxes to work with, creating two offers provides two separate environments for DEV and PROD. In some cases, you might want to create multiple DEV offers to support a larger team whose members run different test scenarios. Letting different team members work in the DEV offer, separate from the PROD offer, helps to keep production plans as close to production-ready as possible.
-
-Testing a DEV offer helps to avoid the 30 custom metered dimensions limit per offer. Developers can try different meter combinations in the DEV offer without affecting the custom metered dimension limit in the PROD offer.
+For information about the benefits of a DEV offer, and the configuration difference between a DEV and production offer, see [Plan a test and development (DEV) offer](plan-saas-dev-test-offer.md).
## Additional sales opportunities
You can choose to opt into Microsoft-supported marketing and sales channels. Whe
## Next steps

-- [How to create a SaaS offer in the commercial marketplace](create-new-saas-offer.md)
+- [Plan a test SaaS offer](plan-saas-dev-test-offer.md)
- [Offer listing best practices](gtm-offer-listing-best-practices.md)
+- [Create a SaaS offer](create-new-saas-offer.md)
marketplace Revenue Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/revenue-dashboard.md
Previously updated : 12/06/2021 Last updated : 04/06/2022 # Revenue dashboard in commercial marketplace analytics
In the lower left of most widgets, you'll see a thumbs up and thumbs down icon
| Asset Id | An identifier for the software assets. Same as the `order id` in the orders report in Partner Center. |
| Offer type | Type of offer, such as SaaS, VM, and so on. |
| Offer name | Display name of the offer |
+| Is Private Offer | Indicates whether a marketplace offer is a private or a public offer.<br><ul><li>0 value indicates false</li><li>1 value indicates true</li></ul> |
| Offer plan | Specific offer plan, also referred to as SKU |
| Trial deployment | Denotes whether the offer was in trial deployment at the time of billing |
| Service Start Date | The start date of the order subscription term |
marketplace Saas Go Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/saas-go-live.md
+
+ Title: How to publish a SaaS offer live - Azure Marketplace
+description: Learn how to publish a SaaS offer live.
++++++ Last updated : 03/25/2022++
+# Publish a SaaS offer
+
+After completing all tests on your preview, select **Go live** to publish your offer live to the commercial marketplace. If your offer is already live in the commercial marketplace, any updates you make won't go live until you select **Go live**.
+
+> [!IMPORTANT]
+> Don't ever select **Go live** for a [development/test offer](plan-saas-dev-test-offer.md) or an offer with a _test plan_.
+
+Now that you've chosen to make your offer available in the commercial marketplace, we perform a series of final validation checks to ensure the live offer is configured just like the preview version of the offer. For details about these validation checks, see [Publish phase](review-publish-offer.md#publish-phase).
+
+After these validation checks are complete, your offer will be live in the marketplace.
marketplace Test Publish Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/test-publish-saas-offer.md
Use the following steps to preview your offer.
After completing all tests on your preview, select **Go live** to publish your offer live to the commercial marketplace. If your offer is already live in the commercial marketplace, any updates you make won't go live until you select **Go live**.

> [!IMPORTANT]
-> Don't ever select **Go live** for a [development/test offer](create-saas-dev-test-offer.md).
+> Don't ever select **Go live** for a [development/test offer](plan-saas-dev-test-offer.md).
Now that you've chosen to make your offer available in the commercial marketplace, we perform a series of final validation checks to ensure the live offer is configured just like the preview version of the offer. For details about these validation checks, see [Publish phase](review-publish-offer.md#publish-phase).
marketplace Test Saas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/test-saas-overview.md
+
+ Title: How to test a SaaS plan overview in the Microsoft commercial marketplace - Azure Marketplace
+description: Use Microsoft Partner Center to test a plan for a SaaS offer in the Microsoft commercial marketplace.
++++++ Last updated : 03/29/2022++
+# Test a SaaS plan overview
+
+This article provides guidance about how to test a software as a service (SaaS) plan.
+
+We recommend that you create a test SaaS plan in a separate test and development (DEV) offer. To learn how, see [Plan a test and development (DEV) SaaS offer](plan-saas-dev-test-offer.md). If you choose to create a test plan in a production offer instead, make sure that you create the test plan as a private plan and specify a preview audience.
+
+> [!IMPORTANT]
+> Never publish a test plan live.
+
+## Overall flow
+
+Create your test plan, preferably in a test and DEV offer.
+- To learn how to plan a SaaS offer, see [Plan a SaaS offer for the commercial marketplace](plan-saas-offer.md).
+- To understand the benefits of a test and DEV offer and the configuration differences between a test and DEV offer and a production (PROD) offer, see [Plan a test and development SaaS offer](plan-saas-dev-test-offer.md).
+- For instructions on creating offers and plans, see [Create a SaaS offer](create-new-saas-offer.md).
+
+Once you have the plan you want to test, do the following:
+- [Preview and subscribe to the test plan](test-saas-preview-offer.md)
+- [Test the SaaS plan](test-saas-plan.md)
+- [Create a SaaS plan in your production offer](create-new-saas-offer-plans.md) with the prices you want to charge customers and publish your production offer.
+- [Unsubscribe from and deactivate your test plan](test-saas-unsubscribe.md)
+
+## Next steps
+
+- After you create your test plan, you'll then [Preview and subscribe to your offer](test-saas-preview-offer.md). From there, we'll lead you through the remainder of the process, including how to test the offer and how to clean up your test environment.
marketplace Test Saas Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/test-saas-plan.md
+
+ Title: How to test a SaaS plan - Azure Marketplace
+description: Learn how to test a SaaS plan before publishing your offer live.
++++++ Last updated : 03/30/2022++
+# Test a SaaS plan
+
+This article explains how to test a software as a service (SaaS) offer in preview.
+
+Here are some general guidelines to be aware of when you're testing your offer.
+
+- If your SaaS offer supports metered billing using the commercial marketplace metering service, review and follow the testing best practices detailed in [Marketplace metered billing APIs](./partner-center-portal/saas-metered-billing.md).
+- Review and follow the testing instructions in [Implementing a webhook on the SaaS service](./partner-center-portal/pc-saas-fulfillment-webhook.md#development-and-testing) to ensure your offer is successfully integrated with the APIs.
+- If the Offer validation step resulted in warnings, a **View validation report** link appears on the **Offer overview** page. Be sure to review the report and address the issues before you select the **Go live** button. Otherwise, certification will most likely fail and delay your offer from going live.
+- If you need to make changes after previewing and testing the offer, you can edit and resubmit to publish a new preview. For more information, see [Update an existing offer in the commercial marketplace](update-existing-offer.md).
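For the webhook guideline above, here's a minimal, hypothetical sketch of the parsing logic a SaaS service might run when a notification arrives. The field names (`action`, `subscriptionId`) and the action values are assumptions modeled on the SaaS fulfillment webhook payload; treat the linked webhook article as the authoritative contract.

```python
import json

# Action values the handler recognizes. These are assumptions for
# illustration; confirm the real set in the webhook documentation.
KNOWN_ACTIONS = {"Unsubscribe", "ChangePlan", "ChangeQuantity", "Suspend", "Reinstate"}

def handle_webhook(raw_body: str) -> int:
    """Parse one webhook notification and return the HTTP status to answer with."""
    try:
        payload = json.loads(raw_body)
    except json.JSONDecodeError:
        return 400  # malformed body
    action = payload.get("action")
    subscription_id = payload.get("subscriptionId")
    if not action or not subscription_id:
        return 400  # required fields missing
    if action not in KNOWN_ACTIONS:
        return 400  # unexpected operation type
    # Queue the operation for asynchronous processing here, then
    # acknowledge promptly so the marketplace doesn't keep retrying.
    return 200

status = handle_webhook('{"action": "Unsubscribe", "subscriptionId": "sub-123"}')
```

A real endpoint would wrap this in your web framework's request handler and authenticate the caller before trusting the payload.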
+
+> [!NOTE]
+> In this example we will use the [SaaS Accelerator](https://go.microsoft.com/fwlink/?linkid=2190938) for the landing page.
+
+## Verify the purchase of a private test plan
+
+Publishers can use Commercial Marketplace to verify the purchase of a test plan.
+
+1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
+
+1. On the Home page, select the **Insights** tile.
+
+ [ ![Illustrates the Insights tile on the Partner Center Home page.](./media/workspaces/partner-center-insights-tile.png) ](./media/workspaces/partner-center-insights-tile.png#lightbox)
+
+1. In the left menu, select **Orders**.
+
+1. In the upper-right, select the filter icon and filter by **Offer Name**. The **Orders by offers** widget shows the number of subscriptions that have been purchased. You should notice that the number of new orders has increased.
+
+ [ ![Screenshot shows the Orders by offers widget.](./media/review-publish-offer/subscriptions-purchased.png) ](./media/review-publish-offer/subscriptions-purchased.png#lightbox)
+
+ > [!NOTE]
+ > It can take up to 48 hours before offer orders appear in the report.
+
+1. In the left-nav, select **Downloads**.
+1. From the **Report type** list, select **Orders**.
+1. From the **Select timeframe** list, select **3 months**.
+1. Under **File Extension**, select **CSV**.
+1. Select **Generate**.
+
+ [ ![Screenshot of the Download Reports.](./media/review-publish-offer/download-reports.png) ](./media/review-publish-offer/download-reports.png#lightbox)
+
+1. Download the .CSV file and verify that the new subscription for your test plan is shown in the report.
+
+## Send a metered usage event
+
+If your test plan is configured to send usage events, it's important to test the metered plan usage by sending metered events to the [Marketplace Metered API](marketplace-metering-service-apis.md). The following is an example of how to send metered usage events using Postman.
+
+When you test metered usage, we recommend that you send more than the number of usage events that are covered by the base fee and then verify that only the number of usage events that exceed the base fee are sent to Microsoft.
+
+### Send a request
+
+Using an API tool such as Postman, send a request body like the one shown in this example.
+
+_Request body example:_
+
+```json
+{
+ "resourceId": <guid>, // unique identifier of the resource against which usage is emitted
+ "quantity": 5.0, // how many units were consumed for the date and hour specified in effectiveStartTime; must be greater than 0; can be an integer or double value
+ "dimension": "dim1", // custom dimension identifier
+ "effectiveStartTime": "2018-12-01T08:30:14", // time in UTC when the usage event occurred, from now and until 24 hours back
+ "planId": "plan1" // id of the plan purchased for the offer
+}
+```
+
+_Query parameters:_
+
+| Parameter | Comment |
+| | - |
+| resourceId | This will be your SaaS subscriptionId |
+| quantity | You can enter 1 for quantity as the test value |
+| dimension | Enter the name of the dimension defined in the metered plan |
+| planId | Enter the metered plan ID |
+|||
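If you'd rather script the call than use Postman, here's a sketch that builds the same request body and could send it with Python's standard library. The endpoint URL and bearer-token header follow the [Marketplace metered API](marketplace-metering-service-apis.md) article; the subscription ID, dimension, plan ID, and token are placeholders to replace with your own test values.

```python
import json
import urllib.request

METERING_URL = "https://marketplaceapi.microsoft.com/api/usageEvent?api-version=2018-08-31"

def build_usage_event(resource_id, dimension, plan_id, quantity, effective_start_time):
    """Build the request body for a single metered usage event."""
    return {
        "resourceId": resource_id,                   # your SaaS subscription ID
        "quantity": quantity,                        # units consumed above the base fee
        "dimension": dimension,                      # dimension defined in the metered plan
        "effectiveStartTime": effective_start_time,  # UTC, within the past 24 hours
        "planId": plan_id,                           # the metered plan ID
    }

def post_usage_event(event, access_token):
    """Send one usage event to the metering service; returns the parsed response."""
    req = urllib.request.Request(
        METERING_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {access_token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Build (but don't send) an event using the test values from the table above.
event = build_usage_event(
    resource_id="<your-saas-subscription-id>",  # placeholder
    dimension="dim1",
    plan_id="plan1",
    quantity=1,
    effective_start_time="2022-03-30T08:30:14",
)
```

Calling `post_usage_event(event, token)` with a valid marketplace access token sends the event; without one, the service rejects the request.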
+
+### View the response
+
+You should receive a response like the one shown in this example.
+
+_Response payload example:_
+
+```json
+{
+ "usageEventId": <guid>, // unique identifier associated with the usage event in Microsoft records
+ "status": "Accepted", // this is the only value in case of a single usage event
+ "messageTime": "2020-01-12T13:19:35.3458658Z", // time in UTC this event was accepted
+ "resourceId": <guid>, // unique identifier of the resource against which usage is emitted. For SaaS, it's the subscriptionId.
+ "quantity": 5.0, // amount of emitted units as recorded by Microsoft
+ "dimension": "dim1", // custom dimension identifier
+ "effectiveStartTime": "2018-12-01T08:30:14", // time in UTC when the usage event occurred, as sent by the ISV
+ "planId": "plan1" // id of the plan purchased for the offer
+}
+```
+
+For more details about sending metered usage events, see [Marketplace metered billing APIs](marketplace-metering-service-apis.md).
+
+## Verify metered usage information
+
+1. On the Home page, select the **Insights** tile.
+
+ [ ![Illustrates the Insights tile on the Partner Center Home page.](./media/workspaces/partner-center-insights-tile.png) ](./media/workspaces/partner-center-insights-tile.png#lightbox)
+
+1. In the left menu, select **Usage**.
+
+1. From the list at the top of the page, select **Metered usage**.
+
+ [ ![Screenshot of the Usage dashboard.](./media/review-publish-offer/saas-metered-usage.png) ](./media/review-publish-offer/saas-metered-usage.png#lightbox)
+
+1. From the **Offer alias** list, select your offer name.
+1. From the **Meter ID** list, select a dimension (meter) ID you entered in the plan. The metered usage is shown in the **Metered usage** widget.
+
+ > [!NOTE]
+ > You should only see the metered usage above the base fee.
+
+### Download report
+
+> [!NOTE]
+> It can take up to 48 hours before offer orders appear in the report.
+
+1. In the left-nav, select **Downloads**.
+1. From the **Report type** list, select **Usage**.
+1. From the **Select timeframe** list, select **3 months**.
+1. Under **File Extension**, select **CSV**.
+1. Select **Generate**.
+
+ [ ![Screenshot of the Download Reports.](./media/review-publish-offer/download-reports.png) ](./media/review-publish-offer/download-reports.png#lightbox)
+
+1. Download the .CSV file and verify that the new subscription for your test plan is shown in the report.
+
+## Next steps
+
+When you complete your tests, you can do the following:
+- [Unsubscribe from and deactivate your test plan](test-saas-unsubscribe.md).
+- [Create a plan](create-new-saas-offer-plans.md) in your production offer with the prices you want to charge customers and [publish the production offer live](test-publish-saas-offer.md).
marketplace Test Saas Preview Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/test-saas-preview-offer.md
+
+ Title: Preview and subscribe to a SaaS offer in the Microsoft commercial marketplace
+description: Use Partner Center to preview and subscribe to your SaaS offer.
++++++ Last updated : 03/30/2022++
+# Preview and subscribe to a SaaS offer in the commercial marketplace
+
+This article explains how to use Partner Center to submit your SaaS offer for publishing, preview your offer, and subscribe to a plan. You must have already created an offer that you want to publish. For information about the overall process to test a SaaS, see [Test a SaaS plan overview](test-saas-overview.md).
+
+> [!NOTE]
+> We recommend that you create a separate test and development (DEV) offer as a low-risk way to test before you publish your production (PROD) offer. Follow these steps to create and test the DEV offer before you publish your (PROD) offer.
+
+## Submit your offer for publishing
+
+1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002).
+
+1. On the Home page, select the **Marketplace offers** tile.
+
+ [ ![Illustrates the Marketplace offers tile on the Partner Center Home page.](./media/workspaces/partner-center-home.png) ](./media/workspaces/partner-center-home.png#lightbox)
+
+1. In the **Offer alias** column, select the offer you want to publish.
+1. In the upper-right corner of the portal, select **Review and publish**.
+1. Make sure that the **Status** column for each page says **Complete**. The three possible statuses are as follows:
+
+ - **Not started** – The page is incomplete.
+ - **Incomplete** – The page is missing required information or has errors that need to be fixed. You'll need to go back to the page and update it.
+ - **Complete** – The page is complete. All required data has been provided and there are no errors.
+
+1. If any of the pages have a status other than **Complete**, select the page name, correct the issue, save the page, and then select **Review and publish** again to return to this page.
+1. After all the pages are complete, in the **Notes for certification** box, provide testing instructions to the certification team to ensure that your app is tested correctly. Provide any supplementary notes helpful for understanding your app.
+1. To start the publishing process for your offer, select **Publish**. The **Offer overview** page appears and shows the offer's **Publish status**.
+
+Your offer's publish status will change as it moves through the publication process. For detailed information on this process, see [Validation and publishing steps](review-publish-offer.md#validation-and-publishing-steps).
+
+## Preview and subscribe to your offer
+
+When the offer is ready for you to test in the preview environment, we'll send you an email to request that you review and approve your offer preview. You can also refresh the **Offer overview** page in your browser to see whether your offer has reached the Publisher sign-off phase. If it has, the **Go live** button and preview links will be available. There will be a link for Microsoft AppSource preview, Azure Marketplace preview, the Azure portal, or a combination of them, depending on the options you chose when creating your offer. If you chose to sell your offer through Microsoft, anyone who has been added to the preview audience can test the acquisition and deployment of your offer to ensure it meets your requirements during this stage.
+
+The following screenshot shows the **Offer overview** page for a SaaS offer, with two preview links under the **Go live** button. The validation steps you'll see on this page vary depending on the selections you made when you created the offer.
+
+[ ![Screenshot shows the Publish status for an offer in Partner Center. The Go live button is shown. The View validation report link is also shown under Automated validation.](./media/review-publish-offer/publish-status-saas.png) ](./media/review-publish-offer/publish-status-saas.png#lightbox)
+
+> [!IMPORTANT]
+> To validate the end-to-end purchase and setup flow, purchase your offer while it is in Preview. First notify Microsoft with a support ticket to ensure we don't process a charge. Otherwise, the customer account used for the purchase will be billed and invoiced. Publisher Payout will occur when the [criteria](/partner-center/payment-thresholds-methods-timeframes) are met and will be paid out per the [payout schedule](/partner-center/payout-policy-details) with the agency fee deducted from the purchase price.
+
+If your offer is a _Contact Me_ listing, test that a lead is created as expected by providing the Contact Me details during preview.
+
+Use the following steps to preview your offer.
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/#home). Make sure you're signed in to the Azure tenant you submitted the offer to.
+1. In Partner Center, go to the **Offer overview** page, and then select the **Azure portal preview** link under the **Go live** button.
+
+## Subscribe to a test plan
+
+1. In Azure portal, on the offer subscribe page of your test offer, from the **Plan** list, select the test plan you created earlier.
+1. Select the **Subscribe** button.
+1. Under **Project details**, from the **Subscription** list, select the SaaS test environment subscription.
+1. Select an existing **Resource Group** or select **Create new** to create a new one.
+1. Under **SaaS details**, enter a **Name** for the SaaS instance.
+1. Leave **Recurring billing** set to **On**.
+
+ [ ![Screenshot of the SaaS subscription details in the Azure portal.](./media/review-publish-offer/saas-deployment-2.png) ](./media/review-publish-offer/saas-deployment-2.png#lightbox)
+
+1. Select the **Review + subscribe** button.
+1. The **Review + subscribe** page appears to finalize the subscription process.
+1. Near the bottom of the page, select **Subscribe**. The Deployment progress screen appears.
+1. Once the subscription is deployed, you'll see the **Your SaaS subscription is Complete** message.
+
+## Activate a subscription
+
+As a publisher, you'll be notified that you have a pending subscription. You can use the SaaS fulfillment APIs to activate the subscription.
+
+1. Once someone subscribes to your offer, you'll be notified of a new subscription.
+1. Using Postman, you can send a POST request to the Activate a subscription API to activate the pending subscription by providing the `planId`.
+
+ [ ![Screenshot shows the planId in Postman.](./media/review-publish-offer/test-plan-1.png) ](./media/review-publish-offer/test-plan-1.png#lightbox)
+
+Once activation is successful, you'll receive status code **202 Accepted**.
+
+[ ![Screenshot illustrates the 202 message.](./media/review-publish-offer/test-plan-2.png) ](./media/review-publish-offer/test-plan-2.png#lightbox)
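As a rough scripted alternative to the Postman step above, the sketch below builds the activation request. The URL shape and `planId` body follow the SaaS fulfillment Activate a subscription API; the subscription ID and access token are placeholders, and the commented `urlopen` call is where the request would actually be sent.

```python
import json
import urllib.request

FULFILLMENT_BASE = "https://marketplaceapi.microsoft.com/api/saas/subscriptions"
API_VERSION = "2018-08-31"

def build_activate_request(subscription_id, plan_id, access_token):
    """Build (but don't send) the POST request that activates a pending subscription."""
    url = f"{FULFILLMENT_BASE}/{subscription_id}/activate?api-version={API_VERSION}"
    return urllib.request.Request(
        url,
        data=json.dumps({"planId": plan_id}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {access_token}",
        },
        method="POST",
    )

# Placeholders -- replace with your pending subscription's ID, the plan ID
# from the purchase notification, and a marketplace access token.
req = build_activate_request("<subscription-id>", "plan1", "<access-token>")
# urllib.request.urlopen(req)  # expect status code 202 Accepted on success
```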
+
+## Next step
+
+- [Test a SaaS plan](test-saas-plan.md)
marketplace Test Saas Unsubscribe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/test-saas-unsubscribe.md
+
+ Title: How to unsubscribe from a SaaS plan - Azure Marketplace
+description: Learn how to unsubscribe from a SaaS plan in Azure Marketplace.
++++++ Last updated : 03/30/2022++
+# Unsubscribe from and deactivate a test plan
+
+When you're done testing a plan, you can unsubscribe from and deactivate the plan.
+
+## Unsubscribe from a plan
+
+1. Publishers can unsubscribe a customer from the system by using the [Cancel Subscription API](./partner-center-portal/pc-saas-fulfillment-subscription-api.md#cancel-a-subscription).
+1. Using Postman, send a **DELETE** API action against `subscriptionId`.
+1. Once the unsubscribe action is successful, you'll receive status code **202 Accepted**.
+
+ [ ![Screenshot of the DELETE API action with status 202 Accepted.](./media/review-publish-offer/test-plan-2.png) ](./media/review-publish-offer/test-plan-2.png#lightbox)
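If you want to script the same cancel call instead of using Postman, here's a minimal sketch. The URL shape follows the Cancel Subscription API linked above; the subscription ID and access token are placeholders, and the commented `urlopen` call is where the DELETE would actually be sent.

```python
import urllib.request

FULFILLMENT_BASE = "https://marketplaceapi.microsoft.com/api/saas/subscriptions"
API_VERSION = "2018-08-31"

def build_cancel_request(subscription_id, access_token):
    """Build (but don't send) the DELETE request that cancels a subscription."""
    url = f"{FULFILLMENT_BASE}/{subscription_id}?api-version={API_VERSION}"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {access_token}"},
        method="DELETE",
    )

# Placeholders -- replace with the test subscription's ID and a marketplace access token.
req = build_cancel_request("<subscription-id>", "<access-token>")
# urllib.request.urlopen(req)  # expect status code 202 Accepted on success
```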
+
+## Deactivate a test plan
+
+Use these steps to deactivate a test plan after you're done testing the plan.
+
+> [!NOTE]
+> If there is only one plan in your offer, you must create another plan before you can deactivate the test plan.
+
+1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002).
+
+1. On the Home page, select the **Marketplace offers** tile.
+
+ [ ![Illustrates the Marketplace offers tile on the Partner Center Home page.](./media/workspaces/partner-center-home.png) ](./media/workspaces/partner-center-home.png#lightbox)
+
+1. On the Marketplace Offers page, select the offer that contains the plan you want to deactivate.
+1. In the left-nav, select **Plan overview**.
+
+ [ ![Screenshot of the preview links under the Go live button.](./media/review-publish-offer/plan-deactivate-1.png) ](./media/review-publish-offer/plan-deactivate-1.png#lightbox)
+
+1. For the plan you want to deactivate, in the **Action** column, select **Stop distribution**.
+
+ [ ![Screenshot of the Stop distribution link in the Action column.](./media/review-publish-offer/plan-deactivate-2.png) ](./media/review-publish-offer/plan-deactivate-2.png#lightbox)
+
+1. To confirm that you want to deactivate the plan, in the dialog box that appears, enter the **Plan ID** and select the **Stop selling** button.
+
+ [ ![Screenshot of the dialog box to confirm you want to Stop selling a plan.](./media/review-publish-offer/plan-deactivate-3.png) ](./media/review-publish-offer/plan-deactivate-3.png#lightbox)
+
+Once you deactivate the plan, it will no longer be available in the marketplaces.
+
+## Next steps
+
+- [Create a SaaS plan in your production offer](create-new-saas-offer-plans.md)
+- [Access analytic reports for the commercial marketplace in Partner Center](analytics.md)
mysql Howto Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-query-performance.md
Previously updated : 3/10/2022 Last updated : 3/30/2022 # Profile query performance in Azure Database for MySQL using EXPLAIN [!INCLUDE[applies-to-mysql-single-flexible-server](includes/applies-to-mysql-single-flexible-server.md)]
-**EXPLAIN** is a handy tool to optimize queries. EXPLAIN statement can be used to get information about how SQL statements are executed. The following output shows an example of the execution of an EXPLAIN statement.
+**EXPLAIN** is a handy tool that can help you optimize queries. You can use an EXPLAIN statement to get information about how SQL statements are run. The following shows example output from running an EXPLAIN statement.
```sql
mysql> EXPLAIN SELECT * FROM tb1 WHERE id=100\G
possible_keys: NULL
Extra: Using where
```
-As can be seen from this example, the value of *key* is NULL. This output means MySQL can't find any indexes optimized for the query and it performs a full table scan. Let's optimize this query by adding an index on the **ID** column.
+In this example, the value of *key* is NULL, which means that MySQL can't locate any indexes optimized for the query. As a result, it performs a full table scan. Let's optimize this query by adding an index on the **ID** column, and then run the EXPLAIN statement again.
```sql
mysql> ALTER TABLE tb1 ADD KEY (id);
possible_keys: id
Extra: NULL
```
-The new EXPLAIN shows that MySQL now uses an index to limit the number of rows to 1, which in turn dramatically shortened the search time.
+Now, the output shows that MySQL uses an index to limit the number of rows to 1, which dramatically shortens the search time.
## Covering index
-A covering index consists of all columns of a query in the index to reduce value retrieval from data tables. Here's an illustration in the following **GROUP BY** statement.
+A covering index includes all columns of a query in the index, which reduces value retrieval from data tables. The following **GROUP BY** statement and related output illustrate this.
```sql
mysql> EXPLAIN SELECT MAX(c1), c2 FROM tb1 WHERE c2 LIKE '%100' GROUP BY c1\G
possible_keys: NULL
Extra: Using where; Using temporary; Using filesort
```
-As can be seen from the output, MySQL doesn't use any indexes because no proper indexes are available. It also shows *Using temporary; Using file sort*, which means MySQL creates a temporary table to satisfy the **GROUP BY** clause.
+The output shows that MySQL doesn't use any indexes, because proper indexes are unavailable. The output also shows *Using temporary; Using filesort*, which indicates that MySQL creates a temporary table to satisfy the **GROUP BY** clause.
-Creating an index on column **c2** alone makes no difference, and MySQL still needs to create a temporary table:
+Creating an index only on column **c2** makes no difference, and MySQL still needs to create a temporary table:
```sql
mysql> ALTER TABLE tb1 ADD KEY (c2);
possible_keys: NULL
Extra: Using where; Using temporary; Using filesort
```
-In this case, a **covered index** on both **c1** and **c2** can be created, whereby adding the value of **c2**" directly in the index to eliminate further data lookup.
+In this case, you can create a **covering index** on both **c1** and **c2** by adding the value of **c2** directly in the index, which eliminates further data lookup.
-```sql 
+```sql
mysql> ALTER TABLE tb1 ADD KEY covered(c1,c2);
mysql> EXPLAIN SELECT MAX(c1), c2 FROM tb1 WHERE c2 LIKE '%100' GROUP BY c1\G
*************************** 1. row ***************************
possible_keys: covered
Extra: Using where; Using index
```
-As the above EXPLAIN shows, MySQL now uses the covered index and avoid creating a temporary table.
+As the output of the EXPLAIN above shows, MySQL now uses the covering index and avoids creating a temporary table.
## Combined index
possible_keys: NULL
Extra: Using where; Using filesort
```
-MySQL performs a *file sort* operation that is fairly slow, especially when it has to sort many rows. To optimize this query, a combined index can be created on both columns that are being sorted.
+MySQL performs a *file sort* operation that is fairly slow, especially when it has to sort many rows. To optimize this query, create a combined index on both of the columns that are being sorted.
```sql
mysql> ALTER TABLE tb1 ADD KEY my_sort2 (c1, c2);
possible_keys: NULL
Extra: Using where; Using index
```
-The EXPLAIN now shows that MySQL can use a combined index to avoid additional sorting since the index is already sorted.
+The output of the EXPLAIN statement now shows that MySQL uses a combined index to avoid additional sorting as the index is already sorted.
## Conclusion
-Using EXPLAIN and different type of Indexes can increase performance significantly. Having an index on the table doesn't necessarily mean MySQL would be able to use it for your queries. Always validate your assumptions using EXPLAIN and optimize your queries using indexes.
+You can increase performance significantly by using EXPLAIN together with different types of indexes. Having an index on a table doesn't necessarily mean that MySQL can use it for your queries. Always validate your assumptions by using EXPLAIN and optimize your queries using indexes.
## Next steps

-- To find peer answers to your most concerned questions or post a new question/answer, visit [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
+- To find peer answers to your most important questions or to post or answer a question, visit [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
private-link Create Private Link Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-portal.md
Previously updated : 08/18/2021 Last updated : 03/30/2022 #Customer intent: As someone with a basic network background who's new to Azure, I want to create an Azure Private Link service by using the Azure portal
In this section, you create a virtual network and subnet to host the load balanc
11. Select **Create**.
-### Create NAT gateway
-
-In this section, you'll create a NAT gateway and assign it to the subnet in the virtual network you created previously. The NAT gateway is used by the resources in the load balancer virtual network for outbound internet access. If the virtual machines in the backend pool of the load balancer don't require outbound internet access, you can proceed to the next section.
-
-1. On the upper-left side of the screen, select **Create a resource > Networking > NAT gateway** or search for **NAT gateway** in the search box.
-
-2. Select **Create**.
-
-3. In **Create network address translation (NAT) gateway**, enter or select this information in the **Basics** tab:
-
- | **Setting** | **Value** |
- ||--|
- | **Project Details** | |
- | Subscription | Select your Azure subscription. |
- | Resource Group | Select **CreatePrivLinkService-rg**. |
- | **Instance details** | |
- | Name | Enter **myNATGateway** |
- | Region | Select **(US) East US 2** |
- | Availability Zone | Select **None**. |
- | Idle timeout (minutes) | Enter **10**. |
-
-4. Select the **Outbound IP** tab, or select the **Next: Outbound IP** button at the bottom of the page.
-
-5. In the **Outbound IP** tab, enter or select the following information:
-
- | **Setting** | **Value** |
- | -- | |
- | Public IP addresses | Select **Create a new public IP address**. </br> In **Name**, enter **myNATgatewayIP**. </br> Select **OK**. |
-
-6. Select the **Subnet** tab, or select the **Next: Subnet** button at the bottom of the page.
-
-7. In the **Subnet** tab, select **myVNet** in the **Virtual network** pull-down.
-
-8. Check the box next to **myBackendSubnet**.
-
-9. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
-
-10. Select **Create**.
-
### Create load balancer

In this section, you create a load balancer that load balances virtual machines.
When you're done using the private link service, delete the resource group to cl
In this quickstart, you:

* Created a virtual network and internal Azure Load Balancer.
+* Created a private link service.
+* Created a virtual network and a private endpoint for the private link service.

To learn more about Azure Private endpoint, continue to:
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
A private-link resource is the destination target of a specified private endpoin
| Azure Database for MariaDB | Microsoft.DBforMariaDB/servers | mariadbServer |
| Azure Database for MySQL | Microsoft.DBforMySQL/servers | mysqlServer |
| Azure Database for PostgreSQL - Single server | Microsoft.DBforPostgreSQL/servers | postgresqlServer |
+| Azure Device Provisioning Service | Microsoft.Devices/provisioningServices | iotDps |
| Azure IoT Hub | Microsoft.Devices/IotHubs | iotHub |
| Azure Digital Twins | Microsoft.DigitalTwins/digitalTwinsInstances | digitaltwinsinstance |
| Azure Event Grid | Microsoft.EventGrid/domains | domain |
purview Concept Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-workflow.md
description: This article describes workflows in Azure Purview, the roles they p
-+ Last updated 03/09/2022
purview How To Enable Data Use Governance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-enable-data-use-governance.md
To disable data use governance for a source, resource group, or subscription, a
1. Set the **Data use governance** toggle to **Disabled**.
-## Delegation of access control responsibility to Azure Purview
-Note:
-1. Once a resource has been enabled for *Data use Governance*, **any** Azure Purview *policy author* will be able to create access policies against it, and **any** Azure Purview *Data source admin* will be able to publish those policies at **any point afterwards**.
-1. **Any** Azure Purview *root collection admin* can create **new** *Data Source Admin* and *Policy author* roles.
-
## Additional considerations related to Data use governance

- Make sure you write down the **Name** you use when registering in Azure Purview. You will need it when you publish a policy. The recommended practice is to make the registered name exactly the same as the endpoint name.
- To disable a source for *Data use governance*, first remove it from any policy in which it is bound (that is, published).
purview How To Request Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-request-access.md
description: This article describes how a user can request access to a data sour
-+ Last updated 03/01/2022
purview How To Workflow Business Terms Approval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-business-terms-approval.md
description: This article describes how to create and manage workflows to approv
-+ Last updated 03/01/2022
purview How To Workflow Manage Requests Approvals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-manage-requests-approvals.md
description: This article outlines how to manage requests and approvals generate
-+ Last updated 03/09/2022
purview How To Workflow Manage Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-manage-runs.md
description: This article outlines how to manage workflow runs.
-+ Last updated 03/01/2022
purview How To Workflow Self Service Data Access Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-self-service-data-access-hybrid.md
description: This article describes how to create and manage hybrid self-service
-+ Last updated 03/09/2022
purview Tutorial Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-resource-group.md
The subscription or resource group needs to be registered with Azure Purview to
- [Register multiple sources - Azure Purview](register-scan-azure-multiple-sources.md)
-Enable the resource group or the subscription for access policies in Azure Purview by setting the **Data use governance** toggle to **Enabled**, as shown in the picture.
+[Enable the resource group or subscription for access policies](./how-to-enable-data-use-governance.md) in Azure Purview by setting the **Data use governance** toggle to **Enabled**, as shown in the following picture.
![Image shows how to register a resource group or subscription for policy.](./media/tutorial-data-owner-policies-resource-group/register-resource-group-for-policy.png)
-Follow this link for more information and best practices related to [registering a data resource for Data use governance](./how-to-enable-data-use-governance.md)
-
## Create and publish a data owner policy

Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owner-policy-authoring-generic.md) to create and publish a policy similar to the example shown in the image: a policy that provides security group *sg-Finance* *modify* access to resource group *finance-rg*:
purview Tutorial Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-storage.md
Register and scan each Storage account with Azure Purview to later define access
- [Register and scan Azure Data Lake Storage (ADLS) Gen2 - Azure Purview](register-scan-adls-gen2.md)
-Enable the data source for access policies in Azure Purview by setting the **Data use governance** toggle to **Enabled**, as shown in the picture.
+[Enable the data source for access policies](./how-to-enable-data-use-governance.md) in Azure Purview by setting the **Data use governance** toggle to **Enabled**, as shown in the following picture.
![Image shows how to register a data source for policy.](./media/tutorial-data-owner-policies-storage/register-data-source-for-policy-storage.png)
-Follow this link for more information and best practices related to [registering a data resource for Data use governance](./how-to-enable-data-use-governance.md)
-
## Create and publish a data owner policy

Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owner-policy-authoring-generic.md) to create and publish a policy similar to the example shown in the image: a policy that provides group *Contoso Team* *read* access to Storage account *marketinglake1*:
Check blog, demo and related tutorials
* [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./tutorial-data-owner-policies-resource-group.md)
* [Blog: What's New in Azure Purview at Microsoft Ignite 2021](https://techcommunity.microsoft.com/t5/azure-purview/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954)
* [Blog: Accessing data when folder level permission is granted](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-folder-level-permission/ba-p/3109583)
-* [Blog: Accessing data when file level permission is granted](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-file-level-permission/ba-p/3102166)
+* [Blog: Accessing data when file level permission is granted](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-file-level-permission/ba-p/3102166)
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
Previously updated : 03/16/2022 Last updated : 03/30/2022 # Service limits in Azure Cognitive Search
Maximum limits on storage, workloads, and quantities of indexes and other object
| Resource | Free | Basic&nbsp;<sup>1</sup> | S1 | S2 | S3 | S3&nbsp;HD | L1 | L2 |
| -- | - | - | | | | | | |
| Maximum indexes |3 |5 or 15 |50 |200 |200 |1000 per partition or 3000 per service |10 |10 |
-| Maximum simple fields per index |1000 |100 |1000 |1000 |1000 |1000 |1000 |1000 |
-| Maximum complex collection fields per index |40 |40 |40 |40 |40 |40 |40 |40 |
-| Maximum elements across all complex collections per document&nbsp;<sup>2</sup> |3000 |3000 |3000 |3000 |3000 |3000 |3000 |3000 |
+| Maximum simple fields per index&nbsp;<sup>2</sup> |1000 |100 |1000 |1000 |1000 |1000 |1000 |1000 |
+| Maximum complex collections per index |40 |40 |40 |40 |40 |40 |40 |40 |
+| Maximum elements across all complex collections per document&nbsp;<sup>3</sup> |3000 |3000 |3000 |3000 |3000 |3000 |3000 |3000 |
| Maximum depth of complex fields |10 |10 |10 |10 |10 |10 |10 |10 |
| Maximum [suggesters](/rest/api/searchservice/suggesters) per index |1 |1 |1 |1 |1 |1 |1 |1 |
| Maximum [scoring profiles](/rest/api/searchservice/add-scoring-profiles-to-a-search-index) per index |100 |100 |100 |100 |100 |100 |100 |100 |
Maximum limits on storage, workloads, and quantities of indexes and other object
<sup>1</sup> Basic services created before December 2017 have lower limits (5 instead of 15) on indexes. Basic tier is the only SKU with a lower limit of 100 fields per index.
-<sup>2</sup> An upper limit exists for elements because having a large number of them significantly increases the storage required for your index. An element of a complex collection is defined as a member of that collection. For example, assume a [Hotel document with a Rooms complex collection](search-howto-complex-data-types.md#indexing-complex-types), each room in the Rooms collection is considered an element. During indexing, the indexing engine can safely process a maximum of 3000 elements across the document as a whole. [This limit](search-api-migration.md#upgrade-to-2019-05-06) was introduced in `api-version=2019-05-06` and applies to complex collections only, and not to string collections or to complex fields.
+<sup>2</sup> The upper limit on fields includes both first-level fields and nested subfields in a complex collection. For example, if an index contains 15 fields and has two complex collections with 5 subfields each, the field count of your index is 25.
+
+<sup>3</sup> An upper limit exists for elements because having a large number of them significantly increases the storage required for your index. An element of a complex collection is defined as a member of that collection. For example, in a [Hotel document with a Rooms complex collection](search-howto-complex-data-types.md#indexing-complex-types), each room in the Rooms collection is considered an element. During indexing, the indexing engine can safely process a maximum of 3000 elements across the document as a whole. [This limit](search-api-migration.md#upgrade-to-2019-05-06) was introduced in `api-version=2019-05-06` and applies to complex collections only, not to string collections or to complex fields.
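The field-count arithmetic in footnote 2 can be sketched in a few lines (the numbers mirror the example above and are illustrative only):

```python
# Per-index field count: first-level fields plus every nested subfield
# of each complex collection count toward the limit (1000, or 100 on Basic).
first_level_fields = 15
subfields_per_complex_collection = [5, 5]  # two complex collections, 5 subfields each

total_field_count = first_level_fields + sum(subfields_per_complex_collection)
print(total_field_count)  # 25
```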
<a name="document-limits"></a>
sentinel Connect Microsoft 365 Defender https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-microsoft-365-defender.md
Title: Connect Microsoft 365 Defender data to Microsoft Sentinel| Microsoft Docs
description: Learn how to ingest incidents, alerts, and raw event data from Microsoft 365 Defender into Microsoft Sentinel. Previously updated : 11/09/2021 Last updated : 03/23/2022
> [!IMPORTANT] >
-> **Microsoft 365 Defender** was formerly known as **Microsoft Threat Protection** or **MTP**.
->
-> **Microsoft Defender for Endpoint** was formerly known as **Microsoft Defender Advanced Threat Protection** or **MDATP**.
->
-> **Microsoft Defender for Office 365** was formerly known as **Office 365 Advanced Threat Protection**.
+> **Microsoft Defender for Cloud Apps** was formerly known as **Microsoft Cloud App Security** or **MCAS**.
> > You may see the old names still in use for a period of time.
## Background
-Microsoft Sentinel's [Microsoft 365 Defender (M365D)](/microsoft-365/security/mtp/microsoft-threat-protection) connector with incident integration allows you to stream all M365D incidents and alerts into Microsoft Sentinel, and keeps the incidents synchronized between both portals. M365D incidents include all their alerts, entities, and other relevant information, and they are enriched by and group together alerts from M365D's component services **Microsoft Defender for Endpoint**, **Microsoft Defender for Identity**, **Microsoft Defender for Office 365**, and **Microsoft Defender for Cloud Apps**.
+Microsoft Sentinel's [Microsoft 365 Defender](/microsoft-365/security/mtp/microsoft-threat-protection) connector with incident integration allows you to stream all Microsoft 365 Defender incidents and alerts into Microsoft Sentinel, and keeps the incidents synchronized between both portals. Microsoft 365 Defender incidents include all their alerts, entities, and other relevant information, and they group together, and are enriched by, alerts from Microsoft 365 Defender's component services **Microsoft Defender for Endpoint**, **Microsoft Defender for Identity**, **Microsoft Defender for Office 365**, and **Microsoft Defender for Cloud Apps**.
-The connector also lets you stream **advanced hunting** events from Microsoft Defender for Endpoint and Microsoft Defender for Office 365 into Microsoft Sentinel, allowing you to copy those Defender components' advanced hunting queries into Microsoft Sentinel, enrich Sentinel alerts with the Defender components' raw event data to provide additional insights, and store the logs with increased retention in Log Analytics.
+The connector also lets you stream **advanced hunting** events from *all* of the above components into Microsoft Sentinel, allowing you to copy those Defender components' advanced hunting queries into Microsoft Sentinel, enrich Sentinel alerts with the Defender components' raw event data to provide additional insights, and store the logs with increased retention in Log Analytics.
For more information about incident integration and advanced hunting event collection, see [Microsoft 365 Defender integration with Microsoft Sentinel](microsoft-365-defender-sentinel-integration.md#advanced-hunting-event-collection).
For more information about incident integration and advanced hunting event colle
1. To avoid duplication of incidents, it is recommended to mark the check box labeled **Turn off all Microsoft incident creation rules for these products.** > [!NOTE]
- > When you enable the Microsoft 365 Defender connector, all of the M365D components' connectors (the ones mentioned at the beginning of this article) are automatically connected in the background. In order to disconnect one of the components' connectors, you must first disconnect the Microsoft 365 Defender connector.
+ > When you enable the Microsoft 365 Defender connector, all of the Microsoft 365 Defender components' connectors (the ones mentioned at the beginning of this article) are automatically connected in the background. In order to disconnect one of the components' connectors, you must first disconnect the Microsoft 365 Defender connector.
1. To query Microsoft 365 Defender incident data, use the following statement in the query window: ```kusto
For more information about incident integration and advanced hunting event colle
| **[DeviceImageLoadEvents](/microsoft-365/security/defender/advanced-hunting-deviceimageloadevents-table)** | DLL loading events |
| **[DeviceEvents](/microsoft-365/security/defender/advanced-hunting-deviceevents-table)** | Multiple event types, including events triggered by security controls such as Windows Defender Antivirus and exploit protection |
| **[DeviceFileCertificateInfo](/microsoft-365/security/defender/advanced-hunting-DeviceFileCertificateInfo-table)** | Certificate information of signed files obtained from certificate verification events on endpoints |
- |
# [Defender for Office 365](#tab/MDO)
For more information about incident integration and advanced hunting event colle
| **[EmailEvents](/microsoft-365/security/defender/advanced-hunting-emailevents-table)** | Microsoft 365 email events, including email delivery and blocking events |
| **[EmailPostDeliveryEvents](/microsoft-365/security/defender/advanced-hunting-emailpostdeliveryevents-table)** | Security events that occur post-delivery, after Microsoft 365 has delivered the emails to the recipient mailbox |
| **[EmailUrlInfo](/microsoft-365/security/defender/advanced-hunting-emailurlinfo-table)** | Information about URLs on emails |
- |
+
+ # [Defender for Identity (New!)](#tab/MDI)
+
+ | Table name | Events type |
+ |-|-|
+ | **[IdentityDirectoryEvents](/microsoft-365/security/defender/advanced-hunting-identitydirectoryevents-table)** | Various identity-related events, like password changes, password expirations, and user principal name (UPN) changes, captured from an on-premises Active Directory domain controller<br><br>Also includes system events on the domain controller |
+ | **[IdentityInfo](/microsoft-365/security/defender/advanced-hunting-identityinfo-table)** | Information about user accounts obtained from various services, including Azure Active Directory |
+ | **[IdentityLogonEvents](/microsoft-365/security/defender/advanced-hunting-identitylogonevents-table)** | Authentication activities made through your on-premises Active Directory, as captured by Microsoft Defender for Identity <br><br>Authentication activities related to Microsoft online services, as captured by Microsoft Defender for Cloud Apps |
+ | **[IdentityQueryEvents](/microsoft-365/security/defender/advanced-hunting-identityqueryevents-table)** | Information about queries performed against Active Directory objects such as users, groups, devices, and domains |
+
+ # [Defender for Cloud Apps (New!)](#tab/MDCA)
+
+ | Table name | Events type |
+ |-|-|
+ | **[CloudAppEvents](/microsoft-365/security/defender/advanced-hunting-cloudappevents-table)** | Information about activities in various cloud apps and services covered by Microsoft Defender for Cloud Apps |
+
+ # [Defender alerts (New!)](#tab/MDA)
+
+ | Table name | Events type |
+ |-|-|
+ | **[AlertInfo](/microsoft-365/security/defender/advanced-hunting-alertinfo-table)** | Information about alerts from Microsoft 365 Defender components |
+ | **[AlertEvidence](/microsoft-365/security/defender/advanced-hunting-alertevidence-table)** | Information about various entities - files, IP addresses, URLs, users, devices - associated with alerts from Microsoft 365 Defender components |
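Once the connector streams these tables into the workspace, they can be queried like any other Log Analytics table. A minimal sketch (the table name comes from the list above; `Application` and `ActionType` follow the Microsoft 365 Defender advanced hunting schema):

```kusto
// Count recent Defender for Cloud Apps activities by app and action type
CloudAppEvents
| where TimeGenerated > ago(1d)
| summarize Activities = count() by Application, ActionType
| order by Activities desc
```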
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| | | | **Data ingestion method** | **Azure service-to-service integration:<br>[Connect data from Microsoft 365 Defender to Microsoft Sentinel](connect-microsoft-365-defender.md)** (Top connector article) | | **License prerequisites/<br>Cost information** | [Valid license for Microsoft 365 Defender](/microsoft-365/security/mtp/prerequisites)
-| **Log Analytics table(s)** | **Alerts:**<br>SecurityAlert<br>SecurityIncident<br>**Defender for Endpoint events:**<br>DeviceEvents<br>DeviceFileEvents<br>DeviceImageLoadEvents<br>DeviceInfo<br>DeviceLogonEvents<br>DeviceNetworkEvents<br>DeviceNetworkInfo<br>DeviceProcessEvents<br>DeviceRegistryEvents<br>DeviceFileCertificateInfo<br>**Defender for Office 365 events:**<br>EmailAttachmentInfo<br>EmailUrlInfo<br>EmailEvents<br>EmailPostDeliveryEvents |
+| **Log Analytics table(s)** | **Alerts:**<br>SecurityAlert<br>SecurityIncident<br>**Defender for Endpoint events:**<br>DeviceEvents<br>DeviceFileEvents<br>DeviceImageLoadEvents<br>DeviceInfo<br>DeviceLogonEvents<br>DeviceNetworkEvents<br>DeviceNetworkInfo<br>DeviceProcessEvents<br>DeviceRegistryEvents<br>DeviceFileCertificateInfo<br>**Defender for Office 365 events:**<br>EmailAttachmentInfo<br>EmailUrlInfo<br>EmailEvents<br>EmailPostDeliveryEvents<br>**Defender for Identity events:**<br>IdentityDirectoryEvents<br>IdentityInfo<br>IdentityLogonEvents<br>IdentityQueryEvents<br>**Defender for Cloud Apps events:**<br>CloudAppEvents<br>**Defender alerts as events:**<br>AlertInfo<br>AlertEvidence |
| **DCR support** | Not currently supported |
| **Supported by** | Microsoft |
sentinel Dns Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/dns-normalization-schema.md
Microsoft Sentinel provides the following out-of-the-box, product-specific DNS p
| **Source** | **Built-in parsers** | **Workspace deployed parsers** |
| | | |
-|**Microsoft DNS Server**<br>Collected by the DNS connector<br> and the Log Analytics Agent | `_ASim_DnsMicrosoftOMS` (regular) <br> `_Im_DnsMicrosoftOMS` (filtering) <br><br> | `ASimDnsMicrosoftOMS` (regular) <br>`vimDnsMicrosoftOMS` (filtering) <br><br> |
-| **Microsoft DNS Server**<br>Collected by NXlog| `_ASim_DnsMicrosoftNXlog` (regular)<br>`_Im_DnsMicrosoftNXlog` (filtering)| `ASimDnsMicrosoftNXlog` (regular)<br> `vimDnsMicrosoftNXlog` (filtering)|
-| **Azure Firewall** | `_ASim_DnsAzureFirewall` (regular)<br> `_Im_DnsAzureFirewall` (filtering) | `ASimDnsAzureFirewall` (regular)<br>`vimDnsAzureFirewall` (filtering) |
-| **Sysmon for Windows** (event 22)<br> Collected by the Log Analytics Agent<br> or the Azure Monitor Agent,<br>supporting both the<br> `Event` and `WindowsEvent` tables | `_ASim_DnsMicrosoftSysmon` (regular)<br> `_Im_DnsMicrosoftSysmon` (filtering) | `ASimDnsMicrosoftSysmon` (regular)<br> `vimDnsMicrosoftSysmon` (filtering) |
-| **Cisco Umbrella** | `_ASim_DnsCiscoUmbrella` (regular)<br> `_Im_DnsCiscoUmbrella` (filtering) | `ASimDnsCiscoUmbrella` (regular)<br> `vimDnsCiscoUmbrella` (filtering) |
-| **Infoblox NIOS**<br><br>The InfoBlox parsers<br>require [configuring the relevant sources](normalization-manage-parsers.md#configure-the-sources-relevant-to-a-source-specific-parser).<br> Use `InfobloxNIOS` as the source type. | `_ASim_DnsInfobloxNIOS` (regular)<br> `_Im_DnsInfobloxNIOS` (filtering) | `ASimDnsInfobloxNIOS` (regular)<br> `vimDnsInfobloxNIOS` (filtering) |
-| **GCP DNS** | `_ASim_DnsGcp` (regular)<br> `_Im_DnsGcp` (filtering) | `ASimDnsGcp` (regular)<br> `vimDnsGcp` (filtering) |
-| **Corelight Zeek DNS events** | `_ASim_DnsCorelightZeek` (regular)<br> `_Im_DnsCorelightZeek` (filtering) | `ASimDnsCorelightZeek` (regular)<br> `vimDnsCorelightZeek` (filtering) |
-| **Zscaler ZIA** |`_ASim_DnsZscalerZIA` (regular)<br> `_Im_DnsZscalerZIA` (filtering) | `AsimDnsZscalerZIA` (regular)<br> `vimDnsSzcalerZIA` (filtering) |
-
+|**Microsoft DNS Server**<br>Collected by the DNS connector<br> and the Log Analytics Agent | `_ASim_Dns_MicrosoftOMS` (regular) <br> `_Im_Dns_MicrosoftOMS` (filtering) <br><br> | `ASimDnsMicrosoftOMS` (regular) <br>`vimDnsMicrosoftOMS` (filtering) <br><br> |
+| **Microsoft DNS Server**<br>Collected by NXlog| `_ASim_Dns_MicrosoftNXlog` (regular)<br>`_Im_Dns_MicrosoftNXlog` (filtering)| `ASimDnsMicrosoftNXlog` (regular)<br> `vimDnsMicrosoftNXlog` (filtering)|
+| **Azure Firewall** | `_ASim_Dns_AzureFirewall` (regular)<br> `_Im_Dns_AzureFirewall` (filtering) | `ASimDnsAzureFirewall` (regular)<br>`vimDnsAzureFirewall` (filtering) |
+| **Sysmon for Windows** (event 22)<br> Collected by the Log Analytics Agent<br> or the Azure Monitor Agent,<br>supporting both the<br> `Event` and `WindowsEvent` tables | `_ASim_Dns_MicrosoftSysmon` (regular)<br> `_Im_Dns_MicrosoftSysmon` (filtering) | `ASimDnsMicrosoftSysmon` (regular)<br> `vimDnsMicrosoftSysmon` (filtering) |
+| **Cisco Umbrella** | `_ASim_Dns_CiscoUmbrella` (regular)<br> `_Im_Dns_CiscoUmbrella` (filtering) | `ASimDnsCiscoUmbrella` (regular)<br> `vimDnsCiscoUmbrella` (filtering) |
+| **Infoblox NIOS**<br><br>The InfoBlox parsers<br>require [configuring the relevant sources](normalization-manage-parsers.md#configure-the-sources-relevant-to-a-source-specific-parser).<br> Use `InfobloxNIOS` as the source type. | `_ASim_Dns_InfobloxNIOS` (regular)<br> `_Im_Dns_InfobloxNIOS` (filtering) | `ASimDnsInfobloxNIOS` (regular)<br> `vimDnsInfobloxNIOS` (filtering) |
+| **GCP DNS** | `_ASim_Dns_Gcp` (regular)<br> `_Im_Dns_Gcp` (filtering) | `ASimDnsGcp` (regular)<br> `vimDnsGcp` (filtering) |
+| **Corelight Zeek DNS events** | `_ASim_Dns_CorelightZeek` (regular)<br> `_Im_Dns_CorelightZeek` (filtering) | `ASimDnsCorelightZeek` (regular)<br> `vimDnsCorelightZeek` (filtering) |
+| **Vectra AI** |`_ASim_Dns_VectraAI` (regular)<br> `_Im_Dns_VectraAI` (filtering) | `AsimDnsVectraAI` (regular)<br> `vimDnsVectraAI` (filtering) |
+| **Zscaler ZIA** |`_ASim_Dns_ZscalerZIA` (regular)<br> `_Im_Dns_ZscalerZIA` (filtering) | `AsimDnsZscalerZIA` (regular)<br> `vimDnsZscalerZIA` (filtering) |
+||||
These parsers can be deployed from the [Microsoft Sentinel GitHub repository](https://aka.ms/azsentinelDNS).
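As a hedged sketch of how a filtering parser from the table above might be invoked: parameter names such as `starttime`, `endtime`, and `domain_has_any` are common to the ASIM DNS filtering parsers, but verify them against the signature of the parser deployed in your workspace.

```kusto
// Query the last day of Cisco Umbrella DNS events for a watched domain
_Im_Dns_CiscoUmbrella(
    starttime = ago(1d),
    endtime = now(),
    domain_has_any = dynamic(['contoso.com']))
| summarize Queries = count() by SrcIpAddr, DnsQuery
```

Passing filters as parameters, rather than piping the parser output into `where` clauses, lets the parser prune data early and is noticeably cheaper on large workspaces.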
sentinel Microsoft 365 Defender Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-365-defender-sentinel-integration.md
Title: Microsoft 365 Defender integration with Microsoft Sentinel | Microsoft Do
description: Learn how using Microsoft 365 Defender together with Microsoft Sentinel lets you use Microsoft Sentinel as your universal incidents queue while seamlessly applying Microsoft 365 Defender's strengths to help investigate Microsoft 365 security incidents. Also, learn how to ingest Defender components' advanced hunting data into Microsoft Sentinel. Previously updated : 11/09/2021 Last updated : 03/23/2022
> [!IMPORTANT] >
-> **Microsoft 365 Defender** was formerly known as **Microsoft Threat Protection** or **MTP**.
->
-> **Microsoft Defender for Endpoint** was formerly known as **Microsoft Defender Advanced Threat Protection** or **MDATP**.
->
-> **Microsoft Defender for Office 365** was formerly known as **Office 365 Advanced Threat Protection**.
+> **Microsoft Defender for Cloud Apps** was formerly known as **Microsoft Cloud App Security** or **MCAS**.
>
> You may see the old names still in use for a period of time.

## Incident integration
-Microsoft Sentinel's [Microsoft 365 Defender](/microsoft-365/security/mtp/microsoft-threat-protection) incident integration allows you to stream all Microsoft 365 Defender incidents into Microsoft Sentinel and keep them synchronized between both portals. Incidents from Microsoft 365 Defender (formerly known as Microsoft Threat Protection or MTP) include all associated alerts, entities, and relevant information, providing you with enough context to perform triage and preliminary investigation in Microsoft Sentinel. Once in Sentinel, Incidents will remain bi-directionally synced with Microsoft 365 Defender, allowing you to take advantage of the benefits of both portals in your incident investigation.
+Microsoft Sentinel's [Microsoft 365 Defender](/microsoft-365/security/mtp/microsoft-threat-protection) incident integration allows you to stream all Microsoft 365 Defender incidents into Microsoft Sentinel and keep them synchronized between both portals. Incidents from Microsoft 365 Defender (formerly known as Microsoft Threat Protection or MTP) include all associated alerts, entities, and relevant information, providing you with enough context to perform triage and preliminary investigation in Microsoft Sentinel. Once in Sentinel, incidents will remain bi-directionally synced with Microsoft 365 Defender, allowing you to take advantage of the benefits of both portals in your incident investigation.
This integration gives Microsoft 365 security incidents the visibility to be managed from within Microsoft Sentinel, as part of the primary incident queue across the entire organization, so you can see – and correlate – Microsoft 365 incidents together with those from all of your other cloud and on-premises systems. At the same time, it allows you to take advantage of the unique strengths and capabilities of Microsoft 365 Defender for in-depth investigations and a Microsoft 365-specific experience across the Microsoft 365 ecosystem. Microsoft 365 Defender enriches and groups alerts from multiple Microsoft 365 products, both reducing the size of the SOC's incident queue and shortening the time to resolve. The component services that are part of the Microsoft 365 Defender stack are:

- **Microsoft Defender for Endpoint** (formerly Microsoft Defender ATP)
- **Microsoft Defender for Identity** (formerly Azure ATP)
- **Microsoft Defender for Office 365** (formerly Office 365 ATP)
-- **Microsoft Defender for Cloud Apps**
+- **Microsoft Defender for Cloud Apps** (formerly Microsoft Cloud App Security)
In addition to collecting alerts from these components, Microsoft 365 Defender generates alerts of its own. It creates incidents from all of these alerts and sends them to Microsoft Sentinel.
In Microsoft 365 Defender, all alerts from one incident can be transferred to an
## Advanced hunting event collection
-The Microsoft 365 Defender connector also lets you stream **advanced hunting** events - a type of raw event data - from Microsoft 365 Defender and its component services into Microsoft Sentinel. You can currently collect [advanced hunting](/microsoft-365/security/defender/advanced-hunting-overview) events from Microsoft Defender for Endpoint and *(from October 2021)* from Microsoft Defender for Office 365, and stream them straight into purpose-built tables in your Microsoft Sentinel workspace. These tables are built on the same schema that is used in the Microsoft 365 Defender portal, giving you complete access to the full set of advanced hunting events, and allowing you to do the following:
+The Microsoft 365 Defender connector also lets you stream **advanced hunting** events - a type of raw event data - from Microsoft 365 Defender and its component services into Microsoft Sentinel. You can now *(as of April 2022)* collect [advanced hunting](/microsoft-365/security/defender/advanced-hunting-overview) events from *all* Microsoft 365 Defender components, and stream them straight into purpose-built tables in your Microsoft Sentinel workspace. These tables are built on the same schema that is used in the Microsoft 365 Defender portal, giving you complete access to the full set of advanced hunting events, and allowing you to do the following:
-- Easily copy your existing Microsoft Defender for Endpoint/Office 365 advanced hunting queries into Microsoft Sentinel.
+- Easily copy your existing Microsoft Defender for Endpoint/Office 365/Identity/Cloud Apps advanced hunting queries into Microsoft Sentinel.
- Use the raw event logs to provide further insights for your alerts, hunting, and investigation, and correlate these events with events from other data sources in Microsoft Sentinel.

-- Store the logs with increased retention, beyond Microsoft Defender for Endpoint's/Office 365's or Microsoft 365 Defender's default retention of 30 days. You can do so by configuring the retention of your workspace or by configuring per-table retention in Log Analytics.
+- Store the logs with increased retention, beyond Microsoft 365 Defender's or its components' default retention of 30 days. You can do so by configuring the retention of your workspace or by configuring per-table retention in Log Analytics.
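Per-table retention can be configured with the Azure CLI, for example. This is a sketch: the resource group and workspace names are placeholders, and the `az monitor log-analytics workspace table update` command requires a reasonably recent Azure CLI version.

```azurecli
# Keep DeviceEvents for two years instead of the workspace default retention
az monitor log-analytics workspace table update \
    --resource-group <my-resource-group> \
    --workspace-name <my-workspace> \
    --name DeviceEvents \
    --retention-time 730
```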
## Next steps
sentinel Sap Solution Deploy Alternate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-deploy-alternate.md
spname=<sp-name>
kvname=<keyvaultname> # Optional when Azure MI not enabled - Create sp user for AZ cli connection, save details for env.list file
-az ad sp create-for-rbac --name $spname --role Contributor
+az ad sp create-for-rbac --name $spname --role Contributor --scopes /subscriptions/<subscription_id>
SpID=$(az ad sp list --display-name $spname --query "[].appId" --output tsv)
sentinel Sap Solution Deploy Snc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-deploy-snc.md
spname=<sp-name>
kvname=<keyvaultname> # Optional when Azure MI not enabled - Create sp user for AZ cli connection, save details for env.list file
-az ad sp create-for-rbac --name $spname --role Contributor
+az ad sp create-for-rbac --name $spname --role Contributor --scopes /subscriptions/<subscription_id>
SpID=$(az ad sp list --display-name $spname --query "[].appId" --output tsv)
service-fabric Service Fabric Reliable Services Quick Start Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-quick-start-java.md
This article explains the basics of Azure Service Fabric Reliable Services and walks you through creating and deploying a simple Reliable Service application written in Java.
+[Watch this training video that shows you how to create a stateless Reliable service:](/shows/building-microservices-applications-on-azure-service-fabric/creating-a-stateless-reliable-service)
+
## Installation and setup

Before you start, make sure you have the Service Fabric development environment set up on your machine. If you need to set it up, go to [getting started on Mac](service-fabric-get-started-mac.md) or [getting started on Linux](service-fabric-get-started-linux.md).
service-fabric Service Fabric Reliable Services Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-quick-start.md
An Azure Service Fabric application contains one or more services that run your code. This guide shows you how to create both stateless and stateful Service Fabric applications with [Reliable Services](service-fabric-reliable-services-introduction.md).
+[Watch this training video that also shows you how to create a stateless Reliable service.](/shows/building-microservices-applications-on-azure-service-fabric/creating-a-stateless-reliable-service)
## Basic concepts

To get started with Reliable Services, you only need to understand a few basic concepts:
spatial-anchors Get Started Unity Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-unity-android.md
To complete this quickstart, make sure you have:
[!INCLUDE [Create Spatial Anchors resource](../../../includes/spatial-anchors-get-started-create-resource.md)]
-## Download and open the Unity sample project
+## Download sample project and import SDK
### Clone Samples Repository
To complete this quickstart, make sure you have:
Follow the instructions [here](../how-tos/setup-unity-project.md#download-asa-packages) to download and import the ASA SDK packages required for the Android platform.
-### Configure Unity
+## Configure Unity
[!INCLUDE [Open Unity Project](../../../includes/spatial-anchors-open-unity-project.md)]
spatial-anchors Get Started Unity Hololens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-unity-hololens.md
To complete this quickstart:
- You need a HoloLens device with [developer mode](/windows/mixed-reality/using-visual-studio) enabled. [Windows 10 May 2020 Update or later](/windows/mixed-reality/whats-new/release-notes-may-2020) must be installed on the device. To update to the latest release on HoloLens, open the **Settings** app, go to **Update & Security**, and then select **Check for updates**.
- You need a Windows computer with <a href="https://www.visualstudio.com/downloads/" target="_blank">Visual Studio 2019</a> or later installed. Your Visual Studio installation must include the **Universal Windows Platform development** workload and the **Windows 10 SDK (10.0.18362.0 or newer)** component. You must also install <a href="https://git-scm.com/download/win" target="_blank">Git for Windows</a> and <a href="https://git-lfs.github.com/">Git LFS</a>.
- You need to have Unity installed. For supported versions and required capabilities, visit the [Unity project setup page](../how-tos/setup-unity-project.md).
-- On your app, you need to enable **Virtual Reality Supported** with **Windows Mixed Reality SDK**. This setting is in **Build Settings** > **Player Settings** > **XR Settings**.
+
[!INCLUDE [Create Spatial Anchors resource](../../../includes/spatial-anchors-get-started-create-resource.md)]
-## Download and open the Unity sample project
+## Download sample project and import SDK
### Clone Samples Repository
To complete this quickstart:
Follow the instructions [here](../how-tos/setup-unity-project.md#download-asa-packages) to download and import the ASA SDK packages required for the HoloLens platform.
-### Configure Unity
+## Configure Unity
[!INCLUDE [Open Unity Project](../../../includes/spatial-anchors-open-unity-project.md)]

Open **Build Settings** by selecting **File** > **Build Settings**.
+
In the **Platform** section, select **Universal Windows Platform**. Change the **Target Device** to **HoloLens**. Select **Switch Platform** to change the platform to **Universal Windows Platform**. Unity might prompt you to install UWP support components if they're missing.
spatial-anchors Get Started Unity Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-unity-ios.md
To complete this quickstart, make sure you have:
[!INCLUDE [Create Spatial Anchors resource](../../../includes/spatial-anchors-get-started-create-resource.md)]
-## Download and open the Unity sample project
+## Download sample project and import SDK
### Clone Samples Repository
To complete this quickstart, make sure you have:
Follow the instructions [here](../how-tos/setup-unity-project.md#download-asa-packages) to download and import the ASA SDK packages required for the iOS platform.
-### Configure Unity
+## Configure Unity
[!INCLUDE [Open Unity Project](../../../includes/spatial-anchors-open-unity-project.md)]
spring-cloud Concept App Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/concept-app-status.md
description: Learn the app status categories in Azure Spring Cloud
Previously updated : 04/10/2020 Last updated : 03/30/2022
This article shows you how to view app status for Azure Spring Cloud.
-The Azure Spring Cloud UI delivers information about the status of running applications. There is an **Apps** option for each resource group in a subscription that displays general status of application types. For each application type, there's a display of **Application instances**.
+The Azure Spring Cloud UI delivers information about the status of running applications. There's an **Apps** option for each resource group in a subscription that displays general status of application types. For each application type, there's a display of **Application instances**.
## Apps status
To view general status of an application type, select **Apps** in the left navig
* **Provisioning Status**: Shows the deployment's provisioning state.
* **Running instance**: Shows how many app instances are running and how many app instances you desire. If you stop the app, this column shows **stopped**.
-* **Registered Instance**: Shows how many app instances are registered to Eureka and how many app instances you desire. If you stop the app, this column shows **stopped**. Note that Eureka is not applicable to enterprise tier. For more information if you're using the enterprise tier, see [Use Service Registry](how-to-enterprise-service-registry.md).
+* **Registered Instance**: Shows how many app instances are registered to Eureka and how many app instances you desire. If you stop the app, this column shows **stopped**. Eureka isn't applicable to enterprise tier. For more information if you're using the enterprise tier, see [Use Service Registry](how-to-enterprise-service-registry.md).
![Apps status](media/spring-cloud-concept-app-status/apps-ui-status.png)
-The deployment status is reported as one of the following values:
+## Deployment status
-| Enum | Definition |
+The deployment status shows the running state of the deployment. The status is reported as one of the following values:
+
+| Value | Definition |
||--|
| Running | The deployment SHOULD be running. |
| Stopped | The deployment SHOULD be stopped. |
-The provisioning state is accessible only from the CLI. It is reported as one of the following values:
+## Provisioning status
+
+The *deployment provisioning* status describes the state of operations of the deployment resource. This status shows the comparison between the functionality and the deployment definition.
+
+The provisioning state is accessible only from the CLI. It's reported as one of the following values:
-| Enum | Definition |
+| Value | Definition |
|--||
-| Creating | The resource is creating. |
-| Updating | The resource is updating. |
-| Succeeded | Successfully supplied resources and deploys the binary. |
+| Creating | The resource is creating and isn't ready. |
+| Updating | The resource is updating and the functionality may be different from the deployment definition until the update is complete. |
+| Succeeded | Successfully supplied resources and deploys the binary. The deployment's functionality is the same as the definition and all app instances are working. |
| Failed | Failed to achieve the *Succeeded* goal. |
-| Deleting | The resource is being deleted. This prevents operation, and the resource is not available in this status. |
+| Deleting | The resource is being deleted, which prevents operation, and the resource isn't available in this status. |
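Because the provisioning state is exposed only through the CLI, you can query it with a command along these lines. This is a sketch: the resource group, service, app, and deployment names are placeholders, and the `--query` path assumes the deployment resource's standard `properties.provisioningState` field.

```azurecli
az spring-cloud app deployment show \
    --resource-group <myResourceGroup> \
    --service <myServiceName> \
    --app <myAppName> \
    --name default \
    --query properties.provisioningState \
    --output tsv
```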
## App instances status
-To view the status of a specific instance of a deployed app, select the **Name** of the app in the **Apps** UI. The results will display:
+The *app instance* status represents every instance of the app. To view the status of a specific instance of a deployed app, select the **Name** of the app in the **Apps** UI. The results will display:
-* **Status**: Whether the instance is running or its state
-* **DiscoveryStatus**: The registered status of the app instance in Eureka server
+* **Status**: Whether the instance is running or its current state
+* **DiscoveryStatus**: The registered status of the app instance in the Eureka server
![App instances status](media/spring-cloud-concept-app-status/apps-ui-instance-status.png)
+### App instance status
+ The instance status is reported as one of the following values:
-| Enum | Definition |
+| Value | Definition |
|-||
-| Starting | The binary is successfully deployed to the given instance. Instance booting the jar file may fail because jar cannot run properly. |
-| Running | The instance works. |
-| Failed | The app instance failed to start user's binary after several retries. |
-| Terminating | The app instance is shutting down. |
+| Starting | The binary is successfully deployed to the given instance. The instance booting the jar file may fail because the jar can't run properly. Azure Spring Cloud will restart the app instance in 60 seconds if it detects that the app instance is still in the *Starting* state. |
+| Running | The instance works. The instance can serve requests from inside Azure Spring Cloud. |
+| Failed | The app instance failed to start the user's binary after several retries. The app instance may be in one of the following states:<br/>- The app may stay in the *Starting* status and never be ready for serving requests.<br/>- The app may boot up but crash in a few seconds. |
+| Terminating | The app instance is shutting down. The app may not serve requests and the app instance will be removed. |
+
+### App discovery status
The discovery status of the instance is reported as one of the following values:
-| Enum | Definition |
+| Value | Definition |
|-||
| UP | The app instance is registered to Eureka and ready to receive traffic. |
| OUT_OF_SERVICE | The app instance is registered to Eureka and able to receive traffic, but is intentionally taken out of service for traffic. |
-| DOWN | The app instance is not registered to Eureka or is registered but not able to receive traffic. |
+| DOWN | The app instance isn't registered to Eureka or is registered but not able to receive traffic. |
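The per-instance **Status** and **DiscoveryStatus** values shown in the UI can also be pulled from the CLI. The following is a sketch, assuming the active deployment's instances expose `status` and `discoveryStatus` fields; the resource names are placeholders.

```azurecli
az spring-cloud app show \
    --resource-group <myResourceGroup> \
    --service <myServiceName> \
    --name <myAppName> \
    --query "properties.activeDeployment.properties.instances[].{name:name, status:status, discovery:discoveryStatus}" \
    --output table
```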
+
+## App registration status
+The *app registration* status shows the state in service discovery. Azure Spring Cloud uses Eureka for service discovery. For more information on how the Eureka client calculates the state, see [Eureka's health checks](https://cloud.spring.io/spring-cloud-static/Greenwich.RELEASE/multi/multi__service_discovery_eureka_clients.html#_eureka_s_health_checks).
## Next steps

* [Prepare a Spring or Steeltoe application for deployment in Azure Spring Cloud](how-to-prepare-app-deployment.md)
spring-cloud Tutorial Managed Identities Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/tutorial-managed-identities-mysql.md
+
+ Title: "Tutorial: Managed identity to connect an Azure Database for MySQL to apps in Azure Spring Cloud"
+description: Set up managed identity to connect an Azure Database for MySQL to apps in Azure Spring Cloud
+Last updated : 03/30/2022
+# Tutorial: Use a managed identity to connect an Azure Database for MySQL to an app in Azure Spring Cloud
+
+**This article applies to:** ✔️ Java
+
+This article shows you how to create a managed identity for an app in Azure Spring Cloud. This article also shows you how to use the managed identity to access an Azure Database for MySQL with the MySQL password stored in Key Vault.
+
+The following video describes how to manage secrets using Azure Key Vault.
++
+> [!VIDEO https://www.youtube.com/embed/A8YQOoZncu8?list=PLPeZXlCR7ew8LlhnSH63KcM0XhMKxT1k_]
+
+## Prerequisites
+
+* [JDK 8](/azure/java/jdk/java-jdk-install)
+* [Maven 3.0 or above](http://maven.apache.org/install.html)
+* [Azure CLI](/cli/azure/install-azure-cli?view=azure-cli-latest) or [Azure Cloud Shell](/azure/cloud-shell/overview)
+* An existing Key Vault. If you need to create a Key Vault, you can use the [Azure portal](/azure/key-vault/secrets/quick-create-portal) or [Azure CLI](/cli/azure/keyvault?view=azure-cli-latest#az-keyvault-create)
+* An existing Azure Database for MySQL instance with a database named `demo`. If you need to create an Azure Database for MySQL, you can use the [Azure portal](/azure/mysql/quickstart-create-mysql-server-database-using-azure-portal) or [Azure CLI](/azure/mysql/quickstart-create-mysql-server-database-using-azure-cli)
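If your MySQL server exists but doesn't yet have the `demo` database, you can create it with a command along these lines (a sketch assuming a single-server Azure Database for MySQL; the resource group and server names are placeholders):

```azurecli
az mysql db create \
    --resource-group <myResourceGroup> \
    --server-name <myMySQLServer> \
    --name demo
```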
+
+## Create a resource group
+
+A resource group is a logical container where Azure resources are deployed and managed. Create a resource group to contain both the Key Vault and Spring Cloud using the command [az group create](/cli/azure/group#az_group_create):
+
+```azurecli
+az group create --location <myLocation> --name <myResourceGroup>
+```
+
+## Set up your Key Vault
+
+To create a Key Vault, use the command [az keyvault create](/cli/azure/keyvault#az_keyvault_create):
+
+> [!Important]
+> Each Key Vault must have a unique name. Replace *\<myKeyVaultName>* with the name of your Key Vault in the following examples.
+
+```azurecli
+az keyvault create --name <myKeyVaultName> -g <myResourceGroup>
+```
+
+Make a note of the returned `vaultUri`, which will be in the format `https://<your-keyvault-name>.vault.azure.net`. It will be used in the following step.
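If you need to look up the `vaultUri` again later, you can retrieve it with the following (the Key Vault name is a placeholder):

```azurecli
az keyvault show --name <myKeyVaultName> --query properties.vaultUri --output tsv
```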
+
+You can now place a secret in your Key Vault with the command [az keyvault secret set](/cli/azure/keyvault/secret#az_keyvault_secret_set):
+
+```azurecli
+az keyvault secret set \
+ --vault-name <your-keyvault-name> \
+ --name <MYSQL-PASSWORD> \
+ --value <MySQL-PASSWORD>
+```
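To confirm the secret was stored, you can read it back. This is a quick check; the vault and secret names must match what you used above.

```azurecli
az keyvault secret show \
    --vault-name <your-keyvault-name> \
    --name <MYSQL-PASSWORD> \
    --query value \
    --output tsv
```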
+
+## Se