Updates from: 08/07/2021 03:12:24
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector Token Enrichment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-api-connector-token-enrichment.md
+
+ Title: Token enrichment - Azure Active Directory B2C
+description: Enrich tokens with claims from external sources using APIs.
+++++++ Last updated : 08/04/2021++
+zone_pivot_groups: b2c-policy-type
++
+# Enrich tokens with claims from external sources using API connectors
++
+Azure Active Directory B2C (Azure AD B2C) enables identity developers to integrate an interaction with a RESTful API into their user flow using [API connectors](api-connectors-overview.md). At the end of this walkthrough, you'll be able to create an Azure AD B2C user flow that interacts with APIs to enrich tokens with information from external sources.
++
+You can use API connectors applied to the **Before sending the token (preview)** step to enrich tokens for your applications with information from external sources. When a user signs in or signs up, Azure AD B2C will call the API endpoint configured in the API connector, which can query information about a user in downstream services such as cloud services, custom user stores, custom permission systems, legacy identity systems, and more.
++
+You can create an API endpoint using one of our [samples](api-connector-samples.md#api-connector-rest-api-samples).
+
+## Prerequisites
++
+## Create an API connector
+
+To use an [API connector](api-connectors-overview.md), you first create the API connector and then enable it in a user flow.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Under **Azure services**, select **Azure AD B2C**.
+3. Select **API connectors**, and then select **New API connector**.
+
+ ![Screenshot of the basic API connector configuration](media/add-api-connector-token-enrichment/api-connector-new.png)
+
+4. Provide a display name for the call. For example, **Enrich token from external source**.
+5. Provide the **Endpoint URL** for the API call.
+6. Choose the **Authentication type** and configure the authentication information for calling your API. Learn how to [Secure your API Connector](secure-rest-api.md).
+
+ ![Screenshot of authentication configuration for an API connector](media/add-api-connector-token-enrichment/api-connector-config.png)
+
+7. Select **Save**.
+
+## Enable the API connector in a user flow
+
+Follow these steps to add an API connector to a sign-up user flow.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Under **Azure services**, select **Azure AD B2C**.
+3. Select **User flows**, and then select the user flow you want to add the API connector to.
+4. Select **API connectors**, and then select the API endpoint you want to invoke at the **Before sending the token (preview)** step in the user flow:
+
+ ![Screenshot of selecting an API connector for a user flow step](media/add-api-connector-token-enrichment/api-connectors-user-flow-select.png)
+
+5. Select **Save**.
+
+This step only exists for **Sign up and sign in (Recommended)**, **Sign up (Recommended)**, and **Sign in (Recommended)** user flows.
+
+## Example request sent to the API at this step
+
+An API connector at this step is invoked when a token is about to be issued during sign-ins and sign-ups.
+
+An API connector materializes as an **HTTP POST** request, sending user attributes ('claims') as key-value pairs in a JSON body. Attributes are serialized similarly to [Microsoft Graph](/graph/api/resources/user#properties) user properties.
+
+```http
+POST <API-endpoint>
+Content-type: application/json
+
+{
+ "email": "johnsmith@fabrikam.onmicrosoft.com",
+ "identities": [
+ {
+ "signInType":"federated",
+ "issuer":"facebook.com",
+ "issuerAssignedId":"0123456789"
+ }
+ ],
+ "displayName": "John Smith",
+ "objectId": "ab3ec3b2-a435-45e4-b93a-56a005e88bb7",
+ "extension_<extensions-app-id>_CustomAttribute1": "custom attribute value",
+ "extension_<extensions-app-id>_CustomAttribute2": "custom attribute value",
+ "objectId": "ab3ec3b2-a435-45e4-b93a-56a005e88bb7",
+ "client_id": "231c70e8-8424-48ac-9b5d-5623b9e4ccf3",
+ "step": "PreTokenIssuance",
+ "ui_locales":"en-US"
+}
+```
+
+The claims that are sent to the API depend on the information defined for the user.
+
+Only user properties and custom attributes listed in the **Azure AD B2C** > **User attributes** experience are available to be sent in the request.
+
+Custom attributes exist in the **extension_\<extensions-app-id>_CustomAttribute** format in the directory. Your API should expect to receive claims in this same serialized format. For more information on custom attributes, see [Define custom attributes in Azure AD B2C](user-flow-custom-attributes.md).
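+
+Because the extensions app ID differs per tenant, your API can match extension claims by suffix instead of hard-coding the full key. The following is a minimal sketch in Python; the helper name and the attribute name `CustomAttribute1` are only illustrative.
+
+```python
+import re
+
+def get_custom_attribute(claims: dict, name: str):
+    """Return the value of extension_<extensions-app-id>_<name>, if present."""
+    pattern = re.compile(rf"^extension_[0-9a-fA-F]+_{re.escape(name)}$")
+    for key, value in claims.items():
+        if pattern.match(key):
+            return value
+    return None  # the claim was not sent; handle this case explicitly
+
+# Example: get_custom_attribute(request_body, "CustomAttribute1")
+```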
+
+Additionally, these claims are typically sent in all requests for this step:
+- **UI Locales ('ui_locales')** - An end-user's locale(s) as configured on their device. This can be used by your API to return internationalized responses.
+- **Step ('step')** - The step or point on the user flow that the API connector was invoked for. The value for this step is `PreTokenIssuance`.
+- **Client ID ('client_id')** - The `appId` value of the application that an end-user is authenticating to in a user flow. This is *not* the resource application's `appId` in access tokens.
+- **objectId** - The identifier of the user. You can use this to query downstream services for information about the user.
+
+> [!IMPORTANT]
+> If a claim does not have a value at the time the API endpoint is called, the claim will not be sent to the API. Your API should be designed to explicitly check and handle the case in which a claim is not in the request.
+
+## Expected response types from the web API at this step
+
+When the web API receives an HTTP request from Azure AD during a user flow, it can return a "continuation response."
+
+### Continuation response
+
+A continuation response indicates that the user flow should continue to the next step: issuing the token.
+
+In a continuation response, the API can return additional claims. A claim returned by the API that you wish to return in the token must be a built-in claim or [defined as a custom attribute](user-flow-custom-attributes.md) and must be selected in the **Application claims** configuration of the user flow.
+
+The claim value in the token will be that returned by the API, not the value in the directory. Some claim values cannot be overwritten by the API response. Claims that can be returned by the API correspond to the set found under **User attributes** with the exception of `email`.
+
+> [!NOTE]
+> The API is only invoked during an initial authentication. When using refresh tokens to silently get new access or ID tokens, the token will include the values evaluated during the initial authentication.
+
+## Example response
+
+### Example of a continuation response
+
+```http
+HTTP/1.1 200 OK
+Content-type: application/json
+
+{
+ "version": "1.0.0",
+ "action": "Continue",
+ "postalCode": "12349", // return claim
+ "extension_<extensions-app-id>_CustomAttribute": "value" // return claim
+}
+```
+
+| Parameter | Type | Required | Description |
+| -- | -- | -- | -- |
+| version | String | Yes | The version of your API. |
+| action | String | Yes | Value must be `Continue`. |
+| \<builtInUserAttribute> | \<attribute-type> | No | Can be returned in the token if selected as an **Application claim**. |
+| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The claim does not need to contain `_<extensions-app-id>_`; it is *optional*. Can be returned in the token if selected as an **Application claim**. |
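+
+To tie the request and response contracts together, here is a minimal sketch of an endpoint for this step. It uses Flask rather than the published Azure Functions samples; the route name and the `lookup_postal_code` helper are hypothetical, and authentication is omitted for brevity.
+
+```python
+from flask import Flask, request, jsonify
+
+app = Flask(__name__)
+
+def lookup_postal_code(object_id: str) -> str:
+    # Hypothetical stand-in for a query against your own user store or CRM.
+    return "12349"
+
+@app.route("/api/token-enrichment", methods=["POST"])
+def enrich_token():
+    body = request.get_json(silent=True) or {}
+
+    # Claims without a value are not sent, so check for them explicitly.
+    object_id = body.get("objectId")
+    if not object_id:
+        # Continue without extra claims rather than failing the sign-in.
+        return jsonify({"version": "1.0.0", "action": "Continue"}), 200
+
+    # Returned claims must be built-in attributes or defined custom attributes,
+    # and must be selected as Application claims in the user flow.
+    return jsonify({
+        "version": "1.0.0",
+        "action": "Continue",
+        "postalCode": lookup_postal_code(object_id),
+    }), 200
+```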
+++
+In this scenario, we enrich the user's token data by integrating with a corporate line-of-business workflow. During sign-up or sign-in with a local or federated account, Azure AD B2C invokes a REST API to get the user's extended profile data from a remote data source. In this sample, Azure AD B2C sends the user's unique identifier, the objectId. The REST API then returns the user's account balance (a random number). Use this sample as a starting point to integrate with your own CRM system, marketing database, or any line-of-business workflow.
+
+You can also design the interaction as a validation technical profile. This is suitable when the REST API will be validating data on screen and returning claims. For more information, see [Walkthrough: Add an API connector to a sign-up user flow](add-api-connector.md).
+
+## Prerequisites
+
+- Complete the steps in [Get started with custom policies](tutorial-create-user-flows.md?pivots=b2c-custom-policy). You should have a working custom policy for sign-up and sign-in with local accounts.
+- Learn how to [Integrate REST API claims exchanges in your Azure AD B2C custom policy](api-connectors-overview.md).
+
+## Prepare a REST API endpoint
+
+For this walkthrough, you should have a REST API that validates whether a user's Azure AD B2C objectId is registered in your back-end system.
+If registered, the REST API returns the user account balance. Otherwise, the REST API registers the new account in the directory and returns the starting balance `50.00`.
+
+The following JSON code illustrates the data Azure AD B2C will send to your REST API endpoint.
+
+```json
+{
+ "objectId": "User objectId",
+ "lang": "Current UI language"
+}
+```
+
+Once your REST API validates the data, it must return an HTTP 200 (Ok), with the following JSON data:
+
+```json
+{
+ "balance": "760.50"
+}
+```
+
+The setup of the REST API endpoint is outside the scope of this article. We have created an [Azure Functions](../azure-functions/functions-reference.md) sample. You can access the complete Azure function code at [GitHub](https://github.com/azure-ad-b2c/rest-api/tree/master/source-code/azure-function).
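+
+For illustration only, the following is a minimal sketch of the contract described above, written with Flask instead of the linked Azure Functions sample. The in-memory dictionary is a hypothetical stand-in for your back-end system.
+
+```python
+from flask import Flask, request, jsonify
+
+app = Flask(__name__)
+accounts = {}  # objectId -> balance; replace with your own data store
+
+@app.route("/api/GetProfile", methods=["POST"])
+def get_profile():
+    body = request.get_json(silent=True) or {}
+    object_id = body.get("objectId", "")
+    # body.get("lang") carries the current UI language, useful for localized messages.
+
+    # Unknown users are registered with the starting balance; known users get
+    # their stored balance back.
+    balance = accounts.setdefault(object_id, "50.00")
+    return jsonify({"balance": balance}), 200
+```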
+
+## Define claims
+
+A claim provides temporary storage of data during an Azure AD B2C policy execution. You can declare claims within the [claims schema](claimsschema.md) section.
+
+1. Open the extensions file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`**</em>.
+1. Search for the [BuildingBlocks](buildingblocks.md) element. If the element doesn't exist, add it.
+1. Locate the [ClaimsSchema](claimsschema.md) element. If the element doesn't exist, add it.
+1. Add the following claims to the **ClaimsSchema** element.
+
+```xml
+<ClaimType Id="balance">
+ <DisplayName>Your Balance</DisplayName>
+ <DataType>string</DataType>
+</ClaimType>
+<ClaimType Id="userLanguage">
+ <DisplayName>User UI language (used by REST API to return localized error messages)</DisplayName>
+ <DataType>string</DataType>
+</ClaimType>
+```
+
+## Add the RESTful API technical profile
+
+A [Restful technical profile](restful-technical-profile.md) provides support for interfacing with your own RESTful service. Azure AD B2C sends data to the RESTful service in an `InputClaims` collection and receives data back in an `OutputClaims` collection. Find the **ClaimsProviders** element in your <em>**`TrustFrameworkExtensions.xml`**</em> file and add a new claims provider as follows:
+
+```xml
+<ClaimsProvider>
+ <DisplayName>REST APIs</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="REST-GetProfile">
+ <DisplayName>Get user extended profile Azure Function web hook</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <Metadata>
+ <!-- Set the ServiceUrl with your own REST API endpoint -->
+ <Item Key="ServiceUrl">https://your-account.azurewebsites.net/api/GetProfile?code=your-code</Item>
+ <Item Key="SendClaimsIn">Body</Item>
+ <!-- Set AuthenticationType to Basic or ClientCertificate in production environments -->
+ <Item Key="AuthenticationType">None</Item>
+ <!-- REMOVE the following line in production environments -->
+ <Item Key="AllowInsecureAuthInProduction">true</Item>
+ </Metadata>
+ <InputClaims>
+ <!-- Claims sent to your REST API -->
+ <InputClaim ClaimTypeReferenceId="objectId" />
+ <InputClaim ClaimTypeReferenceId="userLanguage" PartnerClaimType="lang" DefaultValue="{Culture:LCID}" AlwaysUseDefaultValue="true" />
+ </InputClaims>
+ <OutputClaims>
+ <!-- Claims parsed from your REST API -->
+ <OutputClaim ClaimTypeReferenceId="balance" />
+ </OutputClaims>
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-Noop" />
+ </TechnicalProfile>
+ </TechnicalProfiles>
+</ClaimsProvider>
+```
+
+In this example, the `userLanguage` claim will be sent to the REST service as `lang` within the JSON payload. The value of the `userLanguage` claim contains the current user language ID. For more information, see [claim resolver](claim-resolver-overview.md).
+
+### Configure the RESTful API technical profile
+
+After you deploy your REST API, set the metadata of the `REST-GetProfile` technical profile to reflect your own REST API, including:
+
+- **ServiceUrl**. Set the URL of the REST API endpoint.
+- **SendClaimsIn**. Specify how the input claims are sent to the RESTful claims provider.
+- **AuthenticationType**. Set the type of authentication being performed by the RESTful claims provider.
+- **AllowInsecureAuthInProduction**. In a production environment, make sure to set this metadata to `false`.
+
+See the [RESTful technical profile metadata](restful-technical-profile.md#metadata) for more configurations.
+
+The comments above `AuthenticationType` and `AllowInsecureAuthInProduction` specify changes you should make when you move to a production environment. To learn how to secure your RESTful APIs for production, see [Secure RESTful API](secure-rest-api.md).
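+
+When you switch `AuthenticationType` to `Basic`, your API has to verify the credentials it receives. The following is a hedged sketch of that check in Python (Flask); the environment variable names are assumptions, and real credentials belong in your platform's secret store.
+
+```python
+import hmac
+import os
+from flask import request
+
+def is_authorized() -> bool:
+    creds = request.authorization  # parsed from the "Authorization: Basic ..." header
+    if creds is None or creds.username is None:
+        return False
+    return (hmac.compare_digest(creds.username, os.environ.get("BASIC_AUTH_USERNAME", ""))
+            and hmac.compare_digest(creds.password or "", os.environ.get("BASIC_AUTH_PASSWORD", "")))
+
+# In each route handler: if not is_authorized(): return ("Unauthorized", 401)
+```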
+
+## Add an orchestration step
+
+[User journeys](userjourneys.md) specify explicit paths through which a policy allows a relying party application to obtain the desired claims for a user. A user journey is represented as an orchestration sequence that must be followed through for a successful transaction. You can add or subtract orchestration steps. In this case, you will add a new orchestration step that is used to augment the information provided to the application after the user sign-up or sign-in via the REST API call.
+
+1. Open the base file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkBase.xml`**</em>.
+1. Search for the `<UserJourneys>` element. Copy the entire element, and then delete it.
+1. Open the extensions file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`**</em>.
+1. Paste the `<UserJourneys>` into the extensions file, after the close of the `<ClaimsProviders>` element.
+1. Locate the `<UserJourney Id="SignUpOrSignIn">`, and add the following orchestration step before the last one.
+
+ ```xml
+ <OrchestrationStep Order="7" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="RESTGetProfile" TechnicalProfileReferenceId="REST-GetProfile" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ ```
+
+1. Refactor the last orchestration step by changing the `Order` to `8`. Your final two orchestration steps should look like the following:
+
+ ```xml
+ <OrchestrationStep Order="7" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="RESTGetProfile" TechnicalProfileReferenceId="REST-GetProfile" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+
+ <OrchestrationStep Order="8" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" />
+ ```
+
+1. Repeat the last two steps for the **ProfileEdit** and **PasswordReset** user journeys.
++
+## Include a claim in the token
+
+To return the `balance` claim back to the relying party application, add an output claim to the <em>`SocialAndLocalAccounts/`**`SignUpOrSignIn.xml`**</em> file. Adding an output claim will issue the claim into the token after a successful user journey, and it will be sent to the application. Modify the technical profile element within the relying party section to add `balance` as an output claim.
+
+```xml
+<RelyingParty>
+ <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
+ <TechnicalProfile Id="PolicyProfile">
+ <DisplayName>PolicyProfile</DisplayName>
+ <Protocol Name="OpenIdConnect" />
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="displayName" />
+ <OutputClaim ClaimTypeReferenceId="givenName" />
+ <OutputClaim ClaimTypeReferenceId="surname" />
+ <OutputClaim ClaimTypeReferenceId="email" />
+ <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub"/>
+ <OutputClaim ClaimTypeReferenceId="identityProvider" />
+ <OutputClaim ClaimTypeReferenceId="tenantId" AlwaysUseDefaultValue="true" DefaultValue="{Policy:TenantObjectId}" />
+ <OutputClaim ClaimTypeReferenceId="balance" DefaultValue="" />
+ </OutputClaims>
+ <SubjectNamingInfo ClaimType="sub" />
+ </TechnicalProfile>
+</RelyingParty>
+```
+
+Repeat this step for the **ProfileEdit.xml**, and **PasswordReset.xml** user journeys.
+
+Save the files you changed: *TrustFrameworkBase.xml*, *TrustFrameworkExtensions.xml*, *SignUpOrSignin.xml*, *ProfileEdit.xml*, and *PasswordReset.xml*.
+
+## Test the custom policy
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directory + subscription** filter in the top menu, and then choose the directory that contains your tenant.
+1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **App registrations**.
+1. Select **Identity Experience Framework**.
+1. Select **Upload Custom Policy**, and then upload the policy files that you changed: *TrustFrameworkBase.xml*, *TrustFrameworkExtensions.xml*, *SignUpOrSignin.xml*, *ProfileEdit.xml*, and *PasswordReset.xml*.
+1. Select the sign-up or sign-in policy that you uploaded, and click the **Run now** button.
+1. You should be able to sign up using an email address or a Facebook account.
+1. The token sent back to your application includes the `balance` claim.
+
+```json
+{
+ "typ": "JWT",
+ "alg": "RS256",
+ "kid": "X5eXk4xyojNFum1kl2Ytv8dlNP4-c57dO6QGTVBwaNk"
+}.{
+ "exp": 1584961516,
+ "nbf": 1584957916,
+ "ver": "1.0",
+ "iss": "https://contoso.b2clogin.com/f06c2fe8-709f-4030-85dc-38a4bfd9e82d/v2.0/",
+ "aud": "e1d2612f-c2bc-4599-8e7b-d874eaca1ee1",
+ "acr": "b2c_1a_signup_signin",
+ "nonce": "defaultNonce",
+ "iat": 1584957916,
+ "auth_time": 1584957916,
+ "name": "Emily Smith",
+ "email": "emily@outlook.com",
+ "given_name": "Emily",
+ "family_name": "Smith",
+ "balance": "202.75"
+ ...
+}
+```
++
+## Best practices and how to troubleshoot
+
+### Using serverless cloud functions
+
+Serverless functions, like [HTTP triggers in Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md), provide a way to create API endpoints to use with the API connector. The serverless cloud function can also call and invoke other web APIs, data stores, and other cloud services for complex scenarios.
+
+### Best practices
+
+Ensure that:
+* Your API is following the API request and response contracts as outlined above.
+* The **Endpoint URL** of the API connector points to the correct API endpoint.
+* Your API explicitly checks for null values of received claims that it depends on.
+* Your API implements an authentication method outlined in [secure your API connector](secure-rest-api.md).
+* Your API responds as quickly as possible to ensure a fluid user experience.
+ * Azure AD B2C will wait for a maximum of *20 seconds* to receive a response. If none is received, it will make *one more attempt (retry)* at calling your API. One way to stay within this window is to bound calls to downstream dependencies, as shown in the sketch after this list.
+ * If you're using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm" in production. For Azure Functions, it's recommended you use at minimum the [Premium plan](../azure-functions/functions-scale.md) in production.
+* Ensure high availability of your API.
+* Monitor and optimize performance of downstream APIs, databases, or other dependencies of your API.
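+
+The following is a minimal sketch of the bounded downstream call mentioned in the list above; the downstream URL and the fallback behavior are hypothetical, and the `requests` timeout should be tuned to your own latency budget.
+
+```python
+import requests
+
+def fetch_profile(object_id: str) -> dict:
+    """Query a downstream dependency, failing fast so the connector API can answer in time."""
+    try:
+        resp = requests.get(
+            f"https://internal.example.com/profiles/{object_id}",  # hypothetical service
+            timeout=5,  # seconds; well under Azure AD B2C's 20-second wait
+        )
+        resp.raise_for_status()
+        return resp.json()
+    except requests.RequestException:
+        # Degrade gracefully: continue the user flow without enrichment data.
+        return {}
+```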
+
+
+### Use logging
+
+In general, it's helpful to use the logging tools enabled by your web API service, like [Application insights](../azure-functions/functions-monitoring.md), to monitor your API for unexpected error codes, exceptions, and poor performance.
+* Monitor for HTTP status codes that aren't HTTP 200 or 400.
+* A 401 or 403 HTTP status code typically indicates there's an issue with your authentication. Double-check your API's authentication layer and the corresponding configuration in the API connector.
+* Use more aggressive levels of logging (for example "trace" or "debug") in development if needed.
+* Monitor your API for long response times.
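+
+As one way to apply the guidance above, the sketch below wraps an endpoint handler with Python's standard `logging` module to surface exceptions and slow responses. The threshold is arbitrary, and forwarding the logs to Application Insights (or another backend) is assumed to be handled by your hosting platform.
+
+```python
+import logging
+import time
+from functools import wraps
+
+logger = logging.getLogger("api-connector")
+
+def monitored(handler):
+    """Log unhandled exceptions and slow responses from an API connector endpoint."""
+    @wraps(handler)
+    def wrapper(*args, **kwargs):
+        start = time.perf_counter()
+        try:
+            return handler(*args, **kwargs)
+        except Exception:
+            logger.exception("Unhandled error in API connector endpoint")
+            raise
+        finally:
+            elapsed = time.perf_counter() - start
+            if elapsed > 2.0:  # arbitrary threshold; tune for your latency budget
+                logger.warning("Slow API connector response: %.2fs", elapsed)
+    return wrapper
+
+# Usage (hypothetical): place @monitored above a route handler such as get_profile.
+```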
+
+Additionally, Azure AD B2C logs metadata about the API transactions that happen during user authentications via a user flow. To find these logs:
+1. Go to **Azure AD B2C**.
+2. Under **Activities**, select **Audit logs**.
+3. Filter the list view: For **Date**, select the time interval you want, and for **Activity**, select **An API was called as part of a user flow**.
+4. Inspect individual logs. Each row represents an attempt to call an API connector during a user flow. If an API call fails and a retry occurs, it's still represented as a single row. The `numberOfAttempts` value indicates the number of times your API was called. This value can be `1` or `2`. Other information about the API call is detailed in the logs.
+
+ ![Screenshot of an example audit log with API connector transaction](media/add-api-connector-token-enrichment/example-anonymized-audit-log.png)
++
+## Next steps
++
+- Get started with our [samples](api-connector-samples.md#api-connector-rest-api-samples).
+- [Secure your API Connector](secure-rest-api.md)
+++
+To learn how to secure your APIs, see the following articles:
+
+- [Walkthrough: Integrate REST API claims exchanges in your Azure AD B2C user journey as an orchestration step](add-api-connector-token-enrichment.md)
+- [Secure your RESTful API](secure-rest-api.md)
+- [Reference: RESTful technical profile](restful-technical-profile.md)
+++
active-directory-b2c Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-api-connector.md
Title: Add API connectors to user flows
-description: Configure an API connector to be used in a user flow.
+ Title: Add API connectors to sign up user flows
+description: Configure an API connector to be used in a sign-up user flow.
You can create an API endpoint using one of our [samples](api-connector-samples.
In this scenario, we'll add the ability for users to enter a loyalty number into the Azure AD B2C sign-up page. The REST API validates whether the combination of email and loyalty number is mapped to a promotional code. If the REST API finds a promotional code for this user, it will be returned to Azure AD B2C. Finally, the promotional code will be inserted into the token claims for the application to consume.
-You can also design the interaction as an orchestration step. This is suitable when the REST API will not be validating data on screen, and always return claims. For more information, see [Walkthrough: Integrate REST API claims exchanges in your Azure AD B2C user journey as an orchestration step](custom-policy-rest-api-claims-exchange.md).
+You can also design the interaction as an orchestration step. This is suitable when the REST API will not be validating data on screen, and always return claims. For more information, see [Walkthrough: Integrate REST API claims exchanges in your Azure AD B2C user journey as an orchestration step](add-api-connector-token-enrichment.md).
::: zone-end
To use an [API connector](api-connectors-overview.md), you first create the API
2. Under **Azure services**, select **Azure AD B2C**. 4. Select **API connectors**, and then select **New API connector**.
- :::image type="content" source="media/add-api-connector/api-connector-new.png" alt-text="Providing the basic configuration like target URL and display name for an API connector during the creation experience.":::
+ ![Screenshot of basic configuration for an API connector](media/add-api-connector/api-connector-new.png)
5. Provide a display name for the call. For example, **Validate user information**. 6. Provide the **Endpoint URL** for the API call. 7. Choose the **Authentication type** and configure the authentication information for calling your API. Learn how to [Secure your API Connector](secure-rest-api.md).
- :::image type="content" source="media/add-api-connector/api-connector-config.png" alt-text="Providing authentication configuration for an API connector during the creation experience.":::
+ ![Screenshot of authentication configuration for an API connector](media/add-api-connector/api-connector-config.png)
8. Select **Save**.
Only user properties and custom attributes listed in the **Azure AD B2C** > **Us
Custom attributes exist in the **extension_\<extensions-app-id>_CustomAttribute** format in the directory. Your API should expect to receive claims in this same serialized format. For more information on custom attributes, see [Define custom attributes in Azure AD B2C](user-flow-custom-attributes.md).
-Additionally, the claims are typically sent in all request:
+Additionally, these claims are typically sent in all requests:
- **UI Locales ('ui_locales')** - An end-user's locale(s) as configured on their device. This can be used by your API to return internationalized responses. - **Step ('step')** - The step or point on the user flow that the API connector was invoked for. Values include:
- - `postFederationSignup` - corresponds to "After federating with an identity provider during sign-up"
- - `postAttributeCollection` - corresponds to "Before creating the user"
+ - `PostFederationSignup` - corresponds to "After federating with an identity provider during sign-up"
+ - `PostAttributeCollection` - corresponds to "Before creating the user"
+ - `PreTokenIssuance` - corresponds to "Before sending the token (preview)". [Learn more about this step](add-api-connector-token-enrichment.md)
- **Client ID ('client_id')** - The `appId` value of the application that an end-user is authenticating to in a user flow. This is *not* the resource application's `appId` in access tokens. - **Email Address ('email')** or [**identities ('identities')**](/graph/api/resources/objectidentity) - these claims can be used by your API to identify the end-user that is authenticating to the application.
Follow these steps to add an API connector to a sign-up user flow.
- **After federating with an identity provider during sign-up** - **Before creating the user**
+ - **Before sending the token (preview)**
- :::image type="content" source="media/add-api-connector/api-connectors-user-flow-select.png" alt-text="Selecting which API connector to use for a step in the user flow like 'Before creating the user'.":::
+ ![Selecting an API connector for a step in the user flow](media/add-api-connector/api-connectors-user-flow-select.png)
6. Select **Save**.
+These steps only exist for **Sign up and sign in (Recommended)** and **Sign up (Recommended)** user flows, but they only apply to the sign-up part of the experience.
+ ## After federating with an identity provider during sign-up An API connector at this step in the sign-up process is invoked immediately after the user authenticates with an identity provider (like Google, Facebook, & Azure AD). This step precedes the ***attribute collection page***, which is the form presented to the user to collect user attributes. This step is not invoked if a user is registering with a local account.
Content-type: application/json
"displayName": "John Smith", "givenName":"John", "lastName":"Smith",
- "step": "postFederationSignup",
+ "step": "PostFederationSignup",
"client_id":"<guid>", "ui_locales":"en-US" }
Content-type: application/json
"country":"United States", "extension_<extensions-app-id>_CustomAttribute1": "custom attribute value", "extension_<extensions-app-id>_CustomAttribute2": "custom attribute value",
- "step": "postAttributeCollection",
+ "step": "PostAttributeCollection",
"client_id":"93fd07aa-333c-409d-955d-96008fd08dd9", "ui_locales":"en-US" }
See an example of a [blocking response](#example-of-a-blocking-response).
See an example of a [validation-error response](#example-of-a-validation-error-response).
+## Before sending the token (preview)
+
+> [!IMPORTANT]
+> API connectors used in this step are in preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+An API connector at this step is invoked when a token is about to be issued during sign-ins and sign-ups. An API connector for this step can be used to enrich the token with claim values from external sources.
+
+### Example request sent to the API at this step
+
+```http
+POST <API-endpoint>
+Content-type: application/json
+
+{
+ "clientId": "231c70e8-8424-48ac-9b5d-5623b9e4ccf3",
+ "step": "PreTokenApplicationClaims",
+ "ui_locales":"en-US"
+ "email": "johnsmith@fabrikam.onmicrosoft.com",
+ "identities": [
+ {
+ "signInType":"federated",
+ "issuer":"facebook.com",
+ "issuerAssignedId":"0123456789"
+ }
+ ],
+ "displayName": "John Smith",
+ "extension_<extensions-app-id>_CustomAttribute1": "custom attribute value",
+ "extension_<extensions-app-id>_CustomAttribute2": "custom attribute value",
+}
+```
+
+The claims that are sent to the API depend on the information defined for the user.
+
+### Expected response types from the web API at this step
+
+When the web API receives an HTTP request from Azure AD during a user flow, it can return these responses:
+
+- Continuation response
+
+#### Continuation response
+
+A continuation response indicates that the user flow should continue to the next step: issue the token.
+
+In a continuation response, the API can return additional claims. A claim returned by the API that you want to return in the token must be a built-in claim or [defined as a custom attribute](user-flow-custom-attributes.md) and must be selected in the **Application claims** configuration of the user flow.
+
+The claim value in the token will be the value returned by the API, not the value in the directory. Some claim values cannot be overwritten by the API response. Claims that can be returned by the API correspond to the set found under **User attributes** with the exception of `email`.
+
+See an example of a [continuation response](#example-of-a-continuation-response).
+
+> [!NOTE]
+> The API is only invoked during an initial authentication. When using refresh tokens to silently get new access or ID tokens, the token will include the values evaluated during the initial authentication.
+ ## Example responses ### Example of a continuation response
Content-type: application/json
| -- | -- | -- | -- | | version | String | Yes | The version of your API. | | action | String | Yes | Value must be `Continue`. |
-| \<builtInUserAttribute> | \<attribute-type> | No | Returned values can overwrite values collected from a user. |
-| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The claim does not need to contain `_<extensions-app-id>_`, it is *optional*. Returned values can overwrite values collected from a user. |
+| \<builtInUserAttribute> | \<attribute-type> | No | Returned values can overwrite values collected from a user. |
+| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The claim does not need to contain `_<extensions-app-id>_`, it is *optional*. Returned values can overwrite values collected from a user. |
### Example of a blocking response
Content-type: application/json
**End-user experience with a blocking response**
+![Example of a blocking response](media/add-api-connector/blocking-page-response.png)
### Example of a validation-error response
Content-type: application/json
**End-user experience with a validation-error response**
- :::image type="content" source="media/add-api-connector/validation-error-postal-code.png" alt-text="An example image of what the end-user experience looks like after an API returns a validation-error response.":::
+![Example of a validation-error response](media/add-api-connector/validation-error-postal-code.png)
::: zone-end
Ensure that:
* Your API explicitly checks for null values of received claims that it depends on. * Your API implements an authentication method outlined in [secure your API Connector](secure-rest-api.md). * Your API responds as quickly as possible to ensure a fluid user experience.
+ * Azure AD B2C will wait for a maximum of *20 seconds* to receive a response. If none is received, it will make *one more attempt (retry)* at calling your API.
* If using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm" in production. For Azure Functions, it's recommended to use at minimum the [Premium plan](../azure-functions/functions-scale.md) in production. * Ensure high availability of your API. * Monitor and optimize performance of downstream APIs, databases, or other dependencies of your API.
In general, it's helpful to use the logging tools enabled by your web API servic
* Use more aggressive levels of logging (for example "trace" or "debug") in development if needed. * Monitor your API for long response times.
+Additionally, Azure AD B2C logs metadata about the API transactions that happen during user authentications via a user flow. To find these:
+1. Go to **Azure AD B2C**.
+2. Under **Activities**, select **Audit logs**.
+3. Filter the list view: For **Date**, select the time interval you want, and for **Activity**, select **An API was called as part of a user flow**.
+4. Inspect individual logs. Each row represents an attempt to call an API connector during a user flow. If an API call fails and a retry occurs, it's still represented as a single row. The `numberOfAttempts` value indicates the number of times your API was called. This value can be `1` or `2`. Other information about the API call is detailed in the logs.
+
+![Example of an API connector transaction during user authentication](media/add-api-connector/example-anonymized-audit-log.png)
+ ::: zone-end ::: zone pivot="b2c-custom-policy"
In general, it's helpful to use the logging tools enabled by your web API servic
::: zone pivot="b2c-custom-policy" -- [Walkthrough: Integrate REST API claims exchanges in your Azure AD B2C user journey as an orchestration step](custom-policy-rest-api-claims-exchange.md)
+- [Walkthrough: Integrate REST API claims exchanges in your Azure AD B2C user journey as an orchestration step](add-api-connector-token-enrichment.md)
- [Secure your API Connector](secure-rest-api.md) - [Reference: RESTful technical profile](restful-technical-profile.md)
active-directory-b2c Api Connectors Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/api-connectors-overview.md
As a developer or IT administrator, you can use API connectors to integrate your
- **Validate user input data**. Validate against malformed or invalid user data. For example, you can validate user-provided data against existing data in an external data store or list of permitted values. If invalid, you can ask a user to provide valid data or block the user from continuing the sign-up flow. - **Verify user identity**. Use an identity verification service to add an extra level of security to account creation decisions. - **Integrate with a custom approval workflow**. Connect to a custom approval system for managing and limiting account creation.
+- **Augment tokens with attributes from external sources**. Enrich tokens with attributes about the user from sources external to Azure AD B2C such as cloud systems, custom user stores, custom permission systems, legacy identity services, and more.
- **Overwrite user attributes**. Reformat or assign a value to an attribute collected from the user. For example, if a user enters the first name in all lowercase or all uppercase letters, you can format the name with only the first letter capitalized. - **Run custom business logic**. You can trigger downstream events in your cloud systems to send push notifications, update corporate databases, manage permissions, audit databases, and perform other custom actions.
An API connector provides Azure AD B2C with the information needed to call API e
## Where you can enable an API connector in a user flow
-There are two places in a user flow where you can enable an API connector:
+There are three places in a user flow where you can enable an API connector:
-- After federating with an identity provider during sign-up-- Before creating the user-
-> [!IMPORTANT]
-> In both of these cases, the API connectors are invoked during user **sign-up**, not sign-in.
+- **After federating with an identity provider during sign-up** - applies to sign-up experiences only
+- **Before creating the user** - applies to sign-up experiences only
+- **Before sending the token (preview)** - applies to sign-ups and sign-ins
### After federating with an identity provider during sign-up
An API connector at this step in the sign-up process is invoked after the attrib
- Verify user identity. - Query external systems for existing data about the user to return it in the application token or store it in Azure AD.
+### Before sending the token (preview)
++
+An API connector at this step in the sign-up or sign-in process is invoked before a token is issued. The following are examples of scenarios you might enable at this step:
+- Enriching the token with attributes about the user from sources different than the directory including legacy identity systems, HR systems, external user stores, and more.
+- Enriching the token with group or role attributes that you store and manage in your own permission system.
+- Applying claims transformations or manipulations to values of claims in the directory.
+ ::: zone-end ::: zone pivot="b2c-custom-policy"
You should design your REST API service and its underlying components (such as
::: zone pivot="b2c-user-flow" -- Learn how to [add an API connector to a user flow](add-api-connector.md)-- Learn how to [Secure your API Connector](secure-rest-api.md)
+- Learn how to [add an API connector to modify sign-up experiences](add-api-connector.md)
+- Learn how to [add an API connector to enrich tokens with external claims](add-api-connector-token-enrichment.md)
+- Learn how to [secure your API Connector](secure-rest-api.md)
- Get started with our [samples](api-connector-samples.md#api-connector-rest-api-samples) ::: zone-end
You should design your REST API service and its underlying components (such as
See the following articles for examples of using a RESTful technical profile: - [Walkthrough: Add an API connector to a sign-up user flow](add-api-connector.md)-- [Walkthrough: Add REST API claims exchanges to custom policies in Azure Active Directory B2C](custom-policy-rest-api-claims-exchange.md)
+- [Walkthrough: Add REST API claims exchanges to custom policies in Azure Active Directory B2C](add-api-connector-token-enrichment.md)
- [Secure your REST API services](secure-rest-api.md) - [Reference: RESTful technical profile](restful-technical-profile.md)
active-directory-b2c Custom Policy Rest Api Claims Exchange https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-rest-api-claims-exchange.md
- Title: REST API claims exchanges - Azure Active Directory B2C
-description: Add REST API claims exchanges to custom policies in Active Directory B2C.
------- Previously updated : 08/04/2021--
-zone_pivot_groups: b2c-policy-type
--
-# Walkthrough: Add REST API claims exchanges to custom policies in Azure Active Directory B2C
------
-Azure Active Directory B2C (Azure AD B2C) enables identity developers to integrate an interaction with a RESTful API in a user journey. At the end of this walkthrough, you'll be able to create an Azure AD B2C user journey that interacts with [RESTful services](api-connectors-overview.md).
-
-In this scenario, we enrich the user's token data by integrating with a corporate line-of-business workflow. During sign-up or sign-in with local or federated account, Azure AD B2C invokes a REST API to get the user's extended profile data from a remote data source. In this sample, Azure AD B2C sends the user's unique identifier, the objectId. The REST API then returns the user's account balance (a random number). Use this sample as a starting point to integrate with your own CRM system, marketing database, or any line-of-business workflow.
-
-You can also design the interaction as a validation technical profile. This is suitable when the REST API will be validating data on screen and returning claims. For more information, see [Walkthrough: Add an API connector to a sign-up user flow](add-api-connector.md).
-
-## Prerequisites
--- Complete the steps in [Get started with custom policies](tutorial-create-user-flows.md?pivots=b2c-custom-policy). You should have a working custom policy for sign-up and sign-in with local accounts.-- Learn how to [Integrate REST API claims exchanges in your Azure AD B2C custom policy](api-connectors-overview.md).-
-## Prepare a REST API endpoint
-
-For this walkthrough, you should have a REST API that validates whether a user's Azure AD B2C objectId is registered in your back-end system.
-If registered, the REST API returns the user account balance. Otherwise, the REST API registers the new account in the directory and returns the starting balance `50.00`.
-
-The following JSON code illustrates the data Azure AD B2C will send to your REST API endpoint.
-
-```json
-{
- "objectId": "User objectId",
- "lang": "Current UI language"
-}
-```
-
-Once your REST API validates the data, it must return an HTTP 200 (Ok), with the following JSON data:
-
-```json
-{
- "balance": "760.50"
-}
-```
-
-The setup of the REST API endpoint is outside the scope of this article. We have created an [Azure Functions](../azure-functions/functions-reference.md) sample. You can access the complete Azure function code at [GitHub](https://github.com/azure-ad-b2c/rest-api/tree/master/source-code/azure-function).
-
-## Define claims
-
-A claim provides temporary storage of data during an Azure AD B2C policy execution. You can declare claims within the [claims schema](claimsschema.md) section.
-
-1. Open the extensions file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`**</em>.
-1. Search for the [BuildingBlocks](buildingblocks.md) element. If the element doesn't exist, add it.
-1. Locate the [ClaimsSchema](claimsschema.md) element. If the element doesn't exist, add it.
-1. Add the following claims to the **ClaimsSchema** element.
-
-```xml
-<ClaimType Id="balance">
- <DisplayName>Your Balance</DisplayName>
- <DataType>string</DataType>
-</ClaimType>
-<ClaimType Id="userLanguage">
- <DisplayName>User UI language (used by REST API to return localized error messages)</DisplayName>
- <DataType>string</DataType>
-</ClaimType>
-```
-
-## Add the RESTful API technical profile
-
-A [Restful technical profile](restful-technical-profile.md) provides support for interfacing with your own RESTful service. Azure AD B2C sends data to the RESTful service in an `InputClaims` collection and receives data back in an `OutputClaims` collection. Find the **ClaimsProviders** element in your <em>**`TrustFrameworkExtensions.xml`**</em> file and add a new claims provider as follows:
-
-```xml
-<ClaimsProvider>
- <DisplayName>REST APIs</DisplayName>
- <TechnicalProfiles>
- <TechnicalProfile Id="REST-GetProfile">
- <DisplayName>Get user extended profile Azure Function web hook</DisplayName>
- <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
- <Metadata>
- <!-- Set the ServiceUrl with your own REST API endpoint -->
- <Item Key="ServiceUrl">https://your-account.azurewebsites.net/api/GetProfile?code=your-code</Item>
- <Item Key="SendClaimsIn">Body</Item>
- <!-- Set AuthenticationType to Basic or ClientCertificate in production environments -->
- <Item Key="AuthenticationType">None</Item>
- <!-- REMOVE the following line in production environments -->
- <Item Key="AllowInsecureAuthInProduction">true</Item>
- </Metadata>
- <InputClaims>
- <!-- Claims sent to your REST API -->
- <InputClaim ClaimTypeReferenceId="objectId" />
- <InputClaim ClaimTypeReferenceId="userLanguage" PartnerClaimType="lang" DefaultValue="{Culture:LCID}" AlwaysUseDefaultValue="true" />
- </InputClaims>
- <OutputClaims>
- <!-- Claims parsed from your REST API -->
- <OutputClaim ClaimTypeReferenceId="balance" />
- </OutputClaims>
- <UseTechnicalProfileForSessionManagement ReferenceId="SM-Noop" />
- </TechnicalProfile>
- </TechnicalProfiles>
-</ClaimsProvider>
-```
-
-In this example, the `userLanguage` will be sent to the REST service as `lang` within the JSON payload. The value of the `userLanguage` claim contains the current user language ID. For more information, see [claim resolver](claim-resolver-overview.md).
-
-### Configure the RESTful API technical profile
-
-After you deploy your REST API, set the metadata of the `REST-GetProfile` technical profile to reflect your own REST API, including:
--- **ServiceUrl**. Set the URL of the REST API endpoint.-- **SendClaimsIn**. Specify how the input claims are sent to the RESTful claims provider.-- **AuthenticationType**. Set the type of authentication being performed by the RESTful claims provider. -- **AllowInsecureAuthInProduction**. In a production environment, make sure to set this metadata to `true`
-
-See the [RESTful technical profile metadata](restful-technical-profile.md#metadata) for more configurations.
-
-The comments above `AuthenticationType` and `AllowInsecureAuthInProduction` specify changes you should make when you move to a production environment. To learn how to secure your RESTful APIs for production, see [Secure RESTful API](secure-rest-api.md).
-
-## Add an orchestration step
-
-[User journeys](userjourneys.md) specify explicit paths through which a policy allows a relying party application to obtain the desired claims for a user. A user journey is represented as an orchestration sequence that must be followed through for a successful transaction. You can add or subtract orchestration steps. In this case, you will add a new orchestration step that is used to augment the information provided to the application after the user sign-up or sign-in via the REST API call.
-
-1. Open the base file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkBase.xml`**</em>.
-1. Search for the `<UserJourneys>` element. Copy the entire element, and then delete it.
-1. Open the extensions file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`**</em>.
-1. Paste the `<UserJourneys>` into the extensions file, after the close of the `<ClaimsProviders>` element.
-1. Locate the `<UserJourney Id="SignUpOrSignIn">`, and add the following orchestration step before the last one.
-
- ```xml
- <OrchestrationStep Order="7" Type="ClaimsExchange">
- <ClaimsExchanges>
- <ClaimsExchange Id="RESTGetProfile" TechnicalProfileReferenceId="REST-GetProfile" />
- </ClaimsExchanges>
- </OrchestrationStep>
- ```
-
-1. Refactor the last orchestration step by changing the `Order` to `8`. Your final two orchestration steps should look like the following:
-
- ```xml
- <OrchestrationStep Order="7" Type="ClaimsExchange">
- <ClaimsExchanges>
- <ClaimsExchange Id="RESTGetProfile" TechnicalProfileReferenceId="REST-GetProfile" />
- </ClaimsExchanges>
- </OrchestrationStep>
-
- <OrchestrationStep Order="8" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" />
- ```
-
-1. Repeat the last two steps for the **ProfileEdit** and **PasswordReset** user journeys.
--
-## Include a claim in the token
-
-To return the `balance` claim back to the relying party application, add an output claim to the <em>`SocialAndLocalAccounts/`**`SignUpOrSignIn.xml`**</em> file. Adding an output claim will issue the claim into the token after a successful user journey, and will be sent to the application. Modify the technical profile element within the relying party section to add `balance` as an output claim.
-
-```xml
-<RelyingParty>
- <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
- <TechnicalProfile Id="PolicyProfile">
- <DisplayName>PolicyProfile</DisplayName>
- <Protocol Name="OpenIdConnect" />
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="displayName" />
- <OutputClaim ClaimTypeReferenceId="givenName" />
- <OutputClaim ClaimTypeReferenceId="surname" />
- <OutputClaim ClaimTypeReferenceId="email" />
- <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub"/>
- <OutputClaim ClaimTypeReferenceId="identityProvider" />
- <OutputClaim ClaimTypeReferenceId="tenantId" AlwaysUseDefaultValue="true" DefaultValue="{Policy:TenantObjectId}" />
- <OutputClaim ClaimTypeReferenceId="balance" DefaultValue="" />
- </OutputClaims>
- <SubjectNamingInfo ClaimType="sub" />
- </TechnicalProfile>
-</RelyingParty>
-```
-
-Repeat this step for the **ProfileEdit.xml**, and **PasswordReset.xml** user journeys.
-
-Save the files you changed: *TrustFrameworkBase.xml*, and *TrustFrameworkExtensions.xml*, *SignUpOrSignin.xml*, *ProfileEdit.xml*, and *PasswordReset.xml*.
-
-## Test the custom policy
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your Azure AD tenant.
-1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **App registrations**.
-1. Select **Identity Experience Framework**.
-1. Select **Upload Custom Policy**, and then upload the policy files that you changed: *TrustFrameworkBase.xml*, and *TrustFrameworkExtensions.xml*, *SignUpOrSignin.xml*, *ProfileEdit.xml*, and *PasswordReset.xml*.
-1. Select the sign-up or sign-in policy that you uploaded, and click the **Run now** button.
-1. You should be able to sign up using an email address or a Facebook account.
-1. The token sent back to your application includes the `balance` claim.
-
-```json
-{
- "typ": "JWT",
- "alg": "RS256",
- "kid": "X5eXk4xyojNFum1kl2Ytv8dlNP4-c57dO6QGTVBwaNk"
-}.{
- "exp": 1584961516,
- "nbf": 1584957916,
- "ver": "1.0",
- "iss": "https://contoso.b2clogin.com/f06c2fe8-709f-4030-85dc-38a4bfd9e82d/v2.0/",
- "aud": "e1d2612f-c2bc-4599-8e7b-d874eaca1ee1",
- "acr": "b2c_1a_signup_signin",
- "nonce": "defaultNonce",
- "iat": 1584957916,
- "auth_time": 1584957916,
- "name": "Emily Smith",
- "email": "emily@outlook.com",
- "given_name": "Emily",
- "family_name": "Smith",
- "balance": "202.75"
- ...
-}
-```
-
-## Next steps
-
-To learn how to secure your APIs, see the following articles:
--- [Walkthrough: Integrate REST API claims exchanges in your Azure AD B2C user journey as an orchestration step](custom-policy-rest-api-claims-exchange.md)-- [Secure your RESTful API](secure-rest-api.md)-- [Reference: RESTful technical profile](restful-technical-profile.md)-
active-directory-b2c Partner Arkose Labs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-arkose-labs.md
To redeploy the local instance during testing, repeat steps 1 to 4.
This sample protects the web API endpoint using [HTTP Basic authentication](https://tools.ietf.org/html/rfc7617).
-Username and password are stored as environment variables and not as part of the repository. See [local.settings.json](../azure-functions/functions-run-local.md?tabs=macos%2ccsharp%2cbash#local-settings-file) file for more information.
+Username and password are stored as environment variables and not as part of the repository. See [local.settings.json](../azure-functions/functions-develop-local.md#local-settings-file) file for more information.
1. Create a local.settings.json file in your root folder
active-directory-b2c Partner Bloksec https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-bloksec.md
To get started, you'll need:
- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). -- An [Azure AD B2C tenant](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-create-tenant) that's linked to your Azure subscription.
+- An [Azure AD B2C tenant](/azure/active-directory-b2c/tutorial-create-tenant) that's linked to your Azure subscription.
- A BlokSec [trial account](https://bloksec.com/). -- If you haven't already done so, [register](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-register-applications) a web application, [and enable ID token implicit grant](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-register-applications#enable-id-token-implicit-grant).
+- If you haven't already done so, [register](/azure/active-directory-b2c/tutorial-register-applications) a web application, [and enable ID token implicit grant](/azure/active-directory-b2c/tutorial-register-applications#enable-id-token-implicit-grant).
::: zone-end ::: zone pivot="b2c-custom-policy"
To get started, you'll need:
- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). -- An [Azure AD B2C tenant](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-create-tenant) that's linked to your Azure subscription.
+- An [Azure AD B2C tenant](/azure/active-directory-b2c/tutorial-create-tenant) that's linked to your Azure subscription.
- A BlokSec [trial account](https://bloksec.com/). -- If you haven't already done so, [register](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-register-applications) a web application, [and enable ID token implicit grant](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-register-applications#enable-id-token-implicit-grant).
+- If you haven't already done so, [register](/azure/active-directory-b2c/tutorial-register-applications) a web application, [and enable ID token implicit grant](/azure/active-directory-b2c/tutorial-register-applications#enable-id-token-implicit-grant).
-- Complete the steps in the [**Get started with custom policies in Azure Active Directory B2C**](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-create-user-flows?pivots=b2c-custom-policy).
+- Complete the steps in the [**Get started with custom policies in Azure Active Directory B2C**](/azure/active-directory-b2c/tutorial-create-user-flows?pivots=b2c-custom-policy).
::: zone-end ### Part 1 - Create an application registration in BlokSec
To get started, you'll need:
||| | Name |Azure AD B2C or your desired application name| |SSO type | OIDC|
- |Logo URI |[https://bloksec.io/assets/AzureB2C.png/](https://bloksec.io/assets/AzureB2C.png/) a link to the image of your choice|
+ |Logo URI |[https://bloksec.io/assets/AzureB2C.png](https://bloksec.io/assets/AzureB2C.png) a link to the image of your choice|
|Redirect URIs | https://**your-B2C-tenant-name**.b2clogin.com/**your-B2C-tenant-name**.onmicrosoft.com/oauth2/authresp<BR>**For Example**: 'https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp' <BR><BR>If you use a custom domain, enter https://**your-domain-name**/**your-tenant-name**.onmicrosoft.com/oauth2/authresp. <BR> Replace your-domain-name with your custom domain, and your-tenant-name with the name of your tenant. |
- |Post log out redirect URIs |https://**your-B2C-tenant-name**.b2clogin.com/**your-B2C-tenant-name**.onmicrosoft.com/**{policy}**/oauth2/v2.0/logout <BR> [Send a sign-out request](https://docs.microsoft.com/azure/active-directory-b2c/openid-connect#send-a-sign-out-request). |
+ |Post log out redirect URIs |https://**your-B2C-tenant-name**.b2clogin.com/**your-B2C-tenant-name**.onmicrosoft.com/**{policy}**/oauth2/v2.0/logout <BR> [Send a sign-out request](/azure/active-directory-b2c/openid-connect#send-a-sign-out-request). |
4. Once saved, select the newly created Azure AD B2C application to open the application configuration, select **Generate App Secret**.
You should now see BlokSec as a new OIDC Identity provider listed within your B2
For additional information, review the following articles: -- [Custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-overview)
+- [Custom policies in Azure AD B2C](/azure/active-directory-b2c/custom-policy-overview)
-- [Get started with custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-create-user-flows?pivots=b2c-custom-policy)
+- [Get started with custom policies in Azure AD B2C](/azure/active-directory-b2c/tutorial-create-user-flows?pivots=b2c-custom-policy)
::: zone-end ::: zone pivot="b2c-custom-policy" >[!NOTE]
->In Azure Active Directory B2C, [**custom policies**](https://docs.microsoft.com/azure/active-directory-b2c/user-flow-overview) are designed primarily to address complex scenarios. For most scenarios, we recommend that you use built-in [**user flows**](https://docs.microsoft.com/azure/active-directory-b2c/user-flow-overview).
+>In Azure Active Directory B2C, [**custom policies**](/azure/active-directory-b2c/user-flow-overview) are designed primarily to address complex scenarios. For most scenarios, we recommend that you use built-in [**user flows**](/azure/active-directory-b2c/user-flow-overview).
### Part 2 - Create a policy key
Select **Upload Custom Policy**, and then upload the two policy files that you c
1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-2. For **Application**, select a web application that you [previously registered](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-register-applications). The **Reply URL** should show `https://jwt.ms`.
+2. For **Application**, select a web application that you [previously registered](/azure/active-directory-b2c/tutorial-register-applications). The **Reply URL** should show `https://jwt.ms`.
3. Select the **Run now** button.
If the sign-in process is successful, your browser is redirected to `https://jwt
For additional information, review the following articles: -- [Custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-overview)
+- [Custom policies in Azure AD B2C](/azure/active-directory-b2c/custom-policy-overview)
-- [Get started with custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-create-user-flows?pivots=b2c-custom-policy)
+- [Get started with custom policies in Azure AD B2C](/azure/active-directory-b2c/tutorial-create-user-flows?pivots=b2c-custom-policy)
::: zone-end
active-directory-b2c Restful Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/restful-technical-profile.md
See the following articles for examples of using a RESTful technical profile:
- [Integrate REST API claims exchanges in your Azure AD B2C custom policy](api-connectors-overview.md) - [Walkthrough: Add an API connector to a sign-up user flow](add-api-connector.md)-- [Walkthrough: Add REST API claims exchanges to custom policies in Azure Active Directory B2C](custom-policy-rest-api-claims-exchange.md)
+- [Walkthrough: Add REST API claims exchanges to custom policies in Azure Active Directory B2C](add-api-connector-token-enrichment.md)
- [Secure your REST API services](secure-rest-api.md)
active-directory-b2c Tutorial Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/tutorial-create-tenant.md
Last updated 12/03/2020 + # Tutorial: Create an Azure Active Directory B2C tenant
If you don't have an Azure subscription, create a [free account](https://azure.m
![Subscription tenant, Directory + Subscription filter with subscription tenant selected](media/tutorial-create-tenant/portal-01-pick-directory.png)
+1. Add **Microsoft.AzureActiveDirectory** as a resource provider for the Azure subscription you're using ([learn more](https://docs.microsoft.com/azure/azure-resource-manager/management/resource-providers-and-types?WT.mc_id=Portal-Microsoft_Azure_Support#register-resource-provider-1)):
+
+ 1. On the Azure portal menu or from the **Home** page, select **Subscriptions**.
+ 2. Select your subscription, and then select **Resource providers**.
+ 3. Make sure the **Microsoft.AzureActiveDirectory** row shows a status of **Registered**. If it doesn't, select the row, and then select **Register**.
+ 1. On the Azure portal menu or from the **Home** page, select **Create a resource**. ![Select the Create a resource button](media/tutorial-create-tenant/create-a-resource.png)
In this article, you learned how to:
Next, learn how to register a web application in your new tenant. > [!div class="nextstepaction"]
-> [Register your applications >](tutorial-register-applications.md)
+> [Register your applications >](tutorial-register-applications.md)
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/whats-new-docs.md
Welcome to what's new in Azure Active Directory B2C documentation. This article
- [Developer notes for Azure Active Directory B2C](custom-policy-developer-notes.md) - [Add an API connector to a sign-up user flow](add-api-connector.md)-- [Walkthrough: Add REST API claims exchanges to custom policies in Azure Active Directory B2C](custom-policy-rest-api-claims-exchange.md)
+- [Walkthrough: Add REST API claims exchanges to custom policies in Azure Active Directory B2C](add-api-connector-token-enrichment.md)
- [Secure your API Connector](secure-rest-api.md) - [Use API connectors to customize and extend sign-up user flows](api-connectors-overview.md) - [Technical and feature overview of Azure Active Directory B2C](technical-overview.md)
active-directory Concept Mfa Howitworks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-mfa-howitworks.md
Previously updated : 07/14/2020 Last updated : 08/05/2021
Azure AD Multi-Factor Authentication works by requiring two or more of the follo
* Something you have, such as a trusted device that is not easily duplicated, like a phone or hardware key. * Something you are - biometrics like a fingerprint or face scan.
-Users can register themselves for both self-service password reset and Azure AD Multi-Factor Authentication in one step to simplify the on-boarding experience. Administrators can define what forms of secondary authentication can be used. Azure AD Multi-Factor Authentication can also be required when users perform a self-service password reset to further secure that process.
+Azure AD Multi-Factor Authentication can also further secure password reset. When users register themselves for Azure AD Multi-Factor Authentication, they can also register for self-service password reset in one step. Administrators can choose which forms of secondary authentication are allowed and configure when users are challenged for MFA.
-![Authentication methods in use at the sign-in screen](media/concept-authentication-methods/overview-login.png)
-
-Azure AD Multi-Factor Authentication helps safeguard access to data and applications while maintaining simplicity for users. It provides additional security by requiring a second form of authentication and delivers strong authentication via a range of easy to use [authentication methods](concept-authentication-methods.md). Users may or may not be challenged for MFA based on configuration decisions that an administrator makes.
+Apps and services don't need changes to use Azure AD Multi-Factor Authentication. The verification prompts are part of the Azure AD sign-in event, which automatically requests and processes the MFA challenge when required.
-Your applications or services don't need to make any changes to use Azure AD Multi-Factor Authentication. The verification prompts are part of the Azure AD sign-in event, which automatically requests and processes the MFA challenge when required.
+![Authentication methods in use at the sign-in screen](media/concept-authentication-methods/overview-login.png)
## Available verification methods
-When a user signs in to an application or service and receive an MFA prompt, they can choose from one of their registered forms of additional verification. An administrator could require registration of these Azure AD Multi-Factor Authentication verification methods, or the user can access their own [My Profile](https://myprofile.microsoft.com) to edit or add verification methods.
+When a user signs in to an application or service and receives an MFA prompt, they can choose from one of their registered forms of additional verification. Users can access [My Profile](https://myprofile.microsoft.com) to edit or add verification methods.
The following additional forms of verification can be used with Azure AD Multi-Factor Authentication: * Microsoft Authenticator app
-* OATH Hardware token
+* OATH Hardware token (preview)
+* OATH Software token
* SMS * Voice call ## How to enable and use Azure AD Multi-Factor Authentication
-Users and groups can be enabled for Azure AD Multi-Factor Authentication to prompt for additional verification during the sign-in event. [Security defaults](../fundamentals/concept-fundamentals-security-defaults.md) are available for all Azure AD tenants to quickly enable the use of the Microsoft Authenticator app for all users.
+All Azure AD tenants can use [security defaults](../fundamentals/concept-fundamentals-security-defaults.md) to quickly enable Microsoft Authenticator for all users. Users and groups can be enabled for Azure AD Multi-Factor Authentication to prompt for additional verification during the sign-in event.
For more granular controls, [Conditional Access](../conditional-access/overview.md) policies can be used to define events or applications that require MFA. These policies can allow regular sign-in events when the user is on the corporate network or a registered device, but prompt for additional verification factors when remote or on a personal device.
For more granular controls, [Conditional Access](../conditional-access/overview.
To learn about licensing, see [Features and licenses for Azure AD Multi-Factor Authentication](concept-mfa-licensing.md).
+To learn more about different authentication and validation methods, see [Authentication methods in Azure Active Directory](concept-authentication-methods.md).
+ To see MFA in action, enable Azure AD Multi-Factor Authentication for a set of test users in the following tutorial: > [!div class="nextstepaction"]
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-dynamic-membership.md
Previously updated : 02/18/2021 Last updated : 08/06/2021
David evaluates to true, Da evaluates to false.
The values used in an expression can consist of several types, including:
-* Strings
-* Boolean – true, false
-* Numbers
-* Arrays – number array, string array
+- Strings
+- Boolean – true, false
+- Numbers
+- Arrays – number array, string array
When specifying a value within an expression, it is important to use the correct syntax to avoid errors. Some syntax tips are listed below; a short example rule that applies them follows the list:
-* Double quotes are optional unless the value is a string.
-* String and regex operations are not case sensitive.
-* When a string value contains double quotes, both quotes should be escaped using the \` character, for example, user.department -eq \`"Sales\`" is the proper syntax when "Sales" is the value.
-* You can also perform Null checks, using null as a value, for example, `user.department -eq null`.
+- Double quotes are optional unless the value is a string.
+- String and regex operations are not case sensitive.
+- When a string value contains double quotes, both quotes should be escaped using the \` character, for example, user.department -eq \`"Sales\`" is the proper syntax when "Sales" is the value. Single quotes should be escaped by doubling them: use two single quotes in place of each one.
+- You can also perform Null checks, using null as a value, for example, `user.department -eq null`.
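
As a hedged illustration of the syntax tips above, the following PowerShell sketch uses the AzureADPreview module's `New-AzureADMSGroup` cmdlet to create a dynamic security group from a simple rule. The display name, mail nickname, and department value are illustrative assumptions, not values from this article.

```powershell
# Connect first with Connect-AzureAD (AzureADPreview module).
# Creates a dynamic security group whose members are users with department "Sales".
New-AzureADMSGroup -DisplayName "Sales team (dynamic)" `
    -MailEnabled $false -MailNickname "salesteamdynamic" -SecurityEnabled $true `
    -GroupTypes "DynamicMembership" `
    -MembershipRule 'user.department -eq "Sales"' `
    -MembershipRuleProcessingState "On"
```

Note that the string value "Sales" is wrapped in double quotes inside the rule, consistent with the syntax tips above.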
### Use of Null values To specify a null value in a rule, you can use the *null* value.
-* Use -eq or -ne when comparing the *null* value in an expression.
-* Use quotes around the word *null* only if you want it to be interpreted as a literal string value.
-* The -not operator can't be used as a comparative operator for null. If you use it, you get an error whether you use null or $null.
+- Use -eq or -ne when comparing the *null* value in an expression.
+- Use quotes around the word *null* only if you want it to be interpreted as a literal string value.
+- The -not operator can't be used as a comparative operator for null. If you use it, you get an error whether you use null or $null.
The correct way to reference the null value is as follows:
Parentheses are needed only when precedence does not meet your requirements. For
A membership rule can consist of complex expressions where the properties, operators, and values take on more complex forms. Expressions are considered complex when any of the following are true:
-* The property consists of a collection of values; specifically, multi-valued properties
-* The expressions use the -any and -all operators
-* The value of the expression can itself be one or more expressions
+- The property consists of a collection of values; specifically, multi-valued properties
+- The expressions use the -any and -all operators
+- The value of the expression can itself be one or more expressions
## Multi-value properties
Multi-value properties are collections of objects of the same type. They can be
You can use -any and -all operators to apply a condition to one or all of the items in the collection, respectively.
-* -any (satisfied when at least one item in the collection matches the condition)
-* -all (satisfied when all items in the collection match the condition)
+- -any (satisfied when at least one item in the collection matches the condition)
+- -all (satisfied when all items in the collection match the condition)
#### Example 1
Extension attributes and custom extension properties are supported as string pro
[Custom extension properties](../hybrid/how-to-connect-sync-feature-directory-extensions.md) are synced from on-premises Windows Server AD or from a connected SaaS application and are of the format of `user.extension_[GUID]_[Attribute]`, where:
-* [GUID] is the unique identifier in Azure AD for the application that created the property in Azure AD
-* [Attribute] is the name of the property as it was created
+- [GUID] is the unique identifier in Azure AD for the application that created the property in Azure AD
+- [Attribute] is the name of the property as it was created
An example of a rule that uses a custom extension property is:
active-directory Groups Naming Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-naming-policy.md
Previously updated : 06/11/2021 Last updated : 08/06/2021
To enforce consistent naming conventions for Microsoft 365 groups created or edi
> [!IMPORTANT] > Using Azure AD naming policy for Microsoft 365 groups requires that you possess but not necessarily assign an Azure Active Directory Premium P1 license or Azure AD Basic EDU license for each unique user that is a member of one or more Microsoft 365 groups.
-The naming policy is applied to creating or editing groups created across workloads (for example, Outlook, Microsoft Teams, SharePoint, Exchange, or Planner). It is applied to both the group name and group alias. If you set up your naming policy in Azure AD and you have an existing Exchange group naming policy, the Azure AD naming policy is enforced in your organization.
+The naming policy is applied to creating or editing groups created across workloads (for example, Outlook, Microsoft Teams, SharePoint, Exchange, or Planner), even if no editing changes are made. It is applied to both the group name and group alias. If you set up your naming policy in Azure AD and you have an existing Exchange group naming policy, the Azure AD naming policy is enforced in your organization.
-When group naming policy is configured, the policy will be applied to new Microsoft 365 groups created by end users. Naming policy does not apply to certain directory roles, such as Global Administrator or User Administrator (please see below for the complete list of roles exempted from group naming policy). For existing Microsoft 365 groups, the policy will not immediately apply at the time of configuration. Once group owner edits the group name for these groups, naming policy will be enforced.
+When group naming policy is configured, the policy will be applied to new Microsoft 365 groups created by end users. Naming policy does not apply to certain directory roles, such as Global Administrator or User Administrator (see below for the complete list of roles exempted from group naming policy). For existing Microsoft 365 groups, the policy will not immediately apply at the time of configuration. Once a group owner edits the group name for these groups, the naming policy will be enforced, even if no changes are made.
## Naming policy features
You can enforce naming policy for groups in two different ways:
### Prefix-suffix naming policy
-The general structure of the naming convention is 'Prefix[GroupName]Suffix'. While you can define multiple prefixes and suffixes, you can only have one instance of the [GroupName] in the setting. The prefixes or suffixes can be either fixed strings or user attributes such as \[Department\] that are substituted based on the user who is creating the group. The total allowable number of characters for your prefix and suffix strings including group name is 53 characters.
+The general structure of the naming convention is 'Prefix[GroupName]Suffix'. While you can define multiple prefixes and suffixes, you can only have one instance of the [GroupName] in the setting. The prefixes or suffixes can be either fixed strings or user attributes such as \[Department\] that are substituted based on the user who is creating the group. The total allowable number of characters for your prefix and suffix strings including group name is 53 characters.
-Prefixes and suffixes can contain special characters that are supported in group name and group alias. Any characters in the prefix or suffix that are not supported in the group alias are still applied in the group name, but removed from the group alias. Because of this restriction, the prefixes and suffixes applied to the group name might be different from the ones applied to the group alias.
+Prefixes and suffixes can contain special characters that are supported in group name and group alias. Any characters in the prefix or suffix that are not supported in the group alias are still applied in the group name, but removed from the group alias. Because of this restriction, the prefixes and suffixes applied to the group name might be different from the ones applied to the group alias.
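
As a sketch only, the prefix-suffix convention can be configured through the **Group.Unified** directory settings object with the AzureADPreview module. The prefix and suffix values below are illustrative, and the sketch assumes the settings object already exists in the tenant and that you have connected with `Connect-AzureAD`.

```powershell
# Retrieve the Group.Unified settings object and set a prefix-suffix naming requirement.
$setting = Get-AzureADDirectorySetting | Where-Object { $_.DisplayName -eq "Group.Unified" }

# Fixed strings and a user attribute around a single [GroupName] token (illustrative values).
$setting["PrefixSuffixNamingRequirement"] = "GRP_[Department]_[GroupName]"

Set-AzureADDirectorySetting -Id $setting.Id -DirectorySetting $setting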
#### Fixed strings
Some administrator roles are exempted from these policies, across all group work
![edit and upload blocked words list for naming policy](./media/groups-naming-policy/blockedwords.png)
-1. View or edit the current list of custom blocked words by selecting **Download**.
+1. View or edit the current list of custom blocked words by selecting **Download**. New entries must be added to the existing entries.
1. Upload the new list of custom blocked words by selecting the file icon. 1. Save your changes for the new policy to go into effect by selecting **Save**.
Planner | Planner is compliant with the naming policy. Planner shows the naming
Dynamics 365 for Customer Engagement | Dynamics 365 for Customer Engagement is compliant with the naming policy. Dynamics 365 shows the naming policy enforced name when the user types a group name or group email alias. When the user enters a custom blocked word, an error message is shown with the blocked word so the user can remove it. School Data Sync (SDS) | Groups created through SDS comply with naming policy, but the naming policy isn't applied automatically. SDS administrators have to append the prefixes and suffixes to class names for which groups need to be created and then uploaded to SDS. Group create or edit would fail otherwise. Classroom app | Groups created in Classroom app comply with the naming policy, but the naming policy isn't applied automatically, and the naming policy preview isn't shown to the users while entering a classroom group name. Users must enter the enforced classroom group name with prefixes and suffixes. If not, the classroom group create or edit operation fails with errors.
-Power BI | Power BI workspaces are compliant with the naming policy.
+Power BI | Power BI workspaces are compliant with the naming policy.
Yammer | When a user signed in to Yammer with their Azure Active Directory account creates a group or edits a group name, the group name will comply with naming policy. This applies both to Microsoft 365 connected groups and all other Yammer groups.<br>If a Microsoft 365 connected group was created before the naming policy is in place, the group name will not automatically follow the naming policies. When a user edits the group name, they will be prompted to add the prefix and suffix. StaffHub | StaffHub teams do not follow the naming policy, but the underlying Microsoft 365 group does. StaffHub team name does not apply the prefixes and suffixes and does not check for custom blocked words. But StaffHub does apply the prefixes and suffixes and removes blocked words from the underlying Microsoft 365 group. Exchange PowerShell | Exchange PowerShell cmdlets are compliant with the naming policy. Users receive appropriate error messages with suggested prefixes and suffixes and for custom blocked words if they don't follow the naming policy in the group name and group alias (mailNickname).
active-directory Self Service Sign Up Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/self-service-sign-up-add-api-connector.md
Custom attributes exist in the **extension_\<extensions-app-id>_AttributeName**
Additionally, the claims are typically sent in all requests: - **UI Locales ('ui_locales')** - An end-user's locale(s) as configured on their device. This can be used by your API to return internationalized responses. <!-
- - `postFederationSignup` - corresponds to "After federating with an identity provider during sign-up"
- - `postAttributeCollection` - corresponds to "Before creating the user"
+ - `PostFederationSignup` - corresponds to "After federating with an identity provider during sign-up"
+ - `PostAttributeCollection` - corresponds to "Before creating the user"
- **Client ID ('client_id')** - The `appId` value of the application that an end-user is authenticating to in a user flow. This is *not* the resource application's `appId` in access tokens. --> - **Email Address ('email')** or [**identities ('identities')**](/graph/api/resources/objectidentity) - these claims can be used by your API to identify the end-user that is authenticating to the application.
Ensure that (a minimal endpoint sketch follows this list):
* Your API explicitly checks for null values of received claims that it depends on. * Your API implements an authentication method outlined in [secure your API Connector](self-service-sign-up-secure-api-connector.md). * Your API responds as quickly as possible to ensure a fluid user experience.
+ * Azure AD will wait for a maximum of *20 seconds* to receive a response. If none is received, it will make *one more attempt (retry)* at calling your API.
* If using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm" in production. For Azure Functions, it's recommended to use at minimum the [Premium plan](../../azure-functions/functions-scale.md) * Ensure high availability of your API. * Monitor and optimize performance of downstream APIs, databases, or other dependencies of your API.
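
The following PowerShell sketch shows what a minimal API endpoint could look like as an HTTP-triggered Azure Function that always returns a continuation response. It is illustrative only, and it assumes the documented `version`/`action` response shape; a production API would authenticate the caller, validate the incoming claims (including null checks), and return the response formats described in this article.

```powershell
using namespace System.Net

param($Request, $TriggerMetadata)

# Claims arrive as JSON in the request body; 'email' is one example claim.
$email = $Request.Body.email
Write-Host "Received sign-up request for $email"

# Minimal continuation response so the user flow proceeds.
$body = @{
    version = "1.0.0"
    action  = "Continue"
} | ConvertTo-Json

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = $body
})
```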
In general, it's helpful to use the logging tools enabled by your web API servic
## Next steps - Learn how to [add a custom approval workflow to self-service sign-up](self-service-sign-up-add-approvals.md)-- Get started with our [quickstart samples](code-samples-self-service-sign-up.md#api-connector-azure-function-quickstarts).
+- Get started with our [quickstart samples](code-samples-self-service-sign-up.md#api-connector-azure-function-quickstarts).
active-directory Howto Integrate Activity Logs With Splunk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-splunk.md
na Previously updated : 03/10/2021 Last updated : 08/05/2021
To use this feature, you need:
![The "Data Summary" button](./media/howto-integrate-activity-logs-with-splunk/DataSummary.png)
-2. Select the **Sourcetypes** tab, and then select **amal: aadal:audit**
+2. Select the **Sourcetypes** tab, and then select **mscs:azure:eventhub**
- ![The Data Summary Sourcetypes tab](./media/howto-integrate-activity-logs-with-splunk/sourcetypeaadal.png)
+ ![The Data Summary Sourcetypes tab](./media/howto-integrate-activity-logs-with-splunk/source-eventhub.png)
- The Azure AD activity logs are shown in the following figure:
+Append **body.records.category=AuditLogs** to the search. The Azure AD activity logs are shown in the following figure:
- ![Activity logs](./media/howto-integrate-activity-logs-with-splunk/activitylogs.png)
+ ![Activity logs](./media/howto-integrate-activity-logs-with-splunk/activity-logs.png)
> [!NOTE]
-> If you cannot install an add-on in your Splunk instance (for example, if you're using a proxy or running on Splunk Cloud), you can forward these events to the Splunk HTTP Event Collector. To do so, use this [Azure function](https://github.com/Microsoft/AzureFunctionforSplunkVS), which is triggered by new messages in the event hub.
+> If you cannot install an add-on in your Splunk instance (for example, if you're using a proxy or running on Splunk Cloud), you can forward these events to the Splunk HTTP Event Collector. To do so, use this [Azure function](https://github.com/splunk/azure-functions-splunk), which is triggered by new messages in the event hub.
> ## Next steps
active-directory Custom Available Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-available-permissions.md
This article contains the currently available app registration permissions for custom role definitions in Azure Active Directory (Azure AD).
+## License requirements
++ ## Permissions for managing single-tenant applications When choosing the permissions for your custom role, you have the option to grant access to manage only single-tenant applications. Single-tenant applications are available only to users in the Azure AD organization where the application is registered. Single-tenant applications are defined as having **Supported account types** set to "Accounts in this organizational directory only." In the Graph API, single-tenant applications have the signInAudience property set to "AzureADMyOrg."
To grant access to manage only single-tenant applications, use the permissions b
See the [custom roles overview](custom-overview.md) for an explanation of what the general terms subtype, permission, and property set mean. The following information is specific to application registrations.
-### Create and delete
+## Create and delete
There are two permissions available for granting the ability to create application registrations, each with different behavior:
Grants the ability to delete app registrations restricted to those that are acce
> [!NOTE] > When assigning a role that contains create permissions, the role assignment must be made at the directory scope. A create permission assigned at a resource scope does not grant the ability to create app registrations.
-### Read
+## Read
All member users in the organization can read app registration information by default. However, guest users and application service principals can't. If you plan to assign a role to a guest user or application, you must include the appropriate read permissions.
Grants access to read standard application registration properties. This include
Grants the same permissions as microsoft.directory/applications/standard/read, but for only single-tenant applications.
-### Update
+## Update
#### microsoft.directory/applications/allProperties/update
Ability to update the delegated permissions, application permissions, authorized
Grants the same permissions as microsoft.directory/applications/permissions/update, but only for single-tenant applications.
-## License requirements
-- ## Next steps - Create custom roles using [the Azure portal, Azure AD PowerShell, and Graph API](custom-create.md)
active-directory Custom Consent Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-consent-permissions.md
Use the permissions listed in this article to manage app consent policies, as we
> [!NOTE] > The Azure AD admin portal does not yet support adding the permissions listed in this article to a custom directory role definition. You must [use Azure AD PowerShell to create a custom directory role](custom-create.md#create-a-role-using-powershell) with the permissions listed in this article.
-### Granting delegated permissions to apps on behalf of self (user consent)
+#### Granting delegated permissions to apps on behalf of self (user consent)
To allow users to grant consent to applications on behalf of themselves (user consent), subject to an app consent policy.
Where `{id}` is replaced by the ID of an [app consent policy](../manage-apps/man
For example, to allow users to grant consent on their own behalf, subject to the built-in app consent policy with ID `microsoft-user-default-low`, you would use the permission `...managePermissionGrantsForSelf.microsoft-user-default-low`.
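
As a sketch (not this article's own example), a custom role carrying this permission could be created with the AzureADPreview module's `New-AzureADMSRoleDefinition` cmdlet; the display name and description below are illustrative assumptions.

```powershell
# Custom role that lets assignees consent on their own behalf,
# subject to the microsoft-user-default-low app consent policy.
$permissions = @(
    "microsoft.directory/servicePrincipals/managePermissionGrantsForSelf.microsoft-user-default-low"
)
$rolePermissions = @{ 'allowedResourceActions' = $permissions }

New-AzureADMSRoleDefinition -RolePermissions $rolePermissions `
    -DisplayName "Low-risk user consent" `
    -Description "Can consent to apps on own behalf, subject to the low-risk policy." `
    -TemplateId (New-Guid).Guid -IsEnabled $true
```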
-### Granting permissions to apps on behalf of all (admin consent)
+#### Granting permissions to apps on behalf of all (admin consent)
To delegate tenant-wide admin consent to apps, for both delegated permissions and application permissions (app roles):
Where `{id}` is replaced by the ID of an [app consent policy](../manage-apps/man
For example, to allow role assignees to grant tenant-wide admin consent to apps subject to a custom [app consent policy](../manage-apps/manage-app-consent-policies.md) with ID `low-risk-any-app`, you would use the permission `microsoft.directory/servicePrincipals/managePermissionGrantsForAll.low-risk-any-app`.
-### Managing app consent policies
+#### Managing app consent policies
To delegate the creation, update and deletion of [app consent policies](../manage-apps/manage-app-consent-policies.md).
active-directory Custom Enterprise App Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-enterprise-app-permissions.md
Previously updated : 11/04/2020 Last updated : 08/06/2021
# Enterprise application permissions for custom roles in Azure Active Directory
-This article contains the currently available enterprise application permissions for custom role definitions in Azure Active Directory (Azure AD). In this article, you'll find permission lists for some common scenarios and the full list of enterprise app permissions. Application Proxy permissions are not currently rolled out in this release.
+This article contains the currently available enterprise application permissions for custom role definitions in Azure Active Directory (Azure AD). In this article, you'll find permission lists for some common scenarios and the full list of enterprise app permissions.
## License requirements
This article contains the currently available enterprise application permissions
For more information about how to use these permissions, see [Assign custom roles to manage enterprise apps](custom-enterprise-apps.md)
-### Assigning users or groups to an application
+#### Assigning users or groups to an application
To delegate the assignment of user and groups that can access SAML based single sign-on applications. Permissions required - microsoft.directory/servicePrincipals/appRoleAssignedTo/update
-### Creating gallery applications
+#### Creating gallery applications
To delegate the creation of Azure AD Gallery applications such as ServiceNow, F5, Salesforce, among others. Permissions required: - microsoft.directory/applicationTemplates/instantiate
-### Configuring basic SAML URLs
+#### Configuring basic SAML URLs
To delegate the update and read of basic SAML Configurations for SAML based single sign-on applications. Permissions required: - microsoft.directory/servicePrincipals/authentication/update - microsoft.directory/applications.myOrganization/authentication/update
-### Rolling over or creating signing certs
+#### Rolling over or creating signing certs
To delegate the management of signing certificates for SAML based single sign-on applications. Permissions required. microsoft.directory/applications/credentials/update
-### Update expiring sign-in cert notification email address
+#### Update expiring sign-in cert notification email address
To delegate the update of expiring sign-in certificates notification email addresses for SAML based single sign-on applications. Permissions required:
To delegate the update of expiring sign-in certificates notification email addre
- microsoft.directory/servicePrincipals/authentication/update - microsoft.directory/servicePrincipals/basic/update
-### Manage SAML token signature and Sign-in algorithm
+#### Manage SAML token signature and Sign-in algorithm
To delegate the update of the SAML token signature and sign-in algorithm for SAML based single sign-on applications. Permissions required:
To delegate the update of the SAML token signature and sign-in algorithm for SAM
- microsoft.directory/applications/authentication/update - microsoft.directory/servicePrincipals/policies/update
-### Manage user attributes and claims
+#### Manage user attributes and claims
To delegate the create, delete, and update of user attributes and claims for SAML based single sign-on applications. Permissions required:
Performing any write operation such as managing the job, schema, or credentials
Setting the scope to all users and groups or assigned users and groups currently requires both the synchronizationJob and synchronizationCredentials permissions.
-### Turn on or restart provisioning jobs
+#### Turn on or restart provisioning jobs
To delegate ability to turn on, off and restart provisioning jobs. Permissions required: - microsoft.directory/servicePrincipals/synchronizationJobs/manage
-### Configure the provisioning schema
+#### Configure the provisioning schema
To delegate updates to attribute mapping. Permissions required: - microsoft.directory/servicePrincipals/synchronizationSchema/manage
-### Read provisioning settings associated with the application object
+#### Read provisioning settings associated with the application object
To delegate ability to read provisioning settings associated with the object. Permissions required: - microsoft.directory/applications/synchronization/standard/read
-### Read provisioning settings associated with your service principal
+#### Read provisioning settings associated with your service principal
To delegate ability to read provisioning settings associated with your service principal. Permissions required: - microsoft.directory/servicePrincipals/synchronization/standard/read
-### Authorize application access for provisioning
+#### Authorize application access for provisioning
To delegate ability to authorize application access for provisioning. Example input Oauth bearer token. Permissions required: - microsoft.directory/servicePrincipals/synchronizationCredentials/manage
+## Application Proxy permissions
+
+Performing any write operations to the Application Proxy properties of the application also requires the permissions to update the application's basic properties and authentication.
+
+To read or perform any write operations on the Application Proxy properties of an application, you also need read permission for connector groups, because the connector group is part of the list of properties shown on the page.
+
+#### Delegate Application Proxy connector management
+
+To delegate create, read, update, and delete actions for connector management. Permissions required (a sketch of a matching custom role follows this list):
+
+- microsoft.directory/connectorGroups/allProperties/read
+- microsoft.directory/connectorGroups/allProperties/update
+- microsoft.directory/connectorGroups/create
+- microsoft.directory/connectorGroups/delete
+- microsoft.directory/connectors/allProperties/read
+- microsoft.directory/connectors/create
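
As a sketch only, the connector-management permissions above could be bundled into a custom role with `New-AzureADMSRoleDefinition` (AzureADPreview module); the display name is an illustrative assumption.

```powershell
# Custom role for managing Application Proxy connectors and connector groups.
$permissions = @(
    "microsoft.directory/connectorGroups/allProperties/read",
    "microsoft.directory/connectorGroups/allProperties/update",
    "microsoft.directory/connectorGroups/create",
    "microsoft.directory/connectorGroups/delete",
    "microsoft.directory/connectors/allProperties/read",
    "microsoft.directory/connectors/create"
)

New-AzureADMSRoleDefinition -RolePermissions @{ 'allowedResourceActions' = $permissions } `
    -DisplayName "Application Proxy connector manager" `
    -TemplateId (New-Guid).Guid -IsEnabled $true
```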
++
+#### Delegate Application Proxy settings management
+
+To delegate create, read, update, and delete actions for Application Proxy properties on an app. Permissions required:
+
+- microsoft.directory/applications/applicationProxy/read
+- microsoft.directory/applications/applicationProxy/update
+- microsoft.directory/applications/applicationProxyAuthentication/update
+- microsoft.directory/applications/applicationProxySslCertificate/update
+- microsoft.directory/applications/applicationProxyUrlSettings/update
+- microsoft.directory/applications/basic/update
+- microsoft.directory/applications/authentication/update
+- microsoft.directory/connectorGroups/allProperties/read
+
+#### Read Application Proxy Settings for an app
+
+To delegate read permissions for Application Proxy properties on an app. Permissions required:
+
+- microsoft.directory/applications/applicationProxy/read
+- microsoft.directory/connectorGroups/allProperties/read
+
+#### Update URL configuration Application Proxy settings for an app
+
+To delegate create, read, update, and delete (CRUD) permissions for updating the Application Proxy external URL, internal URL, and SSL certificate properties. Permissions required:
+
+- microsoft.directory/applications/applicationProxy/read
+- microsoft.directory/connectorGroups/allProperties/read
+- microsoft.directory/applications/basic/update
+- microsoft.directory/applications/authentication/update
+- microsoft.directory/applications/applicationProxyAuthentication/update
+- microsoft.directory/applications/applicationProxySslCertificate/update
+- microsoft.directory/applications/applicationProxyUrlSettings/update
+ ## Full list of permissions > [!div class="mx-tableFixed"] > | Permission | Description | > | - | -- |
-> | microsoft.directory/applicationPolicies/allProperties/read | Read all properties on application policies. |
-> | microsoft.directory/applicationPolicies/allProperties/update | Update all properties on application policies. |
-> | microsoft.directory/applicationPolicies/basic/update | Update standard properties of application policies. |
-> | microsoft.directory/applicationPolicies/create | Create application policies. |
-> | microsoft.directory/applicationPolicies/createAsOwner | Create application policies. Creator is added as the first owner. |
-> | microsoft.directory/applicationPolicies/delete | Delete application policies. |
-> | microsoft.directory/applicationPolicies/owners/read | Read owners on application policies. |
-> | microsoft.directory/applicationPolicies/owners/update | Update the owner property of application policies. |
-> | microsoft.directory/applicationPolicies/policyAppliedTo/read | Read application policies applied to objects list. |
-> | microsoft.directory/applicationPolicies/standard/read | Read standard properties of application policies. |
-> | microsoft.directory/servicePrincipals/allProperties/allTasks | Create and delete servicePrincipals, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/servicePrincipals/allProperties/read | Read all properties on servicePrincipals. |
-> | microsoft.directory/servicePrincipals/allProperties/update | Update all properties on servicePrincipals. |
-> | microsoft.directory/servicePrincipals/appRoleAssignedTo/read | Read service principal role assignments. |
-> | microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments. |
-> | microsoft.directory/servicePrincipals/appRoleAssignments/read | Read role assignments assigned to service principals. |
-> | microsoft.directory/servicePrincipals/audience/update | Update audience properties on service principals. |
-> | microsoft.directory/servicePrincipals/authentication/update | Update authentication properties on service principals. |
-> | microsoft.directory/servicePrincipals/basic/update | Update basic properties on service principals. |
-> | microsoft.directory/servicePrincipals/create | Create service principals. |
-> | microsoft.directory/servicePrincipals/createAsOwner | Create service principals. Creator is added as the first owner. |
-> | microsoft.directory/servicePrincipals/credentials/update | Update credentials properties on service principals. |
-> | microsoft.directory/servicePrincipals/delete | Delete service principals. |
-> | microsoft.directory/servicePrincipals/disable | Disable service principals. |
-> | microsoft.directory/servicePrincipals/enable | Enable service principals. |
-> | microsoft.directory/servicePrincipals/getPasswordSingleSignOnCredentials | Read password single sign-on credentials on service principals. |
-> | microsoft.directory/servicePrincipals/managePasswordSingleSignOnCredentials | Manage password single sign-on credentials on service principals. |
-> | microsoft.directory/servicePrincipals/oAuth2PermissionGrants/read | Read delegated permission grants on service principals. |
-> | microsoft.directory/servicePrincipals/owners/read | Read owners on service principals. |
-> | microsoft.directory/servicePrincipals/owners/update | Update owners on service principals. |
-> | microsoft.directory/servicePrincipals/permissions/update | |
-> | microsoft.directory/servicePrincipals/policies/read | Read policies on service principals. |
-> | microsoft.directory/servicePrincipals/policies/update | Update policies on service principals. |
-> | microsoft.directory/servicePrincipals/standard/read | Read standard properties of service principals. |
-> | microsoft.directory/servicePrincipals/synchronization/standard/read | Read provisioning settings associated with your service principal. |
-> | microsoft.directory/servicePrincipals/tag/update | Update tags property on service principals. |
-> | microsoft.directory/applicationTemplates/instantiate | Instantiate gallery applications from application templates. |
-> | microsoft.directory/auditLogs/allProperties/read | Read audit logs. |
-> | microsoft.directory/signInReports/allProperties/read | Read sign-in reports. |
-> | microsoft.directory/applications/synchronization/standard/read | Read provisioning settings associated with the application object. |
+> | microsoft.directory/applicationPolicies/allProperties/read | Read all properties on application policies |
+> | microsoft.directory/applicationPolicies/allProperties/update | Update all properties on application policies |
+> | microsoft.directory/applicationPolicies/basic/update | Update standard properties of application policies |
+> | microsoft.directory/applicationPolicies/create | Create application policies |
+> | microsoft.directory/applicationPolicies/createAsOwner | Create application policies. Creator is added as the first owner |
+> | microsoft.directory/applicationPolicies/delete | Delete application policies |
+> | microsoft.directory/applicationPolicies/owners/read | Read owners on application policies |
+> | microsoft.directory/applicationPolicies/owners/update | Update the owner property of application policies |
+> | microsoft.directory/applicationPolicies/policyAppliedTo/read | Read application policies applied to objects list |
+> | microsoft.directory/applicationPolicies/standard/read | Read standard properties of application policies |
+> | microsoft.directory/servicePrincipals/allProperties/allTasks | Create and delete servicePrincipals, and read and update all properties in Azure Active Directory |
+> | microsoft.directory/servicePrincipals/allProperties/read | Read all properties on servicePrincipals |
+> | microsoft.directory/servicePrincipals/allProperties/update | Update all properties on servicePrincipals |
+> | microsoft.directory/servicePrincipals/appRoleAssignedTo/read | Read service principal role assignments |
+> | microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments |
+> | microsoft.directory/servicePrincipals/appRoleAssignments/read | Read role assignments assigned to service principals |
+> | microsoft.directory/servicePrincipals/audience/update | Update audience properties on service principals |
+> | microsoft.directory/servicePrincipals/authentication/update | Update authentication properties on service principals |
+> | microsoft.directory/servicePrincipals/basic/update | Update basic properties on service principals |
+> | microsoft.directory/servicePrincipals/create | Create service principals |
+> | microsoft.directory/servicePrincipals/createAsOwner | Create service principals. Creator is added as the first owner |
+> | microsoft.directory/servicePrincipals/credentials/update | Update credentials properties on service principals |
+> | microsoft.directory/servicePrincipals/delete | Delete service principals |
+> | microsoft.directory/servicePrincipals/disable | Disable service principals |
+> | microsoft.directory/servicePrincipals/enable | Enable service principals |
+> | microsoft.directory/servicePrincipals/getPasswordSingleSignOnCredentials | Read password single sign-on credentials on service principals |
+> | microsoft.directory/servicePrincipals/managePasswordSingleSignOnCredentials | Manage password single sign-on credentials on service principals |
+> | microsoft.directory/servicePrincipals/oAuth2PermissionGrants/read | Read delegated permission grants on service principals |
+> | microsoft.directory/servicePrincipals/owners/read | Read owners on service principals |
+> | microsoft.directory/servicePrincipals/owners/update | Update owners on service principals |
+> | microsoft.directory/servicePrincipals/permissions/update | Update permissions of service principals |
+> | microsoft.directory/servicePrincipals/policies/read | Read policies on service principals |
+> | microsoft.directory/servicePrincipals/policies/update | Update policies on service principals |
+> | microsoft.directory/servicePrincipals/standard/read | Read standard properties of service principals |
+> | microsoft.directory/servicePrincipals/synchronization/standard/read | Read provisioning settings associated with your service principal |
+> | microsoft.directory/servicePrincipals/tag/update | Update tags property on service principals |
+> | microsoft.directory/applicationTemplates/instantiate | Instantiate gallery applications from application templates |
+> | microsoft.directory/auditLogs/allProperties/read | Read audit logs |
+> | microsoft.directory/signInReports/allProperties/read | Read sign-in reports |
+> | microsoft.directory/applications/applicationProxy/read | Read all application proxy properties of all types of applications |
+> | microsoft.directory/applications/applicationProxy/update | Update all application proxy properties of all types of applications |
+> | microsoft.directory/applications/applicationProxyAuthentication/update | Update application proxy authentication properties of all types of applications |
+> | microsoft.directory/applications/applicationProxyUrlSettings/update | Update application proxy internal and external URLs of all types of applications |
+> | microsoft.directory/applications/applicationProxySslCertificate/update | Update application proxy custom domains of all types of applications |
+> | microsoft.directory/applications/synchronization/standard/read | Read provisioning settings associated with the application object |
+> | microsoft.directory/connectorGroups/create | Create application proxy connector groups |
+> | microsoft.directory/connectorGroups/delete | Delete application proxy connector groups |
+> | microsoft.directory/connectorGroups/allProperties/read | Read all properties of application proxy connector groups |
+> | microsoft.directory/connectorGroups/allProperties/update | Update all properties of application proxy connector groups |
+> | microsoft.directory/connectors/create | Create application proxy connectors |
+> | microsoft.directory/connectors/allProperties/read | Read all properties of application proxy connectors |
> | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Manage all aspects of job synchronization for service principal resources | > | microsoft.directory/servicePrincipals/synchronization/standard/read | Read provisioning settings associated with service principals | > | microsoft.directory/servicePrincipals/synchronizationSchema/manage | Manage all aspects of schema synchronization for service principal resources |
active-directory Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/prerequisites.md
Previously updated : 07/30/2021 Last updated : 08/06/2021
To use PowerShell commands to do the following:
You must have the following module installed: -- [AzureADPreview](https://www.powershellgallery.com/packages/AzureADPreview) version 2.0.2.129 or later
+- [AzureADPreview](https://www.powershellgallery.com/packages/AzureADPreview) version 2.0.2.138 or later
#### Check AzureADPreview version
You should see output similar to the following:
```powershell Version Name Repository Description - - - --
-2.0.2.129 AzureADPreview PSGallery Azure Active Directory V2 Preview Module. ...
+2.0.2.138 AzureADPreview PSGallery Azure Active Directory V2 Preview Module. ...
``` #### Install AzureADPreview
To use AzureADPreview, follow these steps to make sure it is imported into the c
```powershell ModuleType Version Name ExportedCommands - - - -
- Binary 2.0.2.129 AzureADPreview {Add-AzureADAdministrativeUnitMember, Add-AzureADApplicati...
+ Binary 2.0.2.138 AzureADPreview {Add-AzureADAdministrativeUnitMember, Add-AzureADApplicati...
``` ## Graph Explorer
active-directory User Help Auth App Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/user-help-auth-app-faq.md
On Android, Microsoft recommends allowing the app to access location all the tim
**A**: The Authenticator app collects your GPS information to determine what country you are located in. The country name and location coordinates are sent back to the system to determine if you are allowed to access the protected resource. The country name is stored and reported back to your IT admin, but your actual coordinates are never saved or stored on Microsoft servers.
+### Notification blocks sign-in
+
+**Q**: I'm trying to sign in and I need to select the number in my app that's displayed on the sign-in screen. However, the notification prompt from Authenticator is blocking the sign-in screen. What do I do?
+
+**A**: Select the "Hide" option on the notification so you can see the sign-in screen and the number you need to select. The prompt will reappear after 5 seconds, and you can select the correct number then.
+ ### Registering a device **Q**: Is registering a device agreeing to give the company or service access to my device?
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-security.md
When an AKS cluster is created or scaled up, the nodes are automatically deploye
#### Linux nodes Each evening, Linux nodes in AKS get security patches through their distro security update channel. This behavior is automatically configured as the nodes are deployed in an AKS cluster. To minimize disruption and potential impact to running workloads, nodes are not automatically rebooted if a security patch or kernel update requires it. For more information about how to handle node reboots, see [Apply security and kernel updates to nodes in AKS][aks-kured].
-Nightly updates apply security updates to the OS on the node, but the node image used to create nodes for your cluster remains unchanged. If a new Linux node is added to your cluster, the original image is used to create the node. This new node will receive all the security and kernel updates available during the automatic check every night but will remain unpatched until all checks and restarts are complete. You can use node image upgrade to check for and update node images used by your cluster. For more details on nod image upgrade, see [Azure Kubernetes Service (AKS) node image upgrade][node-image-upgrade].
+Nightly updates apply security updates to the OS on the node, but the node image used to create nodes for your cluster remains unchanged. If a new Linux node is added to your cluster, the original image is used to create the node. This new node will receive all the security and kernel updates available during the automatic check every night but will remain unpatched until all checks and restarts are complete. You can use node image upgrade to check for and update node images used by your cluster. For more details on node image upgrade, see [Azure Kubernetes Service (AKS) node image upgrade][node-image-upgrade].
#### Windows Server nodes
For more information on core Kubernetes and AKS concepts, see:
[authorized-ip-ranges]: api-server-authorized-ip-ranges.md [private-clusters]: private-clusters.md [network-policy]: use-network-policies.md
-[node-image-upgrade]: node-image-upgrade.md
+[node-image-upgrade]: node-image-upgrade.md
aks Gpu Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/gpu-cluster.md
Title: Use GPUs on Azure Kubernetes Service (AKS)
description: Learn how to use GPUs for high performance compute or graphics-intensive workloads on Azure Kubernetes Service (AKS) Previously updated : 08/21/2020-- Last updated : 08/06/2021 #Customer intent: As a cluster administrator or developer, I want to create an AKS cluster that can use high-performance GPU-based VMs for compute-intensive workloads. # Use GPUs for compute-intensive workloads on Azure Kubernetes Service (AKS)
-Graphical processing units (GPUs) are often used for compute-intensive workloads such as graphics and visualization workloads. AKS supports the creation of GPU-enabled node pools to run these compute-intensive workloads in Kubernetes. For more information on available GPU-enabled VMs, see [GPU optimized VM sizes in Azure][gpu-skus]. For AKS nodes, we recommend a minimum size of *Standard_NC6*.
+Graphical processing units (GPUs) are often used for compute-intensive workloads such as graphics and visualization workloads. AKS supports the creation of GPU-enabled node pools to run these compute-intensive workloads in Kubernetes. For more information on available GPU-enabled VMs, see [GPU optimized VM sizes in Azure][gpu-skus]. For AKS node pools, we recommend a minimum size of *Standard_NC6*.
> [!NOTE] > GPU-enabled VMs contain specialized hardware that is subject to higher pricing and region availability. For more information, see the [pricing][azure-pricing] tool and [region availability][azure-availability].
Currently, using GPU-enabled node pools is only available for Linux node pools.
## Before you begin
-This article assumes that you have an existing AKS cluster with nodes that support GPUs. Your AKS cluster must run Kubernetes 1.10 or later. If you need an AKS cluster that meets these requirements, see the first section of this article to [create an AKS cluster](#create-an-aks-cluster).
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see [Quickstart: Deploy an Azure Kubernetes Service cluster using the Azure CLI][aks-quickstart].
You also need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-## Create an AKS cluster
+## Get the credentials for your cluster
-If you need an AKS cluster that meets the minimum requirements (GPU-enabled node and Kubernetes version 1.10 or later), complete the following steps. If you already have an AKS cluster that meets these requirements, [skip to the next section](#confirm-that-gpus-are-schedulable).
-
-First, create a resource group for the cluster using the [az group create][az-group-create] command. The following example creates a resource group name *myResourceGroup* in the *eastus* region:
+Get the credentials for your AKS cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following example command gets the credentials for the *myAKSCluster* in the *myResourceGroup* resource group.
```azurecli-interactive
-az group create --name myResourceGroup --location eastus
+az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+```
+
+## Add the NVIDIA device plugin
+
+There are two options for adding the NVIDIA device plugin:
+
+* Use the AKS GPU image
+* Manually install the NVIDIA device plugin
+
+> [!WARNING]
+> You can use either of the above options, but you shouldn't manually install the NVIDIA device plugin daemon set with clusters that use the AKS GPU image.
+
+### Update your cluster to use the AKS GPU image (preview)
+
+AKS provides a fully configured AKS image that already contains the [NVIDIA device plugin for Kubernetes][nvidia-github].
+
+Register the `GPUDedicatedVHDPreview` feature:
+
+```azurecli
+az feature register --name GPUDedicatedVHDPreview --namespace Microsoft.ContainerService
+```
+
+It might take several minutes for the status to show as **Registered**. You can check the registration status by using the [az feature list](/cli/azure/feature#az_feature_list) command:
+
+```azurecli
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/GPUDedicatedVHDPreview')].{Name:name,State:properties.state}"
+```
+
+When the status shows as registered, refresh the registration of the `Microsoft.ContainerService` resource provider by using the [az provider register](/cli/azure/provider#az_provider_register) command:
+
+```azurecli
+az provider register --namespace Microsoft.ContainerService
+```
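+
+If you want to confirm that the resource provider finished re-registering (a quick check, not part of the original steps), you can query its registration state:
+
+```azurecli
+az provider show --namespace Microsoft.ContainerService --query registrationState --output tsv
+```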
+
+To install the aks-preview CLI extension, use the following Azure CLI command:
+
+```azurecli
+az extension add --name aks-preview
+```
+
+To update the aks-preview CLI extension, use the following Azure CLI command:
+
+```azurecli
+az extension update --name aks-preview
```
-Now create an AKS cluster using the [az aks create][az-aks-create] command. The following example creates a cluster with a single node of size `Standard_NC6`:
+## Add a node pool for GPU nodes
+
+To add a GPU-enabled node pool to your cluster, use the [az aks nodepool add][az-aks-nodepool-add] command.
```azurecli-interactive
-az aks create \
+az aks nodepool add \
--resource-group myResourceGroup \
- --name myAKSCluster \
+ --cluster-name myAKSCluster \
+ --name gpunp \
+ --node-count 1 \
--node-vm-size Standard_NC6 \
- --node-count 1
+ --node-taints sku=gpu:NoSchedule \
+ --aks-custom-headers UseGPUDedicatedVHD=true \
+ --enable-cluster-autoscaler \
+ --min-count 1 \
+ --max-count 3
```
-Get the credentials for your AKS cluster using the [az aks get-credentials][az-aks-get-credentials] command:
+The above command adds a node pool named *gpunp* to the *myAKSCluster* in the *myResourceGroup* resource group. The command also sets the VM size for the nodes in the node pool to *Standard_NC6*, enables the cluster autoscaler, configures the cluster autoscaler to maintain a minimum of one node and a maximum of three nodes in the node pool, specifies the specialized AKS GPU image for the nodes in your new node pool, and specifies a *sku=gpu:NoSchedule* taint for the node pool.
+
+> [!NOTE]
+> The taint and VM size can only be set during node pool creation, but the autoscaler settings can be updated at any time.
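+
+For instance, the autoscaler limits on the *gpunp* node pool created above could later be adjusted with `az aks nodepool update` (a minimal sketch; the new maximum of five nodes is only an example):
+
+```azurecli-interactive
+az aks nodepool update \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name gpunp \
+    --update-cluster-autoscaler \
+    --min-count 1 \
+    --max-count 5
+```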
+
+> [!NOTE]
+> If your GPU SKU requires generation 2 VMs, use *--aks-custom-headers UseGPUDedicatedVHD=true,usegen2vm=true*. For example:
+>
+> ```azurecli
+> az aks nodepool add \
+> --resource-group myResourceGroup \
+> --cluster-name myAKSCluster \
+> --name gpunp \
+> --node-count 1 \
+> --node-vm-size Standard_NC6 \
+> --node-taints sku=gpu:NoSchedule \
+> --aks-custom-headers UseGPUDedicatedVHD=true,usegen2vm=true \
+> --enable-cluster-autoscaler \
+> --min-count 1 \
+> --max-count 3
+> ```
+
+### Manually install the NVIDIA device plugin
+
+Alternatively, you can deploy a DaemonSet for the NVIDIA device plugin. This DaemonSet runs a pod on each node to provide the required drivers for the GPUs.
+
+Add a node pool to your cluster using the [az aks nodepool add][az-aks-nodepool-add] command.
```azurecli-interactive
-az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name gpunp \
+ --node-count 1 \
+ --node-vm-size Standard_NC6 \
+ --node-taints sku=gpu:NoSchedule \
+ --enable-cluster-autoscaler \
+ --min-count 1 \
+ --max-count 3
```
-## Install NVIDIA device plugin
+The above command adds a node pool named *gpunp* to the *myAKSCluster* in the *myResourceGroup* resource group. The command also sets the VM size for the nodes in the node pool to *Standard_NC6*, enables the cluster autoscaler, configures the cluster autoscaler to maintain a minimum of one node and a maximum of three nodes in the node pool, and specifies a *sku=gpu:NoSchedule* taint for the node pool.
-Before the GPUs in the nodes can be used, you must deploy a DaemonSet for the NVIDIA device plugin. This DaemonSet runs a pod on each node to provide the required drivers for the GPUs.
+> [!NOTE]
+> The taint and VM size can only be set during node pool creation, but the autoscaler settings can be updated at any time.
-First, create a namespace using the [kubectl create namespace][kubectl-create] command, such as *gpu-resources*:
+Create a namespace using the [kubectl create namespace][kubectl-create] command, such as *gpu-resources*:
```console
kubectl create namespace gpu-resources
spec:
- key: nvidia.com/gpu operator: Exists effect: NoSchedule
+ - key: "sku"
+ operator: "Equal"
+ value: "gpu"
+ effect: "NoSchedule"
containers: - image: mcr.microsoft.com/oss/nvidia/k8s-device-plugin:1.11 name: nvidia-device-plugin-ctr
spec:
path: /var/lib/kubelet/device-plugins ```
-Now use the [kubectl apply][kubectl-apply] command to create the DaemonSet and confirm the NVIDIA device plugin is created successfully, as shown in the following example output:
+Use [kubectl apply][kubectl-apply] to create the DaemonSet and confirm the NVIDIA device plugin is created successfully, as shown in the following example output:
```console
$ kubectl apply -f nvidia-device-plugin-ds.yaml

daemonset "nvidia-device-plugin" created
```
-## Use the AKS specialized GPU image (preview)
-
-As alternative to these steps, AKS is providing a fully configured AKS image that already contains the [NVIDIA device plugin for Kubernetes][nvidia-github].
-
-> [!WARNING]
-> You should not manually install the NVIDIA device plugin daemon set for clusters using the new AKS specialized GPU image.
--
-Register the `GPUDedicatedVHDPreview` feature:
-
-```azurecli
-az feature register --name GPUDedicatedVHDPreview --namespace Microsoft.ContainerService
-```
-
-It might take several minutes for the status to show as **Registered**. You can check the registration status by using the [az feature list](/cli/azure/feature#az_feature_list) command:
-
-```azurecli
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/GPUDedicatedVHDPreview')].{Name:name,State:properties.state}"
-```
-
-When the status shows as registered, refresh the registration of the `Microsoft.ContainerService` resource provider by using the [az provider register](/cli/azure/provider#az_provider_register) command:
-
-```azurecli
-az provider register --namespace Microsoft.ContainerService
-```
-
-To install the aks-preview CLI extension, use the following Azure CLI commands:
-
-```azurecli
-az extension add --name aks-preview
-```
-
-To update the aks-preview CLI extension, use the following Azure CLI commands:
-
-```azurecli
-az extension update --name aks-preview
-```
-
-### Use the AKS specialized GPU image on new clusters (preview)
-
-Configure the cluster to use the AKS specialized GPU image when the cluster is created. Use the `--aks-custom-headers` flag for the GPU agent nodes on your new cluster to use the AKS specialized GPU image.
-
-```azurecli
-az aks create --name myAKSCluster --resource-group myResourceGroup --node-vm-size Standard_NC6 --node-count 1 --aks-custom-headers UseGPUDedicatedVHD=true
-```
-
-If you want to create a cluster using the regular AKS images, you can do so by omitting the custom `--aks-custom-headers` tag. You can also choose to add more specialized GPU node pools as per below.
--
-### Use the AKS specialized GPU image on existing clusters (preview)
-
-Configure a new node pool to use the AKS specialized GPU image. Use the `--aks-custom-headers` flag flag for the GPU agent nodes on your new node pool to use the AKS specialized GPU image.
-
-```azurecli
-az aks nodepool add --name gpu --cluster-name myAKSCluster --resource-group myResourceGroup --node-vm-size Standard_NC6 --node-count 1 --aks-custom-headers UseGPUDedicatedVHD=true
-```
-
-If you want to create a node pool using the regular AKS images, you can do so by omitting the custom `--aks-custom-headers` tag.
-
-> [!NOTE]
-> If your GPU sku requires generation 2 virtual machines, you can create doing:
-> ```azurecli
-> az aks nodepool add --name gpu --cluster-name myAKSCluster --resource-group myResourceGroup --node-vm-size Standard_NC6s_v2 --node-count 1 --aks-custom-headers UseGPUDedicatedVHD=true,usegen2vm=true
-> ```
- ## Confirm that GPUs are schedulable With your AKS cluster created, confirm that GPUs are schedulable in Kubernetes. First, list the nodes in your cluster using the [kubectl get nodes][kubectl-get] command:
With your AKS cluster created, confirm that GPUs are schedulable in Kubernetes.
```console
$ kubectl get nodes
-NAME STATUS ROLES AGE VERSION
-aks-nodepool1-28993262-0 Ready agent 13m v1.12.7
+NAME STATUS ROLES AGE VERSION
+aks-gpunp-28993262-0 Ready agent 13m v1.20.7
```

Now use the [kubectl describe node][kubectl-describe] command to confirm that the GPUs are schedulable. Under the *Capacity* section, the GPU should list as `nvidia.com/gpu: 1`.
The following condensed example shows that a GPU is available on the node named *aks-gpunp-28993262-0*:

```console
-$ kubectl describe node aks-nodepool1-28993262-0
+$ kubectl describe node aks-gpunp-28993262-0
-Name: aks-nodepool1-28993262-0
+Name: aks-gpunp-28993262-0
Roles: agent Labels: accelerator=nvidia [...] Capacity:
- attachable-volumes-azure-disk: 24
- cpu: 6
- ephemeral-storage: 101584140Ki
- hugepages-1Gi: 0
- hugepages-2Mi: 0
- memory: 57713784Ki
- nvidia.com/gpu: 1
- pods: 110
-Allocatable:
- attachable-volumes-azure-disk: 24
- cpu: 5916m
- ephemeral-storage: 93619943269
- hugepages-1Gi: 0
- hugepages-2Mi: 0
- memory: 51702904Ki
+[...]
nvidia.com/gpu: 1
- pods: 110
-System Info:
- Machine ID: b0cd6fb49ffe4900b56ac8df2eaa0376
- System UUID: 486A1C08-C459-6F43-AD6B-E9CD0F8AEC17
- Boot ID: f134525f-385d-4b4e-89b8-989f3abb490b
- Kernel Version: 4.15.0-1040-azure
- OS Image: Ubuntu 16.04.6 LTS
- Operating System: linux
- Architecture: amd64
- Container Runtime Version: docker://1.13.1
- Kubelet Version: v1.12.7
- Kube-Proxy Version: v1.12.7
-PodCIDR: 10.244.0.0/24
-ProviderID: azure:///subscriptions/<guid>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/virtualMachines/aks-nodepool1-28993262-0
-Non-terminated Pods: (9 in total)
- Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
- - - -
- kube-system nvidia-device-plugin-daemonset-bbjlq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m39s
- [...] ```
spec:
limits: nvidia.com/gpu: 1 restartPolicy: OnFailure
+ tolerations:
+ - key: "sku"
+ operator: "Equal"
+ value: "gpu"
+ effect: "NoSchedule"
```

Use the [kubectl apply][kubectl-apply] command to run the job. This command parses the manifest file and creates the defined Kubernetes objects:
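+
+For example, if the job manifest above were saved as *gpu-demo-job.yaml* (a placeholder file name), the job could be started and then watched until it completes:
+
+```console
+kubectl apply -f gpu-demo-job.yaml
+kubectl get jobs --watch
+```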
Accuracy at step 490: 0.9494
Adding run metadata for 499 ```
+## Use Container Insights to monitor GPU usage
+
+The following metrics are available for [Container Insights with AKS][aks-container-insights] to monitor GPU usage.
+
+| Metric name | Metric dimension (tags) | Description |
+|-|-|-|
+| containerGpuDutyCycle | `container.azm.ms/clusterId`, `container.azm.ms/clusterName`, `containerName`, `gpuId`, `gpuModel`, `gpuVendor` | Percentage of time over the past sample period (60 seconds) during which GPU was busy/actively processing for a container. Duty cycle is a number between 1 and 100. |
+| containerGpuLimits | `container.azm.ms/clusterId`, `container.azm.ms/clusterName`, `containerName` | Each container can specify limits as one or more GPUs. It is not possible to request or limit a fraction of a GPU. |
+| containerGpuRequests | `container.azm.ms/clusterId`, `container.azm.ms/clusterName`, `containerName` | Each container can request one or more GPUs. It is not possible to request or limit a fraction of a GPU. |
+| containerGpumemoryTotalBytes | `container.azm.ms/clusterId`, `container.azm.ms/clusterName`, `containerName`, `gpuId`, `gpuModel`, `gpuVendor` | Amount of GPU Memory in bytes available to use for a specific container. |
+| containerGpumemoryUsedBytes | `container.azm.ms/clusterId`, `container.azm.ms/clusterName`, `containerName`, `gpuId`, `gpuModel`, `gpuVendor` | Amount of GPU Memory in bytes used by a specific container. |
+| nodeGpuAllocatable | `container.azm.ms/clusterId`, `container.azm.ms/clusterName`, `gpuVendor` | Number of GPUs in a node that can be used by Kubernetes. |
+| nodeGpuCapacity | `container.azm.ms/clusterId`, `container.azm.ms/clusterName`, `gpuVendor` | Total Number of GPUs in a node. |
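+
+As an example, the duty-cycle metric can be pulled from the **InsightsMetrics** table in the cluster's Log Analytics workspace (a minimal sketch using the Azure CLI *log-analytics* extension; the workspace GUID is a placeholder):
+
+```azurecli
+az monitor log-analytics query \
+    --workspace 00000000-0000-0000-0000-000000000000 \
+    --analytics-query "InsightsMetrics | where Name == 'containerGpuDutyCycle' | summarize avg(Val) by bin(TimeGenerated, 5m), Tags" \
+    --timespan PT1H
+```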
+ ## Clean up resources To remove the associated Kubernetes objects created in this article, use the [kubectl delete job][kubectl delete] command as follows:
For information on using Azure Kubernetes Service with Azure Machine Learning, s
[az-group-create]: /cli/azure/group#az_group_create [az-aks-create]: /cli/azure/aks#az_aks_create [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[aks-quickstart]: kubernetes-walkthrough.md
[aks-spark]: spark-job.md [gpu-skus]: ../virtual-machines/sizes-gpu.md [install-azure-cli]: /cli/azure/install-azure-cli [azureml-aks]: ../machine-learning/how-to-deploy-azure-kubernetes-service.md [azureml-gpu]: ../machine-learning/how-to-deploy-inferencing-gpus.md
-[azureml-triton]: ../machine-learning/how-to-deploy-with-triton.md
+[azureml-triton]: ../machine-learning/how-to-deploy-with-triton.md
+[aks-container-insights]: monitor-aks.md#container-insights
aks Kubelet Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubelet-logs.md
First, create an SSH connection with the node on which you need to view *kubelet
## Get kubelet logs
-Once you have connected to the node, run the following command to pull the *kubelet* logs:
+Once you have connected to the node via `kubectl debug`, run the following command to pull the *kubelet* logs:
```console
-sudo journalctl -u kubelet -o cat
+chroot /host
+journalctl -u kubelet -o cat
```
+> [!NOTE]
+> You don't need to use `sudo journalctl` since you are already `root` on the node.
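+
+For reference, the `kubectl debug` session mentioned above is typically opened against one of your Linux nodes like this (the node name and the *ubuntu* debug image are placeholders):
+
+```console
+kubectl get nodes
+kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=ubuntu
+```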
> [!NOTE] > For Windows nodes, the log data is in `C:\k` and can be viewed using the *more* command:
If you need additional troubleshooting information from the Kubernetes master, s
[aks-quickstart-cli]: kubernetes-walkthrough.md [aks-quickstart-portal]: kubernetes-walkthrough-portal.md [aks-master-logs]: monitor-aks-reference.md#resource-logs
-[azure-container-logs]: ../azure-monitor/containers/container-insights-overview.md
+[azure-container-logs]: ../azure-monitor/containers/container-insights-overview.md
aks Operator Best Practices Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-identity.md
To access other Azure services, like Cosmos DB, Key Vault, or Blob Storage, the
With pod-managed identities for Azure resources, you automatically request access to services through Azure AD. Pod-managed identities is now currently in preview for AKS. Please refer to the [Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview)](./use-azure-ad-pod-identity.md) documentation to get started.
+Azure Active Directory Pod Identity supports two modes of operation:
+
+1. Standard Mode: In this mode, the following two components are deployed to the AKS cluster:
+ * [Managed Identity Controller (MIC)](https://azure.github.io/aad-pod-identity/docs/concepts/mic/): A Kubernetes controller that watches for changes to pods, [AzureIdentity](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentity/), and [AzureIdentityBinding](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentitybinding/) through the Kubernetes API server. When it detects a relevant change, the MIC adds or deletes [AzureAssignedIdentity](https://azure.github.io/aad-pod-identity/docs/concepts/azureassignedidentity/) as needed. Specifically, when a pod is scheduled, the MIC assigns the managed identity on Azure to the underlying VMSS used by the node pool during the creation phase. When all pods using the identity are deleted, it removes the identity from the VMSS of the node pool, unless the same managed identity is used by other pods. The MIC takes similar actions when AzureIdentity or AzureIdentityBinding are created or deleted.
+ * [Node Managed Identity (NMI)](https://azure.github.io/aad-pod-identity/docs/concepts/nmi/): A pod that runs as a DaemonSet on each node in the AKS cluster. NMI intercepts security token requests to the [Azure Instance Metadata Service](/azure/virtual-machines/linux/instance-metadata-service?tabs=linux) on each node, redirects them to itself, validates that the pod has access to the identity it's requesting a token for, and fetches the token from the Azure Active Directory tenant on behalf of the application.
+2. Managed Mode: In this mode, there is only NMI. The identity needs to be manually assigned and managed by the user. For more information, see [Pod Identity in Managed Mode](https://azure.github.io/aad-pod-identity/docs/configure/pod_identity_in_managed_mode/). In this mode, when you use the [az aks pod-identity add](/cli/azure/aks/pod-identity?view=azure-cli-latest#az_aks_pod_identity_add) command to add a pod identity to an Azure Kubernetes Service (AKS) cluster, it creates the [AzureIdentity](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentity/) and [AzureIdentityBinding](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentitybinding/) in the namespace specified by the `--namespace` parameter, while the AKS resource provider assigns the managed identity specified by the `--identity-resource-id` parameter to the virtual machine scale set (VMSS) of each node pool in the AKS cluster (see the example below).
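+
+For illustration, a pod identity could be added to a cluster running in `managed` mode roughly as follows (a minimal sketch; the resource group, cluster, namespace, identity name, and identity resource ID are placeholders):
+
+```azurecli
+az aks pod-identity add \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --namespace my-app \
+    --name my-pod-identity \
+    --identity-resource-id /subscriptions/<subscription-id>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity
+```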
+
+> [!NOTE]
+> If you instead decide to install the Azure Active Directory Pod Identity using the [AKS cluster add-on](/azure/aks/use-azure-ad-pod-identity), the setup will use the `managed` mode.
+
+The `managed` mode provides the following advantages over the `standard` mode:
+
+1. Identity assignment on the VMSS of a node pool can take 40-60 seconds. For cron jobs or applications that require access to the identity and can't tolerate the assignment delay, it's best to use `managed` mode because the identity is pre-assigned to the VMSS of the node pool, either manually or via the [az aks pod-identity add](/cli/azure/aks/pod-identity?view=azure-cli-latest#az_aks_pod_identity_add) command.
+2. In `standard` mode, MIC requires write permissions on the VMSS used by the AKS cluster and the `Managed Identity Operator` permission on the user-assigned managed identities. When running in `managed` mode, there is no MIC, so these role assignments are not required.
+ Instead of manually defining credentials for pods, pod-managed identities request an access token in real time, using it to access only their assigned services. In AKS, there are two components that handle the operations to allow pods to use managed identities: * **The Node Management Identity (NMI) server** is a pod that runs as a DaemonSet on each node in the AKS cluster. The NMI server listens for pod requests to Azure services. * **The Azure Resource Provider** queries the Kubernetes API server and checks for an Azure identity mapping that corresponds to a pod.
-When pods request access to an Azure service, network rules redirect the traffic to the NMI server.
+When pods request a security token from Azure Active Directory to access an Azure service, network rules redirect the traffic to the NMI server.
1. The NMI server: * Identifies pods requesting access to Azure services based on their remote address. * Queries the Azure Resource Provider.
For more information about cluster operations in AKS, see the following best pra
[aks-best-practices-scheduler]: operator-best-practices-scheduler.md [aks-best-practices-advanced-scheduler]: operator-best-practices-advanced-scheduler.md [aks-best-practices-cluster-isolation]: operator-best-practices-cluster-isolation.md
-[azure-ad-rbac]: azure-ad-rbac.md
+[azure-ad-rbac]: azure-ad-rbac.md
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
The Public DNS option can be leveraged to simplify routing options for your Priv
![Public DNS](https://user-images.githubusercontent.com/50749048/124776520-82629600-df0d-11eb-8f6b-71c473b6bd01.png)
-1. By specifying `--enable-public-fqdn` when you provision a private cluster, you create an additional A record for the new FQDN in the AKS public DNS zone. The agentnode still uses the A record in the private zone to resolve the IP address of the private endpoint for communication to the API server.
+1. By specifying `--enable-public-fqdn` when you provision a private AKS cluster, AKS creates an additional A record for its FQDN in Azure public DNS. The agent nodes still use the A record in the private DNS zone to resolve the private IP address of the private endpoint for communication to the API server.
-2. If you use both `--enable-public-fqdn` and `--private-dns-zone none`, the cluster public FQDN and private FQDN have the same value. The value is in the AKS public DNS zone `hcp.{REGION}.azmk8s.io`. It's a breaking change for the private DNS zone mode cluster.
+2. If you use both `--enable-public-fqdn` and `--private-dns-zone none`, the cluster will only have a public FQDN. With this option, no private DNS zone is created or used to resolve the FQDN of the API server. The API server's IP address is still private and not publicly routable (see the example below).
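+
+For illustration, a private cluster with a public FQDN could be created as follows (a minimal sketch; the names are placeholders, and `--enable-public-fqdn` requires the *aks-preview* extension while the feature is in preview):
+
+```azurecli
+az aks create \
+    --resource-group myResourceGroup \
+    --name myPrivateAKSCluster \
+    --enable-private-cluster \
+    --enable-public-fqdn
+```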
### Register the `EnablePrivateClusterPublicFQDN` preview feature
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-azure-ad-pod-identity.md
Create an AKS cluster with Azure CNI and pod-managed identity enabled. The follo
az group create --name myResourceGroup --location eastus az aks create -g myResourceGroup -n myAKSCluster --enable-pod-identity --network-plugin azure ```
+> [!NOTE]
+> Azure Active Directory Pod Identity supports two modes of operation:
+>
+> 1. Standard Mode: In this mode, the following two components are deployed to the AKS cluster:
+> * [Managed Identity Controller (MIC)](https://azure.github.io/aad-pod-identity/docs/concepts/mic/): A Kubernetes controller that watches for changes to pods, [AzureIdentity](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentity/), and [AzureIdentityBinding](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentitybinding/) through the Kubernetes API server. When it detects a relevant change, the MIC adds or deletes [AzureAssignedIdentity](https://azure.github.io/aad-pod-identity/docs/concepts/azureassignedidentity/) as needed. Specifically, when a pod is scheduled, the MIC assigns the managed identity on Azure to the underlying VMSS used by the node pool during the creation phase. When all pods using the identity are deleted, it removes the identity from the VMSS of the node pool, unless the same managed identity is used by other pods. The MIC takes similar actions when AzureIdentity or AzureIdentityBinding are created or deleted.
+> * [Node Managed Identity (NMI)](https://azure.github.io/aad-pod-identity/docs/concepts/nmi/): A pod that runs as a DaemonSet on each node in the AKS cluster. NMI intercepts security token requests to the [Azure Instance Metadata Service](/azure/virtual-machines/linux/instance-metadata-service?tabs=linux) on each node, redirects them to itself, validates that the pod has access to the identity it's requesting a token for, and fetches the token from the Azure Active Directory tenant on behalf of the application.
+> 2. Managed Mode: In this mode, there is only NMI. The identity needs to be manually assigned and managed by the user. For more information, see [Pod Identity in Managed Mode](https://azure.github.io/aad-pod-identity/docs/configure/pod_identity_in_managed_mode/).
+>
+> When you install Azure Active Directory Pod Identity via the Helm chart or YAML manifest as shown in the [Installation Guide](https://azure.github.io/aad-pod-identity/docs/getting-started/installation/), you can choose between the `standard` and `managed` modes. If you instead decide to install Azure Active Directory Pod Identity using the [AKS cluster add-on](/azure/aks/use-azure-ad-pod-identity) as shown in this article, the setup will use the `managed` mode.
Use [az aks get-credentials][az-aks-get-credentials] to sign in to your AKS cluster. This command also downloads and configures the `kubectl` client certificate on your development computer.
aks Windows Container Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/windows-container-cli.md
Title: Create a Windows Server container on an AKS cluster by using Azure CLI
description: Learn how to quickly create a Kubernetes cluster, deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using the Azure CLI. Previously updated : 07/16/2020 Last updated : 08/06/2021 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
az aks create \
--generate-ssh-keys \ --windows-admin-username $WINDOWS_USERNAME \ --vm-set-type VirtualMachineScaleSets \
- --kubernetes-version 1.20.2 \
+ --kubernetes-version 1.20.7 \
--network-plugin azure ```
az aks nodepool add \
The above command creates a new node pool named *npwin* and adds it to the *myAKSCluster*. The above command also uses the default subnet in the default vnet created when running `az aks create`.
-### Add a Windows Server node pool with `containerd` (preview)
+## Optional: Using `containerd` with Windows Server node pools (preview)
Beginning in Kubernetes version 1.20 and greater, you can specify `containerd` as the container runtime for Windows Server 2019 node pools. [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
-You will need the *aks-preview* Azure CLI extension. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+You will need the *aks-preview* Azure CLI extension version 0.5.24 or greater. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
```azurecli-interactive
# Install the aks-preview extension
az extension add --name aks-preview
az extension update --name aks-preview
```
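+
+To verify that the installed extension meets the 0.5.24 minimum, you can check its version (a quick check, not part of the original steps):
+
+```azurecli-interactive
+az extension show --name aks-preview --query version --output tsv
+```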
+> [!IMPORTANT]
+> When using `containerd` with Windows Server 2019 node pools:
+> - Both the control plane and Windows Server 2019 node pools must use Kubernetes version 1.20 or greater.
+> - When creating or updating a node pool to run Windows Server containers, the default value for *node-vm-size* is *Standard_D2s_v3*, which was the minimum recommended size for Windows Server 2019 node pools prior to Kubernetes 1.20. The minimum recommended size for Windows Server 2019 node pools using `containerd` is *Standard_D4s_v3*. When setting the *node-vm-size* parameter, check the list of [restricted VM sizes][restricted-vm-sizes].
+> - It is highly recommended that you use [taints or labels][aks-taints] with your Windows Server 2019 node pools running `containerd` and tolerations or node selectors with your deployments to guarantee your workloads are scheduled correctly.
+ Register the `UseCustomizedWindowsContainerRuntime` feature flag using the [az feature register][az-feature-register] command as shown in the following example: ```azurecli
When ready, refresh the registration of the Microsoft.ContainerService resource
az provider register --namespace Microsoft.ContainerService ```
-Use `az aks nodepool add` command to add an additional node pool that can run Windows Server containers with the `containerd` runtime.
+### Add a Windows Server node pool with `containerd` (preview)
+
+Use the `az aks nodepool add` command to add an additional node pool that can run Windows Server containers with the `containerd` runtime.
> [!NOTE] > If you do not specify the *WindowsContainerRuntime=containerd* custom header, the node pool will use Docker as the container runtime.
az aks nodepool add \
--os-type Windows \ --name npwcd \ --node-vm-size Standard_D4s_v3 \
- --kubernetes-version 1.20.2 \
+ --kubernetes-version 1.20.5 \
--aks-custom-headers WindowsContainerRuntime=containerd \ --node-count 1 ``` The above command creates a new Windows Server node pool using `containerd` as the runtime named *npwcd* and adds it to the *myAKSCluster*. The above command also uses the default subnet in the default vnet created when running `az aks create`.
-> [!IMPORTANT]
-> When using `containerd` with Windows Server 2019 node pools:
-> - Both the control plane and Windows Server 2019 node pools must use Kubernetes version 1.20 or greater.
-> - Existing Windows Server 2019 node pools using Docker as the container runtime can't be upgraded to use `containerd`. You must create a new node pool.
-> - When creating a node pool to run Windows Server containers, the default value for *node-vm-size* is *Standard_D2s_v3* which was minimum recommended size for Windows Server 2019 node pools prior to Kubernetes 1.20. The minimum recommended size for Windows Server 2019 node pools using `containerd` is *Standard_D4s_v3*. When setting the *node-vm-size* parameter, please check the list of [restricted VM sizes][restricted-vm-sizes].
-> - It is highly recommended that you use [taints or labels][aks-taints] with your Windows Server 2019 node pools running `containerd` and tolerations or node selectors with your deployments to guarantee your workloads are scheduled correctly.
+### Upgrade an existing Windows Server node pool to `containerd` (preview)
+
+Use the `az aks nodepool upgrade` command to upgrade a specific node pool from Docker to `containerd`.
+
+```azurecli
+az aks nodepool upgrade \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name npwd \
+ --kubernetes-version 1.20.7 \
+ --aks-custom-headers WindowsContainerRuntime=containerd
+```
+
+The above command upgrades a node pool named *npwd* to the `containerd` runtime.
+
+To upgrade all existing Windows Server node pools in a cluster to use the `containerd` runtime:
+
+```azurecli
+az aks upgrade \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --kubernetes-version 1.20.7 \
+ --aks-custom-headers WindowsContainerRuntime=containerd
+```
+
+The above command upgrades all Windows Server node pools in the *myAKSCluster* to use the `containerd` runtime.
+
+> [!NOTE]
+> After upgrading all existing Windows Server node pools to use the `containerd` runtime, Docker will still be the default runtime when adding new Windows Server node pools.
## Connect to the cluster
The following example output shows the all the nodes in the cluster. Make sure t
```output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
-aks-nodepool1-12345678-vmss000000 Ready agent 34m v1.20.2 10.240.0.4 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
-aks-nodepool1-12345678-vmss000001 Ready agent 34m v1.20.2 10.240.0.35 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
-aksnpwcd123456 Ready agent 9m6s v1.20.2 10.240.0.97 <none> Windows Server 2019 Datacenter 10.0.17763.1879 containerd://1.4.4+unknown
-aksnpwin987654 Ready agent 25m v1.20.2 10.240.0.66 <none> Windows Server 2019 Datacenter 10.0.17763.1879 docker://19.3.14
+aks-nodepool1-12345678-vmss000000 Ready agent 34m v1.20.7 10.240.0.4 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
+aks-nodepool1-12345678-vmss000001 Ready agent 34m v1.20.7 10.240.0.35 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
+aksnpwcd123456 Ready agent 9m6s v1.20.7 10.240.0.97 <none> Windows Server 2019 Datacenter 10.0.17763.1879 containerd://1.4.4+unknown
+aksnpwin987654 Ready agent 25m v1.20.7 10.240.0.66 <none> Windows Server 2019 Datacenter 10.0.17763.1879 docker://19.3.14
``` > [!NOTE]
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-java.md
You can use a startup script to perform actions before a web app starts. The sta
3. Make the required configuration changes. 4. Indicate that configuration was successfully completed.
+For Windows sites, create a file named `startup.cmd` or `startup.ps1` in the `wwwroot` directory. This will automatically be executed before the Tomcat server starts.
+ Here's a PowerShell script that completes these steps: ```powershell
Product support for the [Azure-supported Azul Zulu JDK](https://www.azul.com/dow
Visit the [Azure for Java Developers](/java/azure/) center to find Azure quickstarts, tutorials, and Java reference documentation. - [App Service Linux FAQ](faq-app-service-linux.yml)-- [Environment variables and app settings reference](reference-app-settings.md)
+- [Environment variables and app settings reference](reference-app-settings.md)
app-service Deploy Staging Slots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-staging-slots.md
When you swap two slots (usually from a staging slot into the production slot),
At any point of the swap operation, all work of initializing the swapped apps happens on the source slot. The target slot remains online while the source slot is being prepared and warmed up, regardless of whether the swap succeeds or fails. To swap a staging slot with the production slot, make sure that the production slot is always the target slot. This way, the swap operation doesn't affect your production app.
+> [!NOTE]
+> The instances in your former production slot (those that will be swapped into staging after this swap operation) are recycled quickly in the last step of the swap process. If your application has any long-running operations, they will be abandoned when the workers recycle. This also applies to function apps. Therefore, your application code should be written in a fault-tolerant way.
+ ### Which settings are swapped? [!INCLUDE [app-service-deployment-slots-settings](../../includes/app-service-deployment-slots-settings.md)]
app-service Manage Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/manage-backup.md
The following database solutions are supported with backup feature:
* Backups of TLS-enabled Azure Database for PostgreSQL are not supported. If a backup is configured, you will encounter backup failures. * In-app MySQL databases are automatically backed up without any configuration. If you manually configure settings for in-app MySQL databases, such as adding connection strings, the backups may not work correctly. * Using a firewall-enabled storage account as the destination for your backups is not supported. If a backup is configured, you will encounter backup failures.
-* Currently, you can't use the Backup and Restore feature with the Azure App Service VNet Integration feature.
* Currently, you can't use the Backup and Restore feature with Azure storage accounts that are configured to use Private Endpoint. <a name="manualbackup"></a>
automation Automation Child Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-child-runbooks.md
The following example starts a child runbook with parameters and then waits for
Disable-AzContextAutosave -Scope Process # Connect to Azure with Run As account
-$ServicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'
+$ServicePrincipalConnection = Get-AzAutomationConnection -Name 'AzureRunAsConnection'
Connect-AzAccount ` -ServicePrincipal `
azure-arc Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/managed-instance-overview.md
Azure Arc-enabled SQL Managed Instance is an Azure SQL data service that can be
Azure Arc-enabled SQL Managed Instance has near 100% compatibility with the latest SQL Server database engine, and enables existing SQL Server customers to lift and shift their applications to Azure Arc data services with minimal application and database changes while maintaining data sovereignty. At the same time, SQL Managed Instance includes built-in management capabilities that drastically reduce management overhead.
-To learn more about these capabilities, you can also refer to this Data Exposed episode.
-> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/What-is-Azure-Arc-Enabled-SQL-Managed-Instance--Data-Exposed/player?format=ny]
+To learn more about these capabilities, watch these introductory videos.
+
+### Azure Arc-enabled SQL Managed Instance - indirectly connected mode
+
+> [!VIDEO https://channel9.msdn.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-disconnected-mode/player?format=ny]
+
+### Azure Arc-enabled SQL Managed Instance - directly connected mode
+
+> [!VIDEO https://channel9.msdn.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-connected-mode/player?format=ny]
## Next steps
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/overview.md
Currently, the following Azure Arc-enabled data services are available:
- SQL Managed Instance - PostgreSQL Hyperscale (preview)
+For an introduction to how Azure Arc-enabled data services support your hybrid work environment, see this video:
+
+> [!VIDEO https://channel9.msdn.com/Shows//Inside-Azure-for-IT/Choose-the-right-data-solution-for-your-hybrid-environment/player?format=ny]
## Always current
azure-functions Create First Function Arc Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-arc-cli.md
In Azure Functions, a function project is the unit of deployment and execution f
cd LocalFunctionProj ```
- This folder contains various files for the project, including configurations files named [local.settings.json](functions-run-local.md#local-settings-file) and [host.json](functions-host-json.md). By default, the *local.settings.json* file is excluded from source control in the *.gitignore* file. This exclusion is because the file can contain secrets that are downloaded from Azure.
+ This folder contains various files for the project, including configuration files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). By default, the *local.settings.json* file is excluded from source control in the *.gitignore* file. This exclusion is because the file can contain secrets that are downloaded from Azure.
1. Add a function to your project by using the following command, where the `--name` argument is the unique name of your function (HttpExample) and the `--template` argument specifies the function's trigger (HTTP).
azure-functions Create First Function Arc Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-arc-custom-container.md
In Azure Functions, a function project is the context for one or more individual
cd LocalFunctionProj ```
- This folder contains the Dockerfile other files for the project, including configurations files named [local.settings.json](functions-run-local.md#local-settings-file) and [host.json](functions-host-json.md). By default, the *local.settings.json* file is excluded from source control in the *.gitignore* file. This exclusion is because the file can contain secrets that are downloaded from Azure.
+ This folder contains the Dockerfile and other files for the project, including configuration files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). By default, the *local.settings.json* file is excluded from source control in the *.gitignore* file. This exclusion is because the file can contain secrets that are downloaded from Azure.
1. Open the generated `Dockerfile` and locate the `3.0` tag for the base image. If there's a `3.0` tag, replace it with a `3.0.15885` tag. For example, in a JavaScript application, the Docker file should be modified to have `FROM mcr.microsoft.com/azure-functions/node:3.0.15885`. This version of the base image supports deployment to an Azure Arc-enabled Kubernetes cluster.
azure-functions Create First Function Cli Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-csharp.md
In Azure Functions, a function project is a container for one or more individual
cd LocalFunctionProj ```
- This folder contains various files for the project, including configurations files named [local.settings.json](functions-run-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
+ This folder contains various files for the project, including configuration files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
1. Add a function to your project by using the following command, where the `--name` argument is the unique name of your function (HttpExample) and the `--template` argument specifies the function's trigger (HTTP).
azure-functions Create First Function Cli Java Uiex https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-java-uiex.md
In Azure Functions, a function project is a container for one or more individual
<summary><strong>What's created in the LocalFunctionProj folder?</strong></summary> This folder contains various files for the project, such as *Function.java*, *FunctionTest.java*, and *pom.xml*. There are also configuration files named
-[local.settings.json](functions-run-local.md#local-settings-file) and
+[local.settings.json](functions-develop-local.md#local-settings-file) and
[host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
azure-functions Create First Function Cli Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-java.md
In Azure Functions, a function project is a container for one or more individual
cd fabrikam-functions ```
- This folder contains various files for the project, including configurations files named [local.settings.json](functions-run-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
+ This folder contains various files for the project, including configuration files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
### (Optional) Examine the file contents
azure-functions Create First Function Cli Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-node.md
In Azure Functions, a function project is a container for one or more individual
cd LocalFunctionProj ```
- This folder contains various files for the project, including configurations files named [local.settings.json](functions-run-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
+ This folder contains various files for the project, including configuration files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
1. Add a function to your project by using the following command, where the `--name` argument is the unique name of your function (HttpExample) and the `--template` argument specifies the function's trigger (HTTP).
azure-functions Create First Function Cli Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-powershell.md
In Azure Functions, a function project is a container for one or more individual
cd LocalFunctionProj ```
- This folder contains various files for the project, including configurations files named [local.settings.json](functions-run-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
+ This folder contains various files for the project, including configuration files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
1. Add a function to your project by using the following command, where the `--name` argument is the unique name of your function (HttpExample) and the `--template` argument specifies the function's trigger (HTTP).
azure-functions Create First Function Cli Python Uiex https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-python-uiex.md
In this section, you create a local <abbr title="A logical container for one or
<details> <summary><strong>What's created in the LocalFunctionProj folder?</strong></summary>
- This folder contains various files for the project, including configurations files named [local.settings.json](functions-run-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
+ This folder contains various files for the project, including configuration files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
</details> 1. Add a function to your project by using the following command:
azure-functions Create First Function Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-python.md
In Azure Functions, a function project is a container for one or more individual
cd LocalFunctionProj ```
- This folder contains various files for the project, including configurations files named [local.settings.json](functions-run-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
+ This folder contains various files for the project, including configuration files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
1. Add a function to your project by using the following command, where the `--name` argument is the unique name of your function (HttpExample) and the `--template` argument specifies the function's trigger (HTTP).
azure-functions Create First Function Cli Typescript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-typescript.md
In Azure Functions, a function project is a container for one or more individual
cd LocalFunctionProj ```
- This folder contains various files for the project, including configurations files named [local.settings.json](functions-run-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
+ This folder contains various files for the project, including configuration files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
1. Add a function to your project by using the following command, where the `--name` argument is the unique name of your function (HttpExample) and the `--template` argument specifies the function's trigger (HTTP).
azure-functions Dotnet Isolated Process Developer Howtos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-developer-howtos.md
In Azure Functions, a function project is a container for one or more individual
cd LocalFunctionProj ```
- This folder contains various files for the project, including the [local.settings.json](functions-run-local.md#local-settings-file) and [host.json](functions-host-json.md) configurations files. Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
+ This folder contains various files for the project, including the [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md) configuration files. Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
1. Add a function to your project by using the following command, where the `--name` argument is the unique name of your function (HttpExample) and the `--template` argument specifies the function's trigger (HTTP).
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-guide.md
When running out-of-process, your .NET functions can take advantage of the follo
A .NET isolated function project is basically a .NET console app project that targets .NET 5.0. The following are the basic files required in any .NET isolated project: + [host.json](functions-host-json.md) file.
-+ [local.settings.json](functions-run-local.md#local-settings-file) file.
++ [local.settings.json](functions-develop-local.md#local-settings-file) file. + C# project file (.csproj) that defines the project and dependencies. + Program.cs file that's the entry point for the app.
azure-functions Durable Functions Create First Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-create-first-csharp.md
In this section, you use Visual Studio Code to create a local Azure Functions pr
| Select a template for your project's first function | Skip for now | | | Select how you would like to open your project | Open in current window | Reopens VS Code in the folder you selected. |
-Visual Studio Code installs the Azure Functions Core Tools, if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-run-local.md#local-settings-file) configuration files.
+Visual Studio Code installs the Azure Functions Core Tools, if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files.
## Add functions to the app
azure-functions Durable Functions Instance Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-instance-management.md
async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
### Azure Functions Core Tools
-You can also start an instance directly by using the [Azure Functions Core Tools](../functions-run-local.md) `durable start-new` command. It takes the following parameters:
+You can also start an instance directly by using the [`func durable start-new` command](../functions-core-tools-reference.md#func-durable-start-new) in Core Tools, which takes the following parameters:
* **`function-name` (required)**: Name of the function to start.
* **`input` (optional)**: Input to the function, either inline or through a JSON file. For files, add a prefix to the path to the file with `@`, such as `@path/to/file.json`.
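+
+For example, an orchestration named *HelloOrchestrator* (a placeholder) could be started with an inline JSON input like this:
+
+```console
+func durable start-new --function-name HelloOrchestrator --input '{"city": "Tokyo"}'
+```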
async def main(req: func.HttpRequest, starter: str, instance_id: str) -> func.Ht
### Azure Functions Core Tools
-It's also possible to get the status of an orchestration instance directly by using the [Azure Functions Core Tools](../functions-run-local.md) `durable get-runtime-status` command.
+It's also possible to get the status of an orchestration instance directly by using the [`func durable get-runtime-status` command](../functions-core-tools-reference.md#func-durable-get-runtime-status) in Core Tools.
> [!NOTE]
-> The Core Tools commands are currently only supported when using the default [Azure Storage provider](durable-functions-storage-providers.md) for persisting runtime state.
+> Core Tools commands are currently only supported when using the default [Azure Storage provider](durable-functions-storage-providers.md) for persisting runtime state.
The `durable get-runtime-status` command takes the following parameters:
See [Start instances](#javascript-function-json) for the function.json configura
### Azure Functions Core Tools
-It's also possible to query instances directly, by using the [Azure Functions Core Tools](../functions-run-local.md) `durable get-instances` command.
+It's also possible to query instances directly, by using the [`func durable get-instances` command](../functions-core-tools-reference.md#func-durable-get-instances) in Core Tools.
> [!NOTE] > The Core Tools commands are currently only supported when using the default [Azure Storage provider](durable-functions-storage-providers.md) for persisting runtime state.
A terminated instance will eventually transition into the `Terminated` state. Ho
### Azure Functions Core Tools
-You can also terminate an orchestration instance directly, by using the [Azure Functions Core Tools](../functions-run-local.md) `durable terminate` command.
+You can also terminate an orchestration instance directly, by using the [`func durable terminate` command](../functions-core-tools-reference.md#func-durable-terminate) in Core Tools.
> [!NOTE] > The Core Tools commands are currently only supported when using the default [Azure Storage provider](durable-functions-storage-providers.md) for persisting runtime state.
async def main(req: func.HttpRequest, starter: str, instance_id: str) -> func.Ht
### Azure Functions Core Tools
-You can also raise an event to an orchestration instance directly, by using the [Azure Functions Core Tools](../functions-run-local.md) `durable raise-event` command.
+You can also raise an event to an orchestration instance directly, by using the [`func durable raise-event` command](../functions-core-tools-reference.md#func-durable-raise-event) in Core Tools.
> [!NOTE] > The Core Tools commands are currently only supported when using the default [Azure Storage provider](durable-functions-storage-providers.md) for persisting runtime state.
async def main(req: func.HttpRequest, starter: str, instance_id: str) -> func.Ht
### Azure Functions Core Tools
-You can also rewind an orchestration instance directly by using the [Azure Functions Core Tools](../functions-run-local.md) `durable rewind` command.
+You can also rewind an orchestration instance directly by using the [`func durable rewind` command](../functions-core-tools-reference.md#func-durable-rewind) in Core Tools.
> [!NOTE]
> The Core Tools commands are currently only supported when using the default [Azure Storage provider](durable-functions-storage-providers.md) for persisting runtime state.
async def main(req: func.HttpRequest, starter: str, instance_id: str) -> func.Ht
### Azure Functions Core Tools
-You can purge an orchestration instance's history by using the [Azure Functions Core Tools](../functions-run-local.md) `durable purge-history` command. Similar to the second C# example in the preceding section, it purges the history for all orchestration instances created during a specified time interval. You can further filter purged instances by runtime status.
+You can purge an orchestration instance's history by using the [`func durable purge-history` command](../functions-core-tools-reference.md#func-durable-purge-history) in Core Tools. Similar to the second C# example in the preceding section, it purges the history for all orchestration instances created during a specified time interval. You can further filter purged instances by runtime status.
> [!NOTE]
> The Core Tools commands are currently only supported when using the default [Azure Storage provider](durable-functions-storage-providers.md) for persisting runtime state.
func durable purge-history --created-before 2021-11-14T19:35:00.0000000Z --runti
## Delete a task hub
-Using the [Azure Functions Core Tools](../functions-run-local.md) `durable delete-task-hub` command, you can delete all storage artifacts associated with a particular task hub, including Azure storage tables, queues, and blobs.
+Using the [`func durable delete-task-hub` command](../functions-core-tools-reference.md#func-durable-delete-task-hub) in Core Tools, you can delete all storage artifacts associated with a particular task hub, including Azure storage tables, queues, and blobs.
> [!NOTE]
> The Core Tools commands are currently only supported when using the default [Azure Storage provider](durable-functions-storage-providers.md) for persisting runtime state.
azure-functions Quickstart Js Vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/quickstart-js-vscode.md
In this section, you use Visual Studio Code to create a local Azure Functions pr
| Select a template for your project's first function | Skip for now | |
| Select how you would like to open your project | Open in current window | Reopens VS Code in the folder you selected. |
-Visual Studio Code installs the Azure Functions Core Tools, if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-run-local.md#local-settings-file) configuration files.
+Visual Studio Code installs the Azure Functions Core Tools, if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files.
A package.json file is also created in the root folder.
azure-functions Quickstart Powershell Vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/quickstart-powershell-vscode.md
In this section, you use Visual Studio Code to create a local Azure Functions pr
| Select a template for your project's first function | Skip for now | |
| Select how you would like to open your project | Open in current window | Reopens VS Code in the folder you selected. |
-Visual Studio Code installs the Azure Functions Core Tools, if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-run-local.md#local-settings-file) configuration files.
+Visual Studio Code installs the Azure Functions Core Tools, if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files.
A package.json file is also created in the root folder.
azure-functions Quickstart Python Vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/quickstart-python-vscode.md
In this section, you use Visual Studio Code to create a local Azure Functions pr
| Select a template for your project's first function | Skip for now | |
| Select how you would like to open your project | Open in current window | Reopens VS Code in the folder you selected. |
-Visual Studio Code installs the Azure Functions Core Tools, if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-run-local.md#local-settings-file) configuration files.
+Visual Studio Code installs the Azure Functions Core Tools, if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files.
A *requirements.txt* file is also created in the root folder. It specifies the Python packages needed to run your function app.
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-app-settings.md
Last updated 09/22/2018
# App settings reference for Azure Functions
-App settings in a function app contain global configuration options that affect all functions for that function app. When you run locally, these settings are accessed as local [environment variables](functions-run-local.md#local-settings-file). This article lists the app settings that are available in function apps.
+App settings in a function app contain global configuration options that affect all functions for that function app. When you run locally, these settings are accessed as local [environment variables](functions-develop-local.md#local-settings-file). This article lists the app settings that are available in function apps.
[!INCLUDE [Function app settings](../../includes/functions-app-settings.md)]
-There are other global configuration options in the [host.json](functions-host-json.md) file and in the [local.settings.json](functions-run-local.md#local-settings-file) file.
+There are other global configuration options in the [host.json](functions-host-json.md) file and in the [local.settings.json](functions-develop-local.md#local-settings-file) file.
> [!NOTE]
> You can use application settings to override host.json setting values without having to change the host.json file itself. This is helpful for scenarios where you need to configure or modify specific host.json settings for a specific environment. This also lets you change host.json settings without having to republish your project. To learn more, see the [host.json reference article](functions-host-json.md#override-hostjson-values). Changes to function app settings require your function app to be restarted.
The value for this key is supplied in the format `<DESTINATION>:<VERBOSITY>`, wh
[!INCLUDE [functions-scale-controller-logging](../../includes/functions-scale-controller-logging.md)]
+## SCM\_LOGSTREAM\_TIMEOUT
+
+Controls the timeout, in seconds, when connected to streaming logs. The default value is 7200 (2 hours).
+
+|Key|Sample value|
+|-|-|
+|SCM_LOGSTREAM_TIMEOUT|1800|
+
+The above sample value of `1800` sets a timeout of 30 minutes. To learn more, see [Enable streaming logs](functions-run-local.md#enable-streaming-logs).
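
One way to apply this setting (the app and resource group names are placeholders) is with the Azure CLI:

```command
az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --settings "SCM_LOGSTREAM_TIMEOUT=1800"
```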
+
## WEBSITE\_CONTENTAZUREFILECONNECTIONSTRING

Connection string for the storage account where the function app code and configuration are stored in event-driven scaling plans running on Windows. For more information, see [Create a function app](functions-infrastructure-as-code.md#windows).
azure-functions Functions Bindings Register https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-register.md
The following table lists the currently available versions of the default *Micro
> [!NOTE]
> While you can specify a custom version range in host.json, we recommend you use a version value from this table.
-### <a name="explicitly-install-extensions"></a>Explicitly install extensions
+### Explicitly install extensions
+If you aren't able to use extension bundles, you can use Azure Functions Core Tools locally to install the specific extension packages required by your project.
+
+> [!IMPORTANT]
+> You can't explicitly install extensions in a function app that is using extension bundles. Remove the `extensionBundle` section in *host.json* before explicitly installing extensions.
+
+The following items describe some reasons you might need to install extensions manually:
+
+* You need to access a specific version of an extension not available in a bundle.
+* You need to access a custom extension not available in a bundle.
+* You need to access a specific combination of extensions not available in a single bundle.
+
+> [!NOTE]
+> To manually install extensions by using Core Tools, you must have the [.NET Core 2.x SDK](https://dotnet.microsoft.com/download) installed. The .NET Core SDK is used by Azure Functions Core Tools to install extensions from NuGet. You don't need to know .NET to use Azure Functions extensions.
+
+When you explicitly install extensions, a .NET project file named extensions.csproj is added to the root of your project. This file defines the set of NuGet packages required by your functions. While you can work with the [NuGet package references](/nuget/consume-packages/package-references-in-project-files) in this file, Core Tools lets you install extensions without having to manually edit the file.
+
+There are several ways to use Core Tools to install the required extensions in your local project.
+
+#### Install all extensions
+
+Use the following command to automatically add all extension packages used by the bindings in your local project:
+
+```command
+func extensions install
+```
+
+The command reads the *function.json* file to see which packages you need, installs them, and rebuilds the extensions project (extensions.csproj). It adds any new bindings at the current version but does not update existing bindings. Use the `--force` option to update existing bindings to the latest version when installing new ones. To learn more, see the [`func extensions install` command](functions-core-tools-reference.md#func-extensions-install).
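
For example, to pick up any new bindings and also update previously installed extensions to their latest versions, you might run:

```command
func extensions install --force
```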
+
+If your function app uses bindings that Core Tools does not recognize, you must manually install the specific extension.
+
+#### Install a specific extension
+
+Use the following command to install a specific extension package at a specific version, in this case the Storage extension:
+
+```command
+func extensions install --package Microsoft.Azure.WebJobs.Extensions.Storage --version 4.0.2
+```
+
+To learn more, see the [`func extensions install` command](functions-core-tools-reference.md#func-extensions-install).
## <a name="local-csharp"></a>Install extensions from NuGet in .NET languages
azure-functions Functions Core Tools Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-core-tools-reference.md
+
+ Title: Azure Functions Core Tools reference
+description: Reference documentation that supports the Azure Functions Core Tools (func.exe).
+ Last updated : 07/13/2021++
+# Azure Functions Core Tools reference
+
+This article provides reference documentation for the Azure Functions Core Tools, which lets you develop, manage, and deploy Azure Functions projects from your local computer. To learn more about using Core Tools, see [Work with Azure Functions Core Tools](functions-run-local.md).
+
+Core Tools commands are organized into the following contexts, each providing a unique set of actions.
+
+| Command context | Description |
+| -- | -- |
+| [`func`](#func-init) | Commands used to create and run functions on your local computer. |
+| [`func azure`](#func-azure-functionapp-fetch-app-settings) | Commands for working with Azure resources, including publishing. |
+| [`func durable`](#func-durable-delete-task-hub) | Commands for working with [Durable Functions](./durable/durable-functions-overview.md). |
+| [`func extensions`](#func-extensions-install) | Commands for installing and managing extensions. |
+| [`func kubernetes`](#func-kubernetes-deploy) | Commands for working with Kubernetes and Azure Functions. |
+| [`func settings`](#func-settings-decrypt) | Commands for managing environment settings for the local Functions host. |
+| `func templates` | Commands for listing available function templates. |
+
+Before using the commands in this article, you must [install the Core Tools](functions-run-local.md#install-the-azure-functions-core-tools).
+
+## func init
+
+Creates a new Functions project in a specific language.
+
+```command
+func init <PROJECT_FOLDER>
+```
+
+When you supply `<PROJECT_FOLDER>`, the project is created in a new folder with this name. Otherwise, the current folder is used.
+
+`func init` supports the following options, which are version 3.x/2.x-only, unless otherwise noted:
+
+| Option | Description |
+| | -- |
+| **`--csx`** | Creates .NET functions as C# script, which is the version 1.x behavior. Valid only with `--worker-runtime dotnet`. |
+| **`--docker`** | Creates a Dockerfile for a container using a base image that is based on the chosen `--worker-runtime`. Use this option when you plan to publish to a custom Linux container. |
+| **`--docker-only`** | Adds a Dockerfile to an existing project. Prompts for the worker-runtime if not specified or set in local.settings.json. Use this option when you plan to publish an existing project to a custom Linux container. |
+| **`--force`** | Initialize the project even when there are existing files in the project. This setting overwrites existing files with the same name. Other files in the project folder aren't affected. |
+| **`--language`** | Initializes a language-specific project. Currently supported when `--worker-runtime` is set to `node`. Options are `typescript` and `javascript`. You can also use `--worker-runtime javascript` or `--worker-runtime typescript`. |
+| **`--managed-dependencies`** | Installs managed dependencies. Currently, only the PowerShell worker runtime supports this functionality. |
+| **`--source-control`** | Controls whether a git repository is created. By default, a repository isn't created. When `true`, a repository is created. |
+| **`--worker-runtime`** | Sets the language runtime for the project. Supported values are: `csharp`, `dotnet`, `dotnet-isolated`, `javascript`,`node` (JavaScript), `powershell`, `python`, and `typescript`. For Java, use [Maven](functions-reference-java.md#create-java-functions). To generate a language-agnostic project with just the project files, use `custom`. When not set, you're prompted to choose your runtime during initialization. |
+
+> [!NOTE]
+> When you use either the `--docker` or `--dockerfile` option, Core Tools automatically creates the Dockerfile for C#, JavaScript, Python, and PowerShell functions. For Java functions, you must manually create the Dockerfile. Use the Azure Functions [image list](https://github.com/Azure/azure-functions-docker) to find the correct base image for your container that runs Azure Functions.
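
As an illustration (the project name is a placeholder), the following command creates a TypeScript project that also includes a Dockerfile:

```command
func init MyFunctionsProject --worker-runtime node --language typescript --docker
```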
+
+## func logs
+
+Gets logs for functions running in a Kubernetes cluster.
+
+```command
+func logs --platform kubernetes --name <APP_NAME>
+```
+
+The `func logs` action supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--platform`** | Hosting platform for the function app. Supported options: `kubernetes`. |
+| **`--name`** | Function app name in Azure. |
+
+To learn more, see [Azure Functions on Kubernetes with KEDA](functions-kubernetes-keda.md).
+
+## func new
+
+Creates a new function in the current project based on a template.
+
+```command
+func new
+```
+
+The `func new` action supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--authLevel`** | Lets you set the authorization level for an HTTP trigger. Supported values are: `function`, `anonymous`, `admin`. Authorization isn't enforced when running locally. For more information, see the [HTTP binding article](functions-bindings-http-webhook-trigger.md#authorization-keys). |
+| **`--csx`** | (Version 2.x and later versions.) Generates the same C# script (.csx) templates used in version 1.x and in the portal. |
+| **`--language`**, **`-l`**| The template programming language, such as C#, F#, or JavaScript. This option is required in version 1.x. In version 2.x and later versions, you don't use this option because the language is defined by the worker runtime. |
+| **`--name`**, **`-n`** | The function name. |
+| **`--template`**, **`-t`** | Use the `func templates list` command to see the complete list of available templates for each supported language. |
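
For example, the following hypothetical command scaffolds an anonymous HTTP-triggered function (the template and function names are examples):

```command
func new --template "HTTP trigger" --name HttpExample --authLevel anonymous
```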
+
+To learn more, see [Create a function](functions-run-local.md#create-func).
+
+## func run
+
+*Version 1.x only.*
+
+Enables you to invoke a function directly, which is similar to running a function using the **Test** tab in the Azure portal. This action is only supported in version 1.x. For later versions, use `func start` and [call the function endpoint directly](functions-run-local.md#passing-test-data-to-a-function).
+
+```command
+func run
+```
+
+The `func run` action supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--content`** | Inline content passed to the function. |
+| **`--debug`** | Attach a debugger to the host process before running the function.|
+| **`--file`** | The file name to use as content.|
+| **`--no-interactive`** | Doesn't prompt for input, which is useful for automation scenarios.|
+| **`--timeout`** | Time to wait (in seconds) until the local Functions host is ready.|
+
+For example, to call an HTTP-triggered function and pass content body, run the following command:
+
+```command
+func run MyHttpTrigger --content '{\"name\": \"Azure\"}'
+```
+
+## func start
+
+Starts the local runtime host and loads the function project in the current folder.
+
+The specific command depends on the [runtime version](functions-versions.md).
+
+# [v2.x+](#tab/v2)
+
+```command
+func start
+```
+
+`func start` supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--cert`** | The path to a .pfx file that contains a private key. Only supported with `--useHttps`. |
+| **`--cors`** | A comma-separated list of CORS origins, with no spaces. |
+| **`--cors-credentials`** | Allow cross-origin authenticated requests using cookies and the Authentication header. |
+| **`--dotnet-isolated-debug`** | When set to `true`, pauses the .NET worker process until a debugger is attached from the .NET isolated project being debugged. |
+| **`--enable-json-output`** | Emits console logs as JSON, when possible. |
+| **`--enableAuth`** | Enable full authentication handling pipeline. |
+| **`--functions`** | A space-separated list of functions to load. |
+| **`--language-worker`** | Arguments to configure the language worker. For example, you may enable debugging for language worker by providing [debug port and other required arguments](https://github.com/Azure/azure-functions-core-tools/wiki/Enable-Debugging-for-language-workers). |
+| **`--no-build`** | Don't build the current project before running. For .NET class projects only. The default is `false`. |
+| **`--password`** | Either the password or a file that contains the password for a .pfx file. Only used with `--cert`. |
+| **`--port`** | The local port to listen on. Default value: 7071. |
+| **`--timeout`** | The timeout for the Functions host to start, in seconds. Default: 20 seconds.|
+| **`--useHttps`** | Bind to `https://localhost:{port}` rather than to `http://localhost:{port}`. By default, this option creates a trusted certificate on your computer.|
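
As an illustration (the port and origin values are arbitrary), the following starts the host on the default port and allows requests from a local web client:

```command
func start --port 7071 --cors http://localhost:3000
```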
+
+With the project running, you can [verify individual function endpoints](functions-run-local.md#passing-test-data-to-a-function).
+
+# [v1.x](#tab/v1)
+
+```command
+func host start
+```
+
+`func host start` supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--cors`** | A comma-separated list of CORS origins, with no spaces. |
+| **`--port`** | The local port to listen on. Default value: 7071. |
+| **`--pause-on-error`** | Pause for more input before exiting the process. Used only when launching Core Tools from an integrated development environment (IDE).|
+| **`--script-root`** | Used to specify the path to the root of the function app that is to be run or deployed. This is used for compiled projects that generate project files into a subfolder. For example, when you build a C# class library project, the host.json, local.settings.json, and function.json files are generated in a *root* subfolder with a path like `MyProject/bin/Debug/netstandard2.0`. In this case, set the prefix as `--script-root MyProject/bin/Debug/netstandard2.0`. This is the root of the function app when running in Azure. |
+| **`--timeout`** | The timeout for the Functions host to start, in seconds. Default: 20 seconds.|
+| **`--useHttps`** | Bind to `https://localhost:{port}` rather than to `http://localhost:{port}`. By default, this option creates a trusted certificate on your computer.|
+
+In version 1.x, you can also use the [`func run` command](#func-run) to run a specific function and pass test data to it.
+++
+## func azure functionapp fetch-app-settings
+
+Gets settings from a specific function app.
+
+```command
+func azure functionapp fetch-app-settings <APP_NAME>
+```
+
+For an example, see [Get your storage connection strings](functions-run-local.md#get-your-storage-connection-strings).
+
+Settings are downloaded into the local.settings.json file for the project. On-screen values are masked for security. You can protect settings in the local.settings.json file by [enabling local encryption](#func-settings-encrypt).
+
+## func azure functionapp list-functions
+
+Returns a list of the functions in the specified function app.
+
+```command
+func azure functionapp list-functions <APP_NAME>
+```
+## func azure functionapp logstream
+
+Connects the local command prompt to streaming logs for the function app in Azure.
+
+```command
+func azure functionapp logstream <APP_NAME>
+```
+
+The default timeout for the connection is 2 hours. You can change the timeout by adding an app setting named [SCM_LOGSTREAM_TIMEOUT](functions-app-settings.md#scm_logstream_timeout), with a timeout value in seconds. Not yet supported for Linux apps in the Consumption plan. For these apps, use the `--browser` option to view logs in the portal.
+
+The `logstream` action supports the following option:
+
+| Option | Description |
+| | -- |
+| **`--browser`** | Open Azure Application Insights Live Stream for the function app in the default browser. |
+
+To learn more, see [Enable streaming logs](functions-run-local.md#enable-streaming-logs).
+
+## func azure functionapp publish
+
+Deploys a Functions project to an existing function app resource in Azure.
+
+```command
+func azure functionapp publish <FunctionAppName>
+```
+
+For more information, see [Deploy project files](functions-run-local.md#project-file-deployment).
+
+The following publish options apply, based on version:
+
+# [v2.x+](#tab/v2)
+
+| Option | Description |
+| | -- |
+| **`--additional-packages`** | List of packages to install when building native dependencies. For example: `python3-dev libevent-dev`. |
+| **`--build`**, **`-b`** | Performs build action when deploying to a Linux function app. Accepts: `remote` and `local`. |
+| **`--build-native-deps`** | Skips generating the `.wheels` folder when publishing Python function apps. |
+| **`--csx`** | Publish a C# script (.csx) project. |
+| **`--force`** | Ignore pre-publishing verification in certain scenarios. |
+| **`--dotnet-cli-params`** | When publishing compiled C# (.csproj) functions, Core Tools calls `dotnet build --output bin/publish`. Any parameters passed to this option are appended to the command line. |
+|**`--list-ignored-files`** | Displays a list of files that are ignored during publishing, which is based on the `.funcignore` file. |
+| **`--list-included-files`** | Displays a list of files that are published, which is based on the `.funcignore` file. |
+| **`--no-build`** | Project isn't built during publishing. For Python, `pip install` isn't performed. |
+| **`--nozip`** | Turns the default `Run-From-Package` mode off. |
+| **`--overwrite-settings -y`** | Suppress the prompt to overwrite app settings when `--publish-local-settings -i` is used.|
+| **`--publish-local-settings -i`** | Publish settings in local.settings.json to Azure, prompting to overwrite if the setting already exists. If you're using the Microsoft Azure Storage Emulator, first change the app setting to an [actual storage connection](functions-run-local.md#get-your-storage-connection-strings). |
+| **`--publish-settings-only`**, **`-o`** | Only publish settings and skip the content. Default is prompt. |
+| **`--slot`** | Optional name of a specific slot to which to publish. |
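
For example, a hypothetical deployment that targets a staging slot and also pushes local settings might look like this (the app and slot names are placeholders):

```command
func azure functionapp publish <APP_NAME> --publish-local-settings -i --slot staging
```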
+
+# [v1.x](#tab/v1)
+
+| Option | Description |
+| | -- |
+| **`--overwrite-settings -y`** | Suppress the prompt to overwrite app settings when `--publish-local-settings -i` is used.|
+| **`--publish-local-settings -i`** | Publish settings in local.settings.json to Azure, prompting to overwrite if the setting already exists. If you're using the Microsoft Azure Storage Emulator, first change the app setting to an [actual storage connection](functions-run-local.md#get-your-storage-connection-strings). |
+++
+## func azure storage fetch-connection-string
+
+Gets the connection string for the specified Azure Storage account.
+
+```command
+func azure storage fetch-connection-string <STORAGE_ACCOUNT_NAME>
+```
+
+## func deploy
+
+Deploys a function app in a custom Linux container to a Kubernetes cluster without KEDA.
+
+```command
+func deploy --name <FUNCTION_APP> --platform kubernetes --registry <DOCKER_USER>
+```
+
+This command builds your project as a custom container and publishes it to a Kubernetes cluster using a default scaler or using KNative. To publish to a cluster using KEDA for dynamic scale, instead use the [`func kubernetes deploy` command](#func-kubernetes-deploy). Custom containers must have a Dockerfile. To create an app with a Dockerfile, use the `--dockerfile` option with the [`func init` command](#func-init).
+
+The `deploy` action supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--config`** | Sets an optional deployment configuration file. |
+| **`--max`** | Optionally, sets the maximum number of function app instances to deploy to. |
+| **`--min`** | Optionally, sets the minimum number of function app instances to deploy to. |
+| **`--name`** | Function app name (required). |
+| **`--platform`** | Hosting platform for the function app (required). Valid options are: `kubernetes` and `knative`.|
+| **`--registry`** | The name of a Docker registry that the current user is signed in to (required). |
+
+Core Tools uses the local Docker CLI to build and publish the image.
+
+Make sure Docker is already installed locally. Run the `docker login` command to connect to your account.
+
+## func durable delete-task-hub
+
+Deletes all storage artifacts in the Durable Functions task hub.
+
+```command
+func durable delete-task-hub
+```
+
+The `delete-task-hub` action supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--connection-string-setting`** | Optional name of the setting containing the storage connection string to use. |
+| **`--task-hub-name`** | Optional name of the Durable Task Hub to use. |
+
+To learn more, see the [Durable Functions documentation](./durable/durable-functions-instance-management.md#delete-a-task-hub).
+
+## func durable get-history
+
+Returns the history of the specified orchestration instance.
+
+```command
+func durable get-history --id <INSTANCE_ID>
+```
+
+The `get-history` action supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--id`** | Specifies the ID of an orchestration instance (required). |
+| **`--connection-string-setting`** | Optional name of the setting containing the storage connection string to use. |
+| **`--task-hub-name`** | Optional name of the Durable Task Hub to use. |
+
+To learn more, see the [Durable Functions documentation](./durable/durable-functions-instance-management.md#azure-functions-core-tools-1).
+
+## func durable get-instances
+
+Returns the status of all orchestration instances. Supports paging using the `--top` option.
+
+```command
+func durable get-instances
+```
+
+The `get-instances` action supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--continuation-token`** | Optional token that indicates a specific page/section of the requests to return. |
+| **`--connection-string-setting`** | Optional name of the app setting that contains the storage connection string to use. |
+| **`--created-after`** | Optionally, get the instances created after this date/time (UTC). All ISO 8601 formatted datetimes are accepted. |
+| **`--created-before`** | Optionally, get the instances created before a specific date/time (UTC). All ISO 8601 formatted datetimes are accepted. |
+| **`--runtime-status`** | Optionally, get the instances whose status matches a specific status, including `running`, `completed`, and `failed`. You can provide one or more space-separated statuses. |
+| **`--top`** | Optionally limit the number of records returned in a given request. |
+| **`--task-hub-name`** | Optional name of the Durable Functions task hub to use. |
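
For example, the following hypothetical query returns up to 10 running instances created after an arbitrary date:

```command
func durable get-instances --created-after 2021-03-10T13:57:31Z --runtime-status running --top 10
```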
+
+To learn more, see the [Durable Functions documentation](./durable/durable-functions-instance-management.md#azure-functions-core-tools-2).
+
+## func durable get-runtime-status
+
+Returns the status of the specified orchestration instance.
+
+```command
+func durable get-runtime-status --id <INSTANCE_ID>
+```
+
+The `get-runtime-status` action supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--connection-string-setting`** | Optional name of the setting containing the storage connection string to use. |
+| **`--id`** | Specifies the ID of an orchestration instance (required). |
+| **`--show-input`** | When set, the response contains the input of the function. |
+| **`--show-output`** | When set, the response contains the execution history. |
+| **`--task-hub-name`** | Optional name of the Durable Functions task hub to use. |
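
For example, assuming a placeholder instance ID, the following returns the status along with the orchestration input and execution history:

```command
func durable get-runtime-status --id <INSTANCE_ID> --show-input true --show-output true
```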
+
+To learn more, see the [Durable Functions documentation](./durable/durable-functions-instance-management.md#azure-functions-core-tools-1).
+
+## func durable purge-history
+
+Purge orchestration instance state, history, and blob storage for orchestrations older than the specified threshold.
+
+```command
+func durable purge-history
+```
+
+The `purge-history` action supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--connection-string-setting`** | Optional name of the setting containing the storage connection string to use. |
+| **`--created-after`** | Optionally delete the history of instances created after this date/time (UTC). All ISO 8601 formatted datetime values are accepted. |
+| **`--created-before`** | Optionally delete the history of instances created before this date/time (UTC). All ISO 8601 formatted datetime values are accepted.|
+| **`--runtime-status`** | Optionally delete the history of instances whose status matches a specific status, including `completed`, `terminated`, `canceled`, and `failed`. You can provide one or more space-separated statuses. If you don't include `--runtime-status`, instance history is deleted regardless of status.|
+| **`--task-hub-name`** | Optional name of the Durable Functions task hub to use. |
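
For example, the following hypothetical command purges the history of completed and failed instances created before an arbitrary date:

```command
func durable purge-history --created-before 2021-06-30T00:00:00Z --runtime-status completed failed
```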
+
+To learn more, see the [Durable Functions documentation](./durable/durable-functions-instance-management.md#azure-functions-core-tools-7).
+
+## func durable raise-event
+
+Raises an event to the specified orchestration instance.
+
+```command
+func durable raise-event --id <INSTANCE_ID> --event-name <EVENT_NAME> --event-data <DATA>
+```
+
+The `raise-event` action supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--connection-string-setting`** | Optional name of the setting containing the storage connection string to use. |
+| **`--event-data`** | Data to pass to the event, either inline or from a JSON file (required). For files, prefix the path to the file with an at sign (`@`), such as `@path/to/file.json`. |
+| **`--event-name`** | Name of the event to raise (required). |
+| **`--id`** | Specifies the ID of an orchestration instance (required). |
+| **`--task-hub-name`** | Optional name of the Durable Functions task hub to use. |
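
For example, assuming a placeholder instance ID, an illustrative event name, and an event payload stored in a local JSON file, the call might look like this:

```command
func durable raise-event --id <INSTANCE_ID> --event-name Approval --event-data @eventdata.json
```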
+
+To learn more, see the [Durable Functions documentation](./durable/durable-functions-instance-management.md#azure-functions-core-tools-5).
+
+## func durable rewind
+
+Rewinds the specified orchestration instance.
+
+```command
+func durable rewind --id <INSTANCE_ID> --reason <REASON>
+```
+
+The `rewind` action supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--connection-string-setting`** | Optional name of the setting containing the storage connection string to use. |
+| **`--id`** | Specifies the ID of an orchestration instance (required). |
+| **`--reason`** | Reason for rewinding the orchestration (required).|
+| **`--task-hub-name`** | Optional name of the Durable Functions task hub to use. |
+
+To learn more, see the [Durable Functions documentation](./durable/durable-functions-instance-management.md#azure-functions-core-tools-6).
+
+## func durable start-new
+
+Starts a new instance of the specified orchestrator function.
+
+```command
+func durable start-new --id <INSTANCE_ID> --function-name <FUNCTION_NAME> --input <INPUT>
+```
+
+The `start-new` action supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--connection-string-setting`** | Optional name of the setting containing the storage connection string to use. |
+| **`--function-name`** | Name of the orchestrator function to start (required).|
+| **`--id`** | Specifies the ID of an orchestration instance (required). |
+| **`--input`** | Input to the orchestrator function, either inline or from a JSON file (required). For files, prefix the path to the file with an at sign (`@`), such as `@path/to/file.json`. |
+| **`--task-hub-name`** | Optional name of the Durable Functions task hub to use. |
+
+To learn more, see the [Durable Functions documentation](./durable/durable-functions-instance-management.md#azure-functions-core-tools).
+
+## func durable terminate
+
+Stops the specified orchestration instance.
+
+```command
+func durable terminate --id <INSTANCE_ID> --reason <REASON>
+```
+
+The `terminate` action supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--connection-string-setting`** | Optional name of the setting containing the storage connection string to use. |
+| **`--id`** | Specifies the ID of an orchestration instance (required). |
+| **`--reason`** | Reason for stopping the orchestration (required). |
+| **`--task-hub-name`** | Optional name of the Durable Functions task hub to use. |
+
+To learn more, see the [Durable Functions documentation](./durable/durable-functions-instance-management.md#azure-functions-core-tools-4).
+
+## func extensions install
+
+Installs Functions extensions in a non-C# class library project.
+
+When possible, you should instead use extension bundles. To learn more, see [Extension bundles](functions-bindings-register.md#extension-bundles).
+
+For C# class library and .NET isolated projects, instead use standard NuGet package installation methods, such as `dotnet add package`.
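
For example, to install a specific version of the Storage extension (the version shown is illustrative):

```command
func extensions install --package Microsoft.Azure.WebJobs.Extensions.Storage --version 4.0.2
```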
+
+The `install` action supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--configPath`** | Path of the directory containing extensions.csproj file.|
+| **`--csx`** | Supports C# scripting (.csx) projects. |
+| **`--force`** | Update the versions of existing extensions. |
+| **`--output`** | Output path for the extensions. |
+| **`--package`** | Identifier for a specific extension package. When not specified, all referenced extensions are installed, as with `func extensions sync`.|
+| **`--source`** | NuGet feed source when not using NuGet.org.|
+| **`--version`** | Extension package version. |
+
+No action is taken when an extension bundle is defined in your host.json file.
+
+## func extensions sync
+
+Installs all extensions added to the function app.
+
+The `sync` action supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--configPath`** | Path of the directory containing extensions.csproj file.|
+| **`--csx`** | Supports C# scripting (.csx) projects. |
+| **`--output`** | Output path for the extensions. |
+
+Regenerates a missing extensions.csproj file. No action is taken when an extension bundle is defined in your host.json file.
+
+## func kubernetes deploy
+
+Deploys a Functions project as a custom Docker container to a Kubernetes cluster using KEDA.
+
+```command
+func kubernetes deploy
+```
+
+This command builds your project as a custom container and publishes it to a Kubernetes cluster using KEDA for dynamic scale. To publish to a cluster using a default scaler or using KNative, instead use the [`func deploy` command](#func-deploy). Custom containers must have a Dockerfile. To create an app with a Dockerfile, use the `--dockerfile` option with the [`func init` command](#func-init).
+
+The following Kubernetes deployment options are available:
+
+| Option | Description |
+| | -- |
+| **`--dry-run`** | Optionally displays the deployment template, without execution. |
+| **`--config-map-name`** | Optional name of an existing config map with [function app settings](functions-how-to-use-azure-function-app-settings.md#settings) to use in the deployment. Requires `--use-config-map`. The default behavior is to create settings based on the `Values` object in the [local.settings.json file].|
+| **`--cooldown-period`** | The cool-down period (in seconds) after all triggers are no longer active before the deployment scales back down to zero, with a default of 300 s. |
+| **`--ignore-errors`** | Continues the deployment after a resource returns an error. The default behavior is to stop on error. |
+| **`--image-name`** | The name of the image to use for the pod deployment and from which to read functions. |
+| **`--keda-version`** | Sets the version of KEDA to install. Valid options are: `v1` and `v2` (default). |
+| **`--keys-secret-name`** | The name of a Kubernetes Secrets collection to use for storing [function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys). |
+| **`--max-replicas`** | Sets the maximum replica count to which the Horizontal Pod Autoscaler (HPA) scales. |
+| **`--min-replicas`** | Sets the minimum replica count below which HPA won't scale. |
+| **`--mount-funckeys-as-containervolume`** | Mounts the [function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys) as a container volume. |
+| **`--name`** | The name used for the deployment and other artifacts in Kubernetes. |
+| **`--namespace`** | Sets the Kubernetes namespace to which to deploy, which defaults to the default namespace. |
+| **`--no-docker`** | Functions are read from the current directory instead of from an image. Requires mounting the image filesystem. |
+| **`--registry`** | When set, a Docker build is run and the image is pushed to a registry of that name. You can't use `--registry` with `--image-name`. For Docker, use your username. |
+| **`--polling-interval`** | The polling interval (in seconds) for checking non-HTTP triggers, with a default of 30s. |
+| **`--pull-secret`** | The secret used to access private registry credentials. |
+| **`--secret-name`** | The name of an existing Kubernetes Secrets collection that contains [function app settings](functions-how-to-use-azure-function-app-settings.md#settings) to use in the deployment. The default behavior is to create settings based on the `Values` object in the [local.settings.json file]. |
+| **`--show-service-fqdn`** | Displays the URLs of HTTP triggers with the Kubernetes FQDN instead of the default behavior of using an IP address. |
+| **`--service-type`** | Sets the type of Kubernetes Service. Supported values are: `ClusterIP`, `NodePort`, and `LoadBalancer` (default). |
+| **`--use-config-map`** | Use a `ConfigMap` object (v1) instead of a `Secret` object (v1) to configure [function app settings](functions-how-to-use-azure-function-app-settings.md#settings). The map name is set using `--config-map-name`.|
+
+Core Tools uses the local Docker CLI to build and publish the image.
+
+Make sure Docker is already installed locally. Run the `docker login` command to connect to your account.
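
As an illustration (the names, namespace, and replica counts are placeholders), a deployment might look like this:

```command
func kubernetes deploy --name my-function-app --registry <DOCKER_USER> --namespace functions --min-replicas 1 --max-replicas 10
```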
+
+To learn more, see [Deploying a function app to Kubernetes](functions-kubernetes-keda.md#deploying-a-function-app-to-kubernetes).
+
+## func kubernetes install
+
+Installs KEDA in a Kubernetes cluster.
+
+```command
+func kubernetes install
+```
+
+Installs KEDA to the cluster defined in the kubectl config file.
+
+The `install` action supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--dry-run`** | Displays the deployment template, without execution. |
+| **`--keda-version`** | Sets the version of KEDA to install. Valid options are: `v1` and `v2` (default). |
+| **`--namespace`** | Supports installation to a specific Kubernetes namespace. When not set, the default namespace is used. |
+
+To learn more, see [Managing KEDA and functions in Kubernetes](functions-kubernetes-keda.md#managing-keda-and-functions-in-kubernetes).
+
+## func kubernetes remove
+
+Removes KEDA from a Kubernetes cluster.
+
+```command
+func kubernetes remove
+```
+
+Removes KEDA from the cluster defined in the kubectl config file.
+
+The `remove` action supports the following options:
+
+| Option | Description |
+| | -- |
+| **`--namespace`** | Supports uninstall from a specific Kubernetes namespace. When not set, the default namespace is used. |
+
+To learn more, see [Uninstalling KEDA from Kubernetes](functions-kubernetes-keda.md#uninstalling-keda-from-kubernetes).
+
+## func settings add
+
+Adds a new setting to the `Values` collection in the [local.settings.json file].
+
+```command
+func settings add <SETTING_NAME> <VALUE>
+```
+
+Replace `<SETTING_NAME>` with the name of the app setting and `<VALUE>` with the value of the setting.
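
For example, the following hypothetical command adds a custom setting named `MyApiUrl`:

```command
func settings add MyApiUrl https://example.com/api
```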
+
+The `add` action supports the following option:
+
+| Option | Description |
+| | -- |
+| **`--connectionString`** | Adds the name-value pair to the `ConnectionStrings` collection instead of the `Values` collection. Only use the `ConnectionStrings` collection when required by certain frameworks. To learn more, see [local.settings.json file]. |
+
+## func settings decrypt
+
+Decrypts previously encrypted values in the `Values` collection in the [local.settings.json file].
+
+```command
+func settings decrypt
+```
+
+Connection string values in the `ConnectionStrings` collection are also decrypted. In local.settings.json, `IsEncrypted` is also set to `false`. Encrypt local settings to reduce the risk of leaking valuable information from local.settings.json. In Azure, application settings are always stored encrypted.
+
+## func settings delete
+
+Removes an existing setting from the `Values` collection in the [local.settings.json file].
+
+```command
+func settings delete <SETTING_NAME>
+```
+
+Replace `<SETTING_NAME>` with the name of the app setting to remove.
+
+The `delete` action supports the following option:
+
+| Option | Description |
+| | -- |
+| **`--connectionString`** | Removes the name-value pair from the `ConnectionStrings` collection instead of from the `Values` collection. |
+
+## func settings encrypt
+
+Encrypts the values of individual items in the `Values` collection in the [local.settings.json file].
+
+```command
+func settings encrypt
+```
+
+Connection string values in the `ConnectionStrings` collection are also encrypted. In local.settings.json, `IsEncrypted` is also set to `true`, which specifies that the local runtime decrypts settings before using them. Encrypt local settings to reduce the risk of leaking valuable information from local.settings.json. In Azure, application settings are always stored encrypted.
+
+## func settings list
+
+Outputs a list of settings in the `Values` collection in the [local.settings.json file].
+
+```command
+func settings list
+```
+
+Connection strings from the `ConnectionStrings` collection are also output. By default, values are masked for security. You can use the `--showValue` option to display the actual value.
+
+The `list` action supports the following option:
+
+| Option | Description |
+| | -- |
+| **`--showValue`** | Shows the actual unmasked values in the output. |
+
+## func templates list
+
+Lists the available function (trigger) templates.
+
+The `list` action supports the following option:
+
+| Option | Description |
+| | -- |
+| **`--language`** | Language for which to filter returned templates. Default is to return all languages. |
+
+[local.settings.json file]: functions-develop-local.md#local-settings-file
azure-functions Functions Develop Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-develop-local.md
Last updated 09/04/2018
While you're able to develop and test Azure Functions in the [Azure portal], many developers prefer a local development experience. Functions makes it easy to use your favorite code editor and development tools to create and test functions on your local computer. Your local functions can connect to live Azure services, and you can debug them on your local computer using the full Functions runtime.
+This article provides links to specific development environments for your preferred language. It also provides some shared guidance for local development, such as working with the [local.settings.json file](#local-settings-file).
+
## Local development environments

The way in which you develop functions on your local computer depends on your [language](supported-languages.md) and tooling preferences. The environments in the following table support local development:

|Environment |Languages |Description|
|--|--|--|
-|[Visual Studio Code](functions-develop-vs-code.md)| [C# (class library)](functions-dotnet-class-library.md), [C# script (.csx)](functions-reference-csharp.md), [JavaScript](functions-reference-node.md), [PowerShell](./create-first-function-vs-code-powershell.md), [Python](functions-reference-python.md) | The [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) adds Functions support to VS Code. Requires the Core Tools. Supports development on Linux, MacOS, and Windows, when using version 2.x of the Core Tools. To learn more, see [Create your first function using Visual Studio Code](./create-first-function-vs-code-csharp.md). |
-| [Command prompt or terminal](functions-run-local.md) | [C# (class library)](functions-dotnet-class-library.md), [C# script (.csx)](functions-reference-csharp.md), [JavaScript](functions-reference-node.md), [PowerShell](functions-reference-powershell.md), [Python](functions-reference-python.md) | [Azure Functions Core Tools] provides the core runtime and templates for creating functions, which enable local development. Version 2.x supports development on Linux, MacOS, and Windows. All environments rely on Core Tools for the local Functions runtime. |
-| [Visual Studio 2019](functions-develop-vs.md) | [C# (class library)](functions-dotnet-class-library.md) | The Azure Functions tools are included in the **Azure development** workload of [Visual Studio 2019](https://www.visualstudio.com/vs/) and later versions. Lets you compile functions in a class library and publish the .dll to Azure. Includes the Core Tools for local testing. To learn more, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md). |
-| [Maven](./create-first-function-cli-java.md) (various) | [Java](functions-reference-java.md) | Integrates with Core Tools to enable development of Java functions. Version 2.x supports development on Linux, MacOS, and Windows. To learn more, see [Create your first function with Java and Maven](./create-first-function-cli-java.md). Also supports development using [Eclipse](functions-create-maven-eclipse.md) and [IntelliJ IDEA](functions-create-maven-intellij.md) |
+|[Visual Studio Code](functions-develop-vs-code.md)| [C# (class library)](functions-dotnet-class-library.md)<br/>[C# isolated process (.NET 5.0)](dotnet-isolated-process-developer-howtos.md?pivots=development-environment-vscode)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](./create-first-function-vs-code-powershell.md)<br/>[Python](functions-reference-python.md) | The [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) adds Functions support to VS Code. Requires the Core Tools. Supports development on Linux, macOS, and Windows, when using version 2.x of the Core Tools. To learn more, see [Create your first function using Visual Studio Code](./create-first-function-vs-code-csharp.md). |
+| [Command prompt or terminal](functions-run-local.md) | [C# (class library)](functions-dotnet-class-library.md)<br/>[C# isolated process (.NET 5.0)](dotnet-isolated-process-developer-howtos.md?pivots=development-environment-cli)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](functions-reference-powershell.md)<br/>[Python](functions-reference-python.md) | [Azure Functions Core Tools] provides the core runtime and templates for creating functions, which enable local development. Version 2.x supports development on Linux, macOS, and Windows. All environments rely on Core Tools for the local Functions runtime. |
+| [Visual Studio 2019](functions-develop-vs.md) | [C# (class library)](functions-dotnet-class-library.md)<br/>[C# isolated process (.NET 5.0)](dotnet-isolated-process-developer-howtos.md?pivots=development-environment-vs) | The Azure Functions tools are included in the **Azure development** workload of [Visual Studio 2019](https://www.visualstudio.com/vs/) and later versions. Lets you compile functions in a class library and publish the .dll to Azure. Includes the Core Tools for local testing. To learn more, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md). |
+| [Maven](./create-first-function-cli-java.md) (various) | [Java](functions-reference-java.md) | Maven archetype supports Core Tools to enable development of Java functions. Version 2.x supports development on Linux, macOS, and Windows. To learn more, see [Create your first function with Java and Maven](./create-first-function-cli-java.md). Also supports development using [Eclipse](functions-create-maven-eclipse.md) and [IntelliJ IDEA](functions-create-maven-intellij.md) |
[!INCLUDE [Don't mix development environments](../../includes/functions-mixed-dev-environments.md)]

Each of these local development environments lets you create function app projects and use predefined Functions templates to create new functions. Each uses the Core Tools so that you can test and debug your functions against the real Functions runtime on your own machine just as you would any other app. You can also publish your function app project from any of these environments to Azure.
+## Local settings file
+
+The local.settings.json file stores app settings and settings used by local development tools. Settings in the local.settings.json file are used only when you're running your project locally.
+
+> [!IMPORTANT]
+> Because the local.settings.json file may contain secrets, such as connection strings, you should never store it in a remote repository. Tools that support Functions provide ways to synchronize settings in the local.settings.json file with the [app settings](functions-how-to-use-azure-function-app-settings.md#settings) in the function app to which your project is deployed.
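
For example (the app name is a placeholder), Core Tools can pull settings from an existing function app into local.settings.json, or push local values to Azure when publishing:

```command
func azure functionapp fetch-app-settings <APP_NAME>
func azure functionapp publish <APP_NAME> --publish-local-settings -i
```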
+
+The local settings file has this structure:
+
+```json
+{
+ "IsEncrypted": false,
+ "Values": {
+ "FUNCTIONS_WORKER_RUNTIME": "<language worker>",
+ "AzureWebJobsStorage": "<connection-string>",
+ "MyBindingConnection": "<binding-connection-string>",
+ "AzureWebJobs.HttpExample.Disabled": "true"
+ },
+ "Host": {
+ "LocalHttpPort": 7071,
+ "CORS": "*",
+ "CORSCredentials": false
+ },
+ "ConnectionStrings": {
+ "SQLConnectionString": "<sqlclient-connection-string>"
+ }
+}
+```
+
+These settings are supported when you run projects locally:
+
+| Setting | Description |
+| | -- |
+| **`IsEncrypted`** | When this setting is set to `true`, all values are encrypted with a local machine key. Used with `func settings` commands. Default value is `false`. You might want to encrypt the local.settings.json file on your local computer when it contains secrets, such as service connection strings. The host automatically decrypts settings when it runs. Use the `func settings decrypt` command before trying to read locally encrypted settings. |
+| **`Values`** | Collection of application settings used when a project is running locally. These key-value (string-string) pairs correspond to application settings in your function app in Azure, like [`AzureWebJobsStorage`]. Many triggers and bindings have a property that refers to a connection string app setting, like `Connection` for the [Blob storage trigger](functions-bindings-storage-blob-trigger.md#configuration). For these properties, you need an application setting defined in the `Values` array. See the subsequent table for a list of commonly used settings. <br/>Values must be strings and not JSON objects or arrays. Setting names can't include a colon (`:`) or a double underline (`__`). Double underline characters are reserved by the runtime, and the colon is reserved to support [dependency injection](functions-dotnet-dependency-injection.md#working-with-options-and-settings). |
+| **`Host`** | Settings in this section customize the Functions host process when you run projects locally. These settings are separate from the host.json settings, which also apply when you run projects in Azure. |
+| **`LocalHttpPort`** | Sets the default port used when running the local Functions host (`func host start` and `func run`). The `--port` command-line option takes precedence over this setting. For example, when running in Visual Studio IDE, you may change the port number by navigating to the "Project Properties -> Debug" window and explicitly specifying the port number in a `host start --port <your-port-number>` command that can be supplied in the "Application Arguments" field. |
+| **`CORS`** | Defines the origins allowed for [cross-origin resource sharing (CORS)](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing). Origins are supplied as a comma-separated list with no spaces. The wildcard value (\*) is supported, which allows requests from any origin. |
+| **`CORSCredentials`** | When set to `true`, allows `withCredentials` requests. |
+| **`ConnectionStrings`** | A collection. Don't use this collection for the connection strings used by your function bindings. This collection is used only by frameworks that typically get connection strings from the `ConnectionStrings` section of a configuration file, like [Entity Framework](/ef/ef6/). Connection strings in this object are added to the environment with the provider type of [System.Data.SqlClient](/dotnet/api/system.data.sqlclient). Items in this collection aren't published to Azure with other app settings. You must explicitly add these values to the `Connection strings` collection of your function app settings. If you're creating a [`SqlConnection`](/dotnet/api/system.data.sqlclient.sqlconnection) in your function code, you should store the connection string value with your other connections in **Application Settings** in the portal. |
+
+The following application settings can be included in the **`Values`** array when running locally:
+
+| Setting | Values | Description |
+|--|--|--|
+|**`AzureWebJobsStorage`**| Storage account connection string, or<br/>`UseDevelopmentStorage=true`| Contains the connection string for an Azure storage account. Required when using triggers other than HTTP. For more information, see the [`AzureWebJobsStorage`] reference.<br/>When you have the [Azure Storage Emulator](../storage/common/storage-use-emulator.md) installed locally and you set [`AzureWebJobsStorage`] to `UseDevelopmentStorage=true`, Core Tools uses the emulator. The emulator is useful during development, but you should test with an actual storage connection before deployment.|
+|**`AzureWebJobs.<FUNCTION_NAME>.Disabled`**| `true`\|`false` | To disable a function when running locally, add `"AzureWebJobs.<FUNCTION_NAME>.Disabled": "true"` to the collection, where `<FUNCTION_NAME>` is the name of the function. To learn more, see [How to disable functions in Azure Functions](disable-function.md#localsettingsjson). |
+|**`FUNCTIONS_WORKER_RUNTIME`** | `dotnet`<br/>`node`<br/>`java`<br/>`powershell`<br/>`python`| Indicates the targeted language of the Functions runtime. Required for version 2.x and higher of the Functions runtime. This setting is generated for your project by Core Tools. To learn more, see the [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime) reference.|
+| **`FUNCTIONS_WORKER_RUNTIME_VERSION`** | `~7` | Indicates that PowerShell 7 should be used when running locally. If not set, then PowerShell Core 6 is used. This setting is only used when running locally. When running in Azure, the PowerShell runtime version is determined by the `powerShellVersion` site configuration setting, which can be [set in the portal](functions-reference-powershell.md#changing-the-powershell-version). |
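+
+As a sketch, a local.settings.json file that uses settings from this table might include a `Values` section like the following example. The function name `HttpExample` is hypothetical, and the storage connection assumes the local emulator:
+
+```json
+{
+  "IsEncrypted": false,
+  "Values": {
+    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+    "FUNCTIONS_WORKER_RUNTIME": "node",
+    "AzureWebJobs.HttpExample.Disabled": "true"
+  }
+}
+```
+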
+ ## Next steps + To learn more about local development of compiled C# functions using Visual Studio 2019, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md).
-+ To learn more about local development of functions using VS Code on a Mac, Linux, or Windows computer, see [Deploy Azure Functions from VS Code](/azure/developer/javascript/tutorial-vscode-serverless-node-01).
++ To learn more about local development of functions using VS Code on a Mac, Linux, or Windows computer, see the Visual Studio Code getting started article for your preferred language:
+ + [C# class library](create-first-function-vs-code-csharp.md)
+ + [C# isolated process (.NET 5.0)](dotnet-isolated-process-developer-howtos.md?pivots=development-environment-vscode)
+ + [Java](create-first-function-vs-code-java.md)
+ + [JavaScript](create-first-function-vs-code-node.md)
+ + [PowerShell](create-first-function-vs-code-powershell.md)
+ + [Python](create-first-function-vs-code-python.md)
+ + [TypeScript](create-first-function-vs-code-typescript.md)
+ To learn more about developing functions from the command prompt or terminal, see [Work with Azure Functions Core Tools](functions-run-local.md). <!-- LINKS --> [Azure Functions Core Tools]: https://www.npmjs.com/package/azure-functions-core-tools [Azure portal]: https://portal.azure.com
-[Node.js]: https://docs.npmjs.com/getting-started/installing-node#osx-or-windows
+[Node.js]: https://docs.npmjs.com/getting-started/installing-node#osx-or-windows
+[`AzureWebJobsStorage`]: functions-app-settings.md#azurewebjobsstorage
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-develop-vs-code.md
The project template creates a project in your chosen language and installs requ
* **host.json**: Lets you configure the Functions host. These settings apply when you're running functions locally and when you're running them in Azure. For more information, see [host.json reference](functions-host-json.md).
-* **local.settings.json**: Maintains settings used when you're running functions locally. These settings are used only when you're running functions locally. For more information, see [Local settings file](#local-settings-file).
+* **local.settings.json**: Maintains settings used when you're running functions locally. These settings are used only when you're running functions locally. For more information, see [Local settings file](#local-settings).
>[!IMPORTANT] >Because the local.settings.json file can contain secrets, you need to exclude it from your project source control.
When running functions in Azure, the extension uses your Azure account to automa
### Run functions locally
-The local runtime is the same runtime that hosts your function app in Azure. Local settings are read from the [local.settings.json file](#local-settings-file). To run your Functions project locally, you must meet [additional requirements](#run-local-requirements).
+The local runtime is the same runtime that hosts your function app in Azure. Local settings are read from the [local.settings.json file](#local-settings). To run your Functions project locally, you must meet [additional requirements](#run-local-requirements).
#### Configure the project to run locally
To set the storage account connection string:
3. Repeat the previous step to add unique keys to the **Values** array for any other connections required by your functions.
-For more information, see [Local settings file](#local-settings-file).
+For more information, see [Local settings file](#local-settings).
#### <a name="debugging-functions-locally"></a>Debug functions locally
The Azure Functions extension provides a useful graphical interface in the area
| **Connect to GitHub Repository** | Connects your function app to a GitHub repository. | | **Copy Function URL** | Gets the remote URL of an HTTP-triggered function that's running in Azure. To learn more, see [Get the URL of the deployed function](#get-the-url-of-the-deployed-function). | | **Create function app in Azure** | Creates a new function app in your subscription in Azure. To learn more, see the section on how to [publish to a new function app in Azure](#publish-to-azure). |
-| **Decrypt Settings** | Decrypts [local settings](#local-settings-file) that have been encrypted by **Azure Functions: Encrypt Settings**. |
+| **Decrypt Settings** | Decrypts [local settings](#local-settings) that have been encrypted by **Azure Functions: Encrypt Settings**. |
| **Delete Function App** | Removes a function app from your subscription in Azure. When there are no other apps in the App Service plan, you're given the option to delete that too. Other resources, like storage accounts and resource groups, aren't deleted. To remove all resources, you should instead [delete the resource group](functions-add-output-binding-storage-queue-vs-code.md#clean-up-resources). Your local project isn't affected. | |**Delete Function** | Removes an existing function from a function app in Azure. Because this deletion doesn't affect your local project, instead consider removing the function locally and then [republishing your project](#republish-project-files). | | **Delete Proxy** | Removes an Azure Functions proxy from your function app in Azure. To learn more about proxies, see [Work with Azure Functions Proxies](functions-proxies.md). |
The Azure Functions extension provides a useful graphical interface in the area
| **Disconnect from Repo** | Removes the [continuous deployment](functions-continuous-deployment.md) connection between a function app in Azure and a source control repository. | | **Download Remote Settings** | Downloads settings from the chosen function app in Azure into your local.settings.json file. If the local file is encrypted, it's decrypted, updated, and encrypted again. If there are settings that have conflicting values in the two locations, you're prompted to choose how to proceed. Be sure to save changes to your local.settings.json file before you run this command. | | **Edit settings** | Changes the value of an existing function app setting in Azure. This command doesn't affect settings in your local.settings.json file. |
-| **Encrypt settings** | Encrypts individual items in the `Values` array in the [local settings](#local-settings-file). In this file, `IsEncrypted` is also set to `true`, which specifies that the local runtime will decrypt settings before using them. Encrypt local settings to reduce the risk of leaking valuable information. In Azure, application settings are always stored encrypted. |
+| **Encrypt settings** | Encrypts individual items in the `Values` array in the [local settings](#local-settings). In this file, `IsEncrypted` is also set to `true`, which specifies that the local runtime will decrypt settings before using them. Encrypt local settings to reduce the risk of leaking valuable information. In Azure, application settings are always stored encrypted. |
| **Execute Function Now** | Manually starts a function using admin APIs. This command is used for testing, both locally during debugging and against functions running in Azure. When triggering a function in Azure, the extension first automatically obtains an admin key, which it uses to call the remote admin APIs that start functions in Azure. The body of the message sent to the API depends on the type of trigger. Timer triggers don't require you to pass any data. | | **Initialize Project for Use with VS Code** | Adds the required Visual Studio Code project files to an existing Functions project. Use this command to work with a project that you created by using Core Tools. | | **Install or Update Azure Functions Core Tools** | Installs or updates [Azure Functions Core Tools], which is used to run functions locally. |
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-develop-vs.md
After you create an Azure Functions project, the project template creates a C# p
* **host.json**: Lets you configure the Functions host. These settings apply both when running locally and in Azure. For more information, see [host.json reference](functions-host-json.md).
-* **local.settings.json**: Maintains settings used when running functions locally. These settings aren't used when running in Azure. For more information, see [Local settings file](#local-settings-file).
+* **local.settings.json**: Maintains settings used when running functions locally. These settings aren't used when running in Azure. For more information, see [Local settings file](#local-settings).
>[!IMPORTANT] >Because the local.settings.json file can contain secrets, you must exclude it from your project source control. Ensure the **Copy to Output Directory** setting for this file is set to **Copy if newer**.
In C# class library functions, the bindings used by the function are defined by
![Create a Queue storage trigger function](./media/functions-develop-vs/functions-vstools-create-queuetrigger.png)
- This trigger example uses a connection string with a key named `QueueStorage`. Define this connection string setting in the [local.settings.json file](functions-run-local.md#local-settings-file).
+ This trigger example uses a connection string with a key named `QueueStorage`. Define this connection string setting in the [local.settings.json file](functions-develop-local.md#local-settings-file).
4. Examine the newly added class. You see a static `Run()` method that's attributed with the `FunctionName` attribute. This attribute indicates that the method is the entry point for the function.
As with triggers, input and output bindings are added to your function as bindin
For more information, see [C# class library with Visual Studio](./functions-bindings-register.md#local-csharp). Find the binding-specific NuGet package requirements in the reference article for the binding. For example, find package requirements for the Event Hubs trigger in the [Event Hubs binding reference article](functions-bindings-event-hubs.md).
-3. If there are app settings that the binding needs, add them to the `Values` collection in the [local setting file](functions-run-local.md#local-settings-file).
+3. If there are app settings that the binding needs, add them to the `Values` collection in the [local setting file](functions-develop-local.md#local-settings-file).
The function uses these values when it runs locally. When the function runs in the function app in Azure, it uses the [function app settings](#function-app-settings).
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-dotnet-class-library.md
When running on Linux in a Premium or dedicated (App Service) plan, you pin your
In Visual Studio, the **Azure Functions** project template creates a C# class library project that contains the following files: * [host.json](functions-host-json.md) - stores configuration settings that affect all functions in the project when running locally or in Azure.
-* [local.settings.json](functions-run-local.md#local-settings-file) - stores app settings and connection strings that are used when running locally. This file contains secrets and isn't published to your function app in Azure. Instead, [add app settings to your function app](functions-develop-vs.md#function-app-settings).
+* [local.settings.json](functions-develop-local.md#local-settings-file) - stores app settings and connection strings that are used when running locally. This file contains secrets and isn't published to your function app in Azure. Instead, [add app settings to your function app](functions-develop-vs.md#function-app-settings).
When you build the project, a folder structure that looks like the following example is generated in the build output directory:
namespace functionapp0915
In this example, the custom metric data gets aggregated by the host before being sent to the customMetrics table. To learn more, see the [GetMetric](../azure-monitor/app/api-custom-events-metrics.md#getmetric) documentation in Application Insights.
-When running locally, you must add the `APPINSIGHTS_INSTRUMENTATIONKEY` setting, with the Application Insights key, to the [local.settings.json](functions-run-local.md#local-settings-file) file.
+When running locally, you must add the `APPINSIGHTS_INSTRUMENTATIONKEY` setting, with the Application Insights key, to the [local.settings.json](functions-develop-local.md#local-settings-file) file.
# [v1.x](#tab/v1)
azure-functions Functions Host Json V1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-host-json-v1.md
The *host.json* metadata file contains global configuration options that affect
Other function app configuration options are managed in your [app settings](functions-app-settings.md).
-Some host.json settings are only used when running locally in the [local.settings.json](functions-run-local.md#local-settings-file) file.
+Some host.json settings are only used when running locally in the [local.settings.json](functions-develop-local.md#local-settings-file) file.
## Sample host.json file
azure-functions Functions Host Json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-host-json.md
The *host.json* metadata file contains global configuration options that affect
> [!NOTE] > This article is for Azure Functions 2.x and later versions. For a reference of host.json in Functions 1.x, see [host.json reference for Azure Functions 1.x](functions-host-json-v1.md).
-Other function app configuration options are managed in your [app settings](functions-app-settings.md) (for deployed apps) or your [local.settings.json](functions-run-local.md#local-settings-file) file (for local development).
+Other function app configuration options are managed in your [app settings](functions-app-settings.md) (for deployed apps) or your [local.settings.json](functions-develop-local.md#local-settings-file) file (for local development).
Configurations in host.json related to bindings are applied equally to each function in the function app.
azure-functions Functions How To Use Azure Function App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md
Update-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOUR
[!INCLUDE [functions-environment-variables](../../includes/functions-environment-variables.md)]
-When you develop a function app locally, you must maintain local copies of these values in the local.settings.json project file. To learn more, see [Local settings file](functions-run-local.md#local-settings-file).
+When you develop a function app locally, you must maintain local copies of these values in the local.settings.json project file. To learn more, see [Local settings file](functions-develop-local.md#local-settings-file).
## Hosting plan type
azure-functions Functions Kubernetes Keda https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-kubernetes-keda.md
Kubernetes-based Functions provides the Functions runtime in a [Docker container
## Managing KEDA and functions in Kubernetes
-To run Functions on your Kubernetes cluster, you must install the KEDA component. You can install this component using [Azure Functions Core Tools](functions-run-local.md).
+To run Functions on your Kubernetes cluster, you must install the KEDA component. You can install this component in one of the following ways:
-### Installing with Helm
++ Azure Functions Core Tools: using the [`func kubernetes install` command](functions-core-tools-reference.md#func-kubernetes-install).
-There are various ways to install KEDA in any Kubernetes cluster including Helm. Deployment options are documented on the [KEDA site](https://keda.sh/docs/deploy/).
++ Helm: you can install KEDA in any Kubernetes cluster by using Helm. Deployment options are documented on the [KEDA site](https://keda.sh/docs/deploy/). ## Deploying a function app to Kubernetes
-You can deploy any function app to a Kubernetes cluster running KEDA. Since your functions run in a Docker container, your project needs a `Dockerfile`. If it doesn't already have one, you can add a Dockerfile by running the following command at the root of your Functions project:
+You can deploy any function app to a Kubernetes cluster running KEDA. Since your functions run in a Docker container, your project needs a Dockerfile. You can create a Dockerfile by using the [`--docker` option][func init] when calling `func init` to create the project. If you forgot to do this, you can always call `func init` again from the root of your Functions project, this time using the [`--docker-only` option][func init], as shown in the following example.
-> [!NOTE]
-> The Core Tools automatically create the Dockerfile for Azure Functions written in .NET, Node, Python, or PowerShell. For function apps written in Java, the Dockerfile must be created manually. Use the Azure Functions [image list](https://github.com/Azure/azure-functions-docker) to find the correct image to base the Azure Function.
-
-```cli
+```command
func init --docker-only ```
-To build an image and deploy your functions to Kubernetes, run the following command:
+To learn more about Dockerfile generation, see the [`func init`][func init] reference.
-> [!NOTE]
-> The Core Tools will leverage the docker CLI to build and publish the image. Be sure to have docker installed already and connected to your account with `docker login`.
+To build an image and deploy your functions to Kubernetes, run the following command:
-```cli
+```command
func kubernetes deploy --name <name-of-function-deployment> --registry <container-registry-username> ```
-> Replace `<name-of-function-deployment>` with the name of your function app.
+In this example, replace `<name-of-function-deployment>` with the name of your function app.
+
+The deploy command does the following:
-The deploy command executes a series of actions:
1. The Dockerfile created earlier is used to build a local image for the function app.
-2. The local image is tagged and pushed to the container registry where the user is logged in.
-3. A manifest is created and applied to the cluster that defines a Kubernetes `Deployment` resource, a `ScaledObject` resource, and `Secrets`, which includes environment variables imported from your `local.settings.json` file.
+1. The local image is tagged and pushed to the container registry where the user is logged in.
+1. A manifest is created and applied to the cluster that defines a Kubernetes `Deployment` resource, a `ScaledObject` resource, and `Secrets`, which includes environment variables imported from your `local.settings.json` file.
+
+To learn more, see the [`func kubernetes deploy` command](functions-core-tools-reference.md#func-kubernetes-deploy).
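+
+As a quick check after deployment, you can list the resources that the command creates by using standard `kubectl` commands. This sketch assumes the resources were created in the default namespace:
+
+```command
+# List the Deployment, ScaledObject, and Secrets created by the deploy command.
+kubectl get deployments
+kubectl get scaledobjects
+kubectl get secrets
+```
+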
### Deploying a function app from a private registry
The above flow works for private registries as well. If you are pulling your co
After deploying, you can remove a function by removing the associated `Deployment`, `ScaledObject`, and `Secrets` resources that were created.
-```cli
+```command
kubectl delete deploy <name-of-function-deployment>
kubectl delete ScaledObject <name-of-function-deployment>
kubectl delete secret <name-of-function-deployment>
## Uninstalling KEDA from Kubernetes
-Steps to uninstall KEDA are documented [on the KEDA site](https://keda.sh/docs/deploy/).
+You can remove KEDA from your cluster in one of the following ways:
+++ Azure Functions Core Tools: using the [`func kubernetes remove` command](functions-core-tools-reference.md#func-kubernetes-remove).+++ Helm: see the uninstall steps [on the KEDA site](https://keda.sh/docs/deploy/). ## Supported triggers in KEDA
For more information, see the following resources:
* [Create a function using a custom image](functions-create-function-linux-custom-image.md) * [Code and test Azure Functions locally](functions-develop-local.md) * [How the Azure Function Consumption plan works](functions-scale.md)+
+[func init]: functions-core-tools-reference.md#func-init
azure-functions Functions Machine Learning Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-machine-learning-tensorflow.md
In Azure Functions, a function project is a container for one or more individual
func init --worker-runtime python ```
- After initialization, the *start* folder contains various files for the project, including configurations files named [local.settings.json](functions-run-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
+ After initialization, the *start* folder contains various files for the project, including configuration files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
> [!TIP] > Because a function project is tied to a specific runtime, all the functions in the project must be written with the same language.
azure-functions Functions Reference Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-java.md
You can put more than one function in a project. Avoid putting your functions in
Use the Java annotations included in the [com.microsoft.azure.functions.annotation.*](/java/api/com.microsoft.azure.functions.annotation) package to bind input and outputs to your methods. For more information, see the [Java reference docs](/java/api/com.microsoft.azure.functions.annotation). > [!IMPORTANT]
-> You must configure an Azure Storage account in your [local.settings.json](./functions-run-local.md#local-settings-file) to run Azure Blob storage, Azure Queue storage, or Azure Table storage triggers locally.
+> You must configure an Azure Storage account in your [local.settings.json](./functions-develop-local.md#local-settings-file) to run Azure Blob storage, Azure Queue storage, or Azure Table storage triggers locally.
Example:
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-node.md
In this example, it is important to note that although an object is being export
When started with the `--inspect` parameter, a Node.js process listens for a debugging client on the specified port. In Azure Functions 2.x, you can specify arguments to pass into the Node.js process that runs your code by adding the environment variable or App Setting `languageWorkers:node:arguments = <args>`.
-To debug locally, add `"languageWorkers:node:arguments": "--inspect=5858"` under `Values` in your [local.settings.json](./functions-run-local.md#local-settings-file) file and attach a debugger to port 5858.
+To debug locally, add `"languageWorkers:node:arguments": "--inspect=5858"` under `Values` in your [local.settings.json](./functions-develop-local.md#local-settings-file) file and attach a debugger to port 5858.
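+
+As a minimal sketch, the relevant part of a local.settings.json file with this debugging argument might look like the following example (other settings omitted):
+
+```json
+{
+  "IsEncrypted": false,
+  "Values": {
+    "FUNCTIONS_WORKER_RUNTIME": "node",
+    "languageWorkers:node:arguments": "--inspect=5858"
+  }
+}
+```
+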
When debugging using VS Code, the `--inspect` parameter is automatically added using the `port` value in the project's launch.json file.
azure-functions Functions Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-powershell.md
Write-Host $env:WEBSITE_SITE_NAME
[!INCLUDE [Function app settings](../../includes/functions-app-settings.md)]
-When running locally, app settings are read from the [local.settings.json](functions-run-local.md#local-settings-file) project file.
+When running locally, app settings are read from the [local.settings.json](functions-develop-local.md#local-settings-file) project file.
## Concurrency
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-python.md
The recommended folder structure for a Python Functions project looks like the f
``` The main project folder (<project_root>) can contain the following files:
-* *local.settings.json*: Used to store app settings and connection strings when running locally. This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-run-local.md#local-settings-file).
+* *local.settings.json*: Used to store app settings and connection strings when running locally. This file doesn't get published to Azure. To learn more, see the [local.settings.json file](functions-develop-local.md#local-settings-file).
* *requirements.txt*: Contains the list of Python packages the system installs when publishing to Azure. * *host.json*: Contains global configuration options that affect all functions in a function app. This file does get published to Azure. Not all options are supported when running locally. To learn more, see [host.json](functions-host-json.md). * *.vscode/*: (Optional) Contains stored VS Code configuration. To learn more, see [VS Code settings](https://code.visualstudio.com/docs/getstarted/settings).
def main(req: func.HttpRequest) -> func.HttpResponse:
logging.info(f'My app setting value:{my_app_setting_value}') ```
-For local development, application settings are [maintained in the local.settings.json file](functions-run-local.md#local-settings-file).
+For local development, application settings are [maintained in the local.settings.json file](functions-develop-local.md#local-settings-file).
## Python version
You can use a Python worker extension library in your Python functions by follow
1. Add the extension package in the requirements.txt file for your project. 1. Install the library into your app. 1. Add the application setting `PYTHON_ENABLE_WORKER_EXTENSIONS`:
- + Locally: add `"PYTHON_ENABLE_WORKER_EXTENSIONS": "1"` in the `Values` section of your [local.settings.json file](functions-run-local.md?tabs=python#local-settings-file)
+ + Locally: add `"PYTHON_ENABLE_WORKER_EXTENSIONS": "1"` in the `Values` section of your [local.settings.json file](functions-develop-local.md#local-settings-file)
+ Azure: add `PYTHON_ENABLE_WORKER_EXTENSIONS=1` to your [app settings](functions-how-to-use-azure-function-app-settings.md#settings). 1. Import the extension module into your function trigger. 1. Configure the extension instance, if needed. Configuration requirements should be called-out in the extension's documentation.
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference.md
Having issues with errors coming from the bindings? Review the [Azure Functions
Your function project references connection information by name from its configuration provider. It does not directly accept the connection details, allowing them to be changed across environments. For example, a trigger definition might include a `connection` property. This might refer to a connection string, but you cannot set the connection string directly in a `function.json`. Instead, you would set `connection` to the name of an environment variable that contains the connection string.
-The default configuration provider uses environment variables. These might be set by [Application Settings](./functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings) when running in the Azure Functions service, or from the [local settings file](functions-run-local.md#local-settings-file) when developing locally.
+The default configuration provider uses environment variables. These might be set by [Application Settings](./functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings) when running in the Azure Functions service, or from the [local settings file](functions-develop-local.md#local-settings-file) when developing locally.
### Connection values
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-run-local.md
Developing functions on your local computer and publishing them to Azure using C
> * [Install the Core Tools and dependencies.](#v2) > * [Create a function app project from a language-specific template.](#create-a-local-functions-project) > * [Register trigger and binding extensions.](#register-extensions)
-> * [Define Storage and other connections.](#local-settings-file)
+> * [Define Storage and other connections.](#local-settings)
> * [Create a function from a trigger and language-specific template.](#create-func) > * [Run the function locally.](#start) > * [Publish the project to Azure.](#publish)
The following steps use [APT](https://wiki.debian.org/Apt) to install Core Tools
+### Version 1.x
+
+If you need to install version 1.x of the Core Tools, which runs only on Windows, see the [GitHub repository](https://github.com/Azure/azure-functions-core-tools/blob/v1.x/README.md#installing) for more information.
+ ## Create a local Functions project
-A Functions project directory contains the files [host.json](functions-host-json.md) and [local.settings.json](#local-settings-file), along with subfolders that contain the code for individual functions. This directory is the equivalent of a function app in Azure. To learn more about the Functions folder structure, see the [Azure Functions developers guide](functions-reference.md#folder-structure).
+A Functions project directory contains the following files and folders, regardless of language:
+
+| File name | Description |
+| --- | --- |
+| host.json | To learn more, see the [host.json reference](functions-host-json.md). |
+| local.settings.json | Settings used by Core Tools when running locally, including app settings. To learn more, see [local settings](#local-settings). |
+| .gitignore | Prevents the local.settings.json file from being accidentally published to a Git repository. To learn more, see [local settings](#local-settings). |
+| .vscode\extensions.json | Settings file used when opening the project folder in Visual Studio Code. |
-Version 3.x/2.x requires you to select a default language for your project when it is initialized. In version 3.x/2.x, all functions added use default language templates. In version 1.x, you specify the language each time you create a function.
+To learn more about the Functions project folder, see the [Azure Functions developers guide](functions-reference.md#folder-structure).
In the terminal window or from a command prompt, run the following command to create the project and local Git repository:
In the terminal window or from a command prompt, run the following command to cr
func init MyFunctionProj ```
->[!IMPORTANT]
-> Java uses a Maven archetype to create the local Functions project, along with your first HTTP triggered function. Use the following command to create your Java project: `mvn archetype:generate -DarchetypeGroupId=com.microsoft.azure -DarchetypeArtifactId=azure-functions-archetype`. For an example using the Maven archetype, see the [Command line quickstart](./create-first-function-cli-java.md).
+This example creates a Functions project in a new `MyFunctionProj` folder. You are prompted to choose a default language for your project.
-When you provide a project name, a new folder with that name is created and initialized. Otherwise, the current folder is initialized.
-In version 3.x/2.x, when you run the command you must choose a runtime for your project.
+The following considerations apply to project initialization:
-<pre>
-Select a worker runtime:
-dotnet
-node
-python
-powershell
-</pre>
++ If you don't provide the `--worker-runtime` option in the command, you're prompted to choose your language. For more information, see the [func init reference](functions-core-tools-reference.md#func-init).
-Use the up/down arrow keys to choose a language, then press Enter. If you plan to develop JavaScript or TypeScript functions, choose **node**, and then select the language. TypeScript has [some additional requirements](functions-reference-node.md#typescript).
++ When you don't provide a project name, the current folder is initialized.
-The output looks like the following example for a JavaScript project:
++ If you plan to publish your project to a custom Linux container, use the `--docker` option to make sure that a Dockerfile is generated for your project. To learn more, see [Create a function on Linux using a custom image](functions-create-function-linux-custom-image.md).
-<pre>
-Select a worker runtime: node
-Writing .gitignore
-Writing host.json
-Writing local.settings.json
-Writing C:\myfunctions\myMyFunctionProj\.vscode\extensions.json
-Initialized empty Git repository in C:/myfunctions/myMyFunctionProj/.git/
-</pre>
+Certain languages may have additional considerations:
-`func init` supports the following options, which are version 3.x/2.x-only, unless otherwise noted:
-
-| Option | Description |
-| | -- |
-| **`--csx`** | Creates .NET functions as C# script, which is the version 1.x behavior. Valid only with `--worker-runtime dotnet`. |
-| **`--docker`** | Creates a Dockerfile for a container using a base image that is based on the chosen `--worker-runtime`. Use this option when you plan to publish to a custom Linux container. |
-| **`--docker-only`** | Adds a Dockerfile to an existing project. Prompts for the worker-runtime if not specified or set in local.settings.json. Use this option when you plan to publish an existing project to a custom Linux container. |
-| **`--force`** | Initialize the project even when there are existing files in the project. This setting overwrites existing files with the same name. Other files in the project folder aren't affected. |
-| **`--language`** | Initializes a language specific project. Currently supported when `--worker-runtime` set to `node`. Options are `typescript` and `javascript`. You can also use `--worker-runtime javascript` or `--worker-runtime typescript`. |
-| **`--managed-dependencies`** | Installs managed dependencies. Currently, only the PowerShell worker runtime supports this functionality. |
-| **`--source-control`** | Controls whether a git repository is created. By default, a repository isn't created. When `true`, a repository is created. |
-| **`--worker-runtime`** | Sets the language runtime for the project. Supported values are: `csharp`, `dotnet`, `javascript`,`node` (JavaScript), `powershell`, `python`, and `typescript`. For Java, use [Maven](functions-reference-java.md#create-java-functions).When not set, you're prompted to choose your runtime during initialization. |
-|
-> [!IMPORTANT]
-> By default, version 2.x and later versions of the Core Tools create function app projects for the .NET runtime as [C# class projects](functions-dotnet-class-library.md) (.csproj). These C# projects, which can be used with Visual Studio or Visual Studio Code, are compiled during testing and when publishing to Azure. If you instead want to create and work with the same C# script (.csx) files created in version 1.x and in the portal, you must include the `--csx` parameter when you create and deploy functions.
+# [C\#](#tab/csharp)
+++ By default, version 2.x and later versions of the Core Tools create function app projects for the .NET runtime as [C# class projects](functions-dotnet-class-library.md) (.csproj). Version 3.x also supports creating functions that [run on .NET 5.0 in an isolated process](dotnet-isolated-process-guide.md). These C# projects, which can be used with Visual Studio or Visual Studio Code, are compiled during debugging and when publishing to Azure. +++ Use the `--csx` parameter if you want to work locally with C# script (.csx) files. These are the same files you get when you create functions in the Azure portal and when using version 1.x of Core Tools. To learn more, see the [func init reference](functions-core-tools-reference.md#func-init).+
+# [Java](#tab/java)
+++ Java uses a Maven archetype to create the local Functions project, along with your first HTTP triggered function. Instead of using `func init` and `func new`, you should follow the steps in the [Command line quickstart](./create-first-function-cli-java.md). +
+# [JavaScript](#tab/node)
+++ To use a `--worker-runtime` value of `node`, specify the `--language` as `javascript`. +
+# [PowerShell](#tab/powershell)
+
+There are no additional considerations for PowerShell.
+
+# [Python](#tab/python)
+++ You should run all commands, including `func init`, from inside a virtual environment. To learn more, see [Create and activate a virtual environment](create-first-function-cli-python.md#create-venv).+
+# [TypeScript](#tab/ts)
+++ To use a `--worker-runtime` value of `node`, specify the `--language` as `typescript`.+++ See the [TypeScript section in the JavaScript developer reference](functions-reference-node.md#typescript) for `func init` behaviors specific to TypeScript. +
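+
+For example, the following sketch initializes a TypeScript project; the project name `MyFunctionProj` is hypothetical:
+
+```command
+func init MyFunctionProj --worker-runtime node --language typescript
+```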
+
## Register extensions
-With the exception of HTTP and timer triggers, Functions bindings in runtime version 2.x and higher are implemented as extension packages. HTTP bindings and timer triggers don't require extensions.
+Starting with runtime version 2.x, Functions bindings are implemented as .NET extension (NuGet) packages. For compiled C# projects, you simply reference the NuGet extension packages for the specific triggers and bindings you are using. HTTP bindings and timer triggers don't require extensions.
-To reduce incompatibilities between the various extension packages, Functions lets you reference an extension bundle in your host.json project file. If you choose not to use extension bundles, you also need to install .NET Core 2.x SDK locally and maintain an extensions.csproj with your functions project.
+To improve the development experience for non-C# projects, Functions lets you reference a versioned extension bundle in your host.json project file. [Extension bundles](functions-bindings-register.md#extension-bundles) make all extensions available to your app and remove the chance of package compatibility issues between extensions. Extension bundles also remove the requirement to install the .NET Core 2.x SDK and to maintain an extensions.csproj file.
-In version 2.x and beyond of the Azure Functions runtime, you have to explicitly register the extensions for the binding types used in your functions. You can choose to install binding extensions individually, or you can add an extension bundle reference to the host.json project file. Extension bundles removes the chance of having package compatibility issues when using multiple binding types. It is the recommended approach for registering binding extensions. Extension bundles also removes the requirement of installing the .NET Core 2.x SDK.
+Using extension bundles is the recommended approach for functions projects other than compiled C# projects. For these projects, the extension bundle setting is generated in the _host.json_ file during initialization. If this works for you, you can skip this entire section.
### Use extension bundles [!INCLUDE [Register extensions](../../includes/functions-extension-bundles.md)]
-To learn more, see [Register Azure Functions binding extensions](functions-bindings-register.md#extension-bundles). You should add extension bundles to the host.json before you add bindings to the function.json file.
+ When supported by your language, extension bundles should already be enabled after you call `func init`. You should add extension bundles to the host.json before you add bindings to the function.json file. To learn more, see [Register Azure Functions binding extensions](functions-bindings-register.md#extension-bundles).
### Explicitly install extensions
+There may be cases in a non-.NET project where you can't use extension bundles, such as when you need to target a specific version of an extension that isn't in the bundle. In these rare cases, you can use Core Tools to locally install the specific extension packages required by your project. To learn more, see [Explicitly install extensions](functions-bindings-register.md#explicitly-install-extensions).
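+
+As an illustrative sketch only, the following command installs a specific version of the Service Bus extension package into the local project. The package name and version shown are examples; the ones you need depend on your bindings, and this approach also requires the .NET Core SDK installed locally:
+
+```command
+func extensions install --package Microsoft.Azure.WebJobs.Extensions.ServiceBus --version 4.3.0
+```
+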
[!INCLUDE [functions-local-settings-file](../../includes/functions-local-settings-file.md)]
-By default, these settings are not migrated automatically when the project is published to Azure. Use the `--publish-local-settings` switch [when you publish](#publish) to make sure these settings are added to the function app in Azure. Note that values in **ConnectionStrings** are never published.
+By default, these settings are not migrated automatically when the project is published to Azure. Use the [`--publish-local-settings` option][func azure functionapp publish] when you publish to make sure these settings are added to the function app in Azure. Values in the `ConnectionStrings` section are never published.
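+
+For example, the following sketch publishes the project and also pushes your local settings to the function app in Azure, prompting before overwriting an existing setting; replace `<FunctionAppName>` with the name of your function app:
+
+```command
+func azure functionapp publish <FunctionAppName> --publish-local-settings
+```
+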
The function app settings values can also be read in your code as environment variables. For more information, see the Environment variables section of these language-specific reference topics:
When no valid storage connection string is set for [`AzureWebJobsStorage`] and t
### Get your storage connection strings
-Even when using the Microsoft Azure Storage Emulator for development, you may want to test with an actual storage connection. Assuming you have already [created a storage account](../storage/common/storage-account-create.md), you can get a valid storage connection string in one of the following ways:
+Even when using the Microsoft Azure Storage Emulator for development, you may want to run locally with an actual storage connection. Assuming you have already [created a storage account](../storage/common/storage-account-create.md), you can get a valid storage connection string in one of several ways:
-- From the [Azure portal], search for and select **Storage accounts**.
- ![Select Storage accounts from Azure portal](./media/functions-run-local/select-storage-accounts.png)
+# [Portal](#tab/portal)
+
+1. From the [Azure portal], search for and select **Storage accounts**.
+
+ ![Select Storage accounts from Azure portal](./media/functions-run-local/select-storage-accounts.png)
- Select your storage account, select **Access keys** in **Settings**, then copy one of the **Connection string** values.
- ![Copy connection string from Azure portal](./media/functions-run-local/copy-storage-connection-portal.png)
+1. Select your storage account, select **Access keys** in **Settings**, then copy one of the **Connection string** values.
-- Use [Azure Storage Explorer](https://storageexplorer.com/) to connect to your Azure account. In the **Explorer**, expand your subscription, expand **Storage Accounts**, select your storage account, and copy the primary or secondary connection string.
+ ![Copy connection string from Azure portal](./media/functions-run-local/copy-storage-connection-portal.png)
- ![Copy connection string from Storage Explorer](./media/functions-run-local/storage-explorer.png)
+# [Core Tools](#tab/azurecli)
-+ Use Core Tools from the project root to download the connection string from Azure with one of the following commands:
+From the project root, use one of the following commands to download the connection string from Azure:
+ Download all settings from an existing function app:
Even when using the Microsoft Azure Storage Emulator for development, you may wa
func azure storage fetch-connection-string <StorageAccountName> ```
- When you aren't already signed in to Azure, you're prompted to do so. These commands overwrite any existing settings in the local.settings.json file.
+ When you aren't already signed in to Azure, you're prompted to do so. These commands overwrite any existing settings in the local.settings.json file. To learn more, see the [`func azure functionapp fetch-app-settings`](functions-core-tools-reference.md#func-azure-functionapp-fetch-app-settings) and [`func azure storage fetch-connection-string`](functions-core-tools-reference.md#func-azure-storage-fetch-connection-string) commands.
-## <a name="create-func"></a>Create a function
+# [Storage Explorer](#tab/storageexplorer)
-To create a function, run the following command:
+1. Run [Azure Storage Explorer](https://storageexplorer.com/).
-```
-func new
-```
+1. In the **Explorer**, expand your subscription, then expand **Storage Accounts**.
-In version 3.x/2.x, when you run `func new` you are prompted to choose a template in the default language of your function app, then you are also prompted to choose a name for your function. In version 1.x, you are also prompted to choose the language.
+1. Select your storage account and copy the primary or secondary connection string.
-<pre>
-Select a language: Select a template:
-Blob trigger
-Cosmos DB trigger
-Event Grid trigger
-HTTP trigger
-Queue trigger
-SendGrid
-Service Bus Queue trigger
-Service Bus Topic trigger
-Timer trigger
-</pre>
+ ![Copy connection string from Storage Explorer](./media/functions-run-local/storage-explorer.png)
-Function code is generated in a subfolder with the provided function name, as you can see in the following queue trigger output:
+
-<pre>
-Select a language: Select a template: Queue trigger
-Function name: [QueueTriggerJS] MyQueueTrigger
-Writing C:\myfunctions\myMyFunctionProj\MyQueueTrigger\index.js
-Writing C:\myfunctions\myMyFunctionProj\MyQueueTrigger\readme.md
-Writing C:\myfunctions\myMyFunctionProj\MyQueueTrigger\sample.dat
-Writing C:\myfunctions\myMyFunctionProj\MyQueueTrigger\function.json
-</pre>
+## <a name="create-func"></a>Create a function
-You can also specify these options in the command using the following arguments:
+To create a function in an existing project, run the following command:
-| Argument | Description |
-| | -- |
-| **`--csx`** | (Version 2.x and later versions.) Generates the same C# script (.csx) templates used in version 1.x and in the portal. |
-| **`--language`**, **`-l`**| The template programming language, such as C#, F#, or JavaScript. This option is required in version 1.x. In version 2.x and later versions, do not use this option or choose a language that matches the worker runtime. |
-| **`--name`**, **`-n`** | The function name. |
-| **`--template`**, **`-t`** | Use the `func templates list` command to see the complete list of available templates for each supported language. |
+```
+func new
+```
+In version 3.x/2.x, when you run `func new` you are prompted to choose a template in the default language of your function app. Next, you're prompted to choose a name for your function. In version 1.x, you are also required to choose the language.
-For example, to create a JavaScript HTTP trigger in a single command, run:
+You can also specify the function name and template in the `func new` command. The following example uses the `--template` option to create an HTTP trigger named `MyHttpTrigger`:
``` func new --template "Http Trigger" --name MyHttpTrigger ```
-To create a queue-triggered function in a single command, run:
+This example creates a Queue Storage trigger named `MyQueueTrigger`:
```
-func new --template "Queue Trigger" --name QueueTriggerJS
+func new --template "Queue Trigger" --name MyQueueTrigger
```
+To learn more, see the [`func new` command](functions-core-tools-reference.md#func-new).
+ ## <a name="start"></a>Run functions locally
-To run a Functions project, run the Functions host. The host enables triggers for all functions in the project. The start command varies, depending on your project language.
+To run a Functions project, you run the Functions host from the root directory of your project. The host enables triggers for all functions in the project. The [`start` command](functions-core-tools-reference.md#func-start) varies depending on your project language.
# [C\#](#tab/csharp)
mvn azure-functions:run
func start ``` +
+# [PowerShell](#tab/powershell)
+
+```
+func start
+```
+ # [Python](#tab/python) ```
npm start
>[!NOTE]
-> Version 1.x of the Functions runtime instead requires `func host start`.
-
-`func start` supports the following options:
-
-| Option | Description |
-| | -- |
-| **`--no-build`** | Do no build current project before running. For dotnet projects only. Default is set to false. Not supported for version 1.x. |
-| **`--cors-credentials`** | Allow cross-origin authenticated requests (i.e. cookies and the Authentication header) Not supported for version 1.x. |
-| **`--cors`** | A comma-separated list of CORS origins, with no spaces. |
-| **`--language-worker`** | Arguments to configure the language worker. For example, you may enable debugging for language worker by providing [debug port and other required arguments](https://github.com/Azure/azure-functions-core-tools/wiki/Enable-Debugging-for-language-workers). Not supported for version 1.x. |
-| **`--cert`** | The path to a .pfx file that contains a private key. Only used with `--useHttps`. Not supported for version 1.x. |
-| **`--password`** | Either the password or a file that contains the password for a .pfx file. Only used with `--cert`. Not supported for version 1.x. |
-| **`--port`**, **`-p`** | The local port to listen on. Default value: 7071. |
-| **`--pause-on-error`** | Pause for additional input before exiting the process. Used only when launching Core Tools from an integrated development environment (IDE).|
-| **`--script-root`**, **`--prefix`** | Used to specify the path to the root of the function app that is to be run or deployed. This is used for compiled projects that generate project files into a subfolder. For example, when you build a C# class library project, the host.json, local.settings.json, and function.json files are generated in a *root* subfolder with a path like `MyProject/bin/Debug/netstandard2.0`. In this case, set the prefix as `--script-root MyProject/bin/Debug/netstandard2.0`. This is the root of the function app when running in Azure. |
-| **`--timeout`**, **`-t`** | The timeout for the Functions host to start, in seconds. Default: 20 seconds.|
-| **`--useHttps`** | Bind to `https://localhost:{port}` rather than to `http://localhost:{port}`. By default, this option creates a trusted certificate on your computer.|
-
-When the Functions host starts, it outputs the URL of HTTP-triggered functions:
+> Version 1.x of the Functions runtime instead requires `func host start`. To learn more, see [Azure Functions Core Tools reference](functions-core-tools-reference.md?tabs=v1#func-start).
+
+When the Functions host starts, it outputs the URL of HTTP-triggered functions, like in the following example:
<pre> Found the following functions:
curl --request POST http://localhost:7071/api/MyHttpTrigger --data "{'name':'Azu
```
-You can make GET requests from a browser passing data in the query string. For all other HTTP methods, you must use cURL, Fiddler, Postman, or a similar HTTP testing tool.
+You can make GET requests from a browser passing data in the query string. For all other HTTP methods, you must use cURL, Fiddler, Postman, or a similar HTTP testing tool that supports POST requests.
#### Non-HTTP triggered functions
-For all kinds of functions other than HTTP triggers and webhooks and Event Grid triggers, you can test your functions locally by calling an administration endpoint. Calling this endpoint with an HTTP POST request on the local server triggers the function.
+For all functions other than HTTP and Event Grid triggers, you can test your functions locally using REST by calling a special endpoint called an _administration endpoint_. Calling this endpoint with an HTTP POST request on the local server triggers the function.
To test Event Grid triggered functions locally, see [Local testing with viewer web app](functions-bindings-event-grid-trigger.md#local-testing-with-viewer-web-app).
curl --request POST -H "Content-Type:application/json" --data "{'input':'sample
```
-#### Using the `func run` command (version 1.x only)
+When you call an administrator endpoint on your function app in Azure, you must provide an access key. To learn more, see [Function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys).
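+
+For example, a sketch of the same kind of request against a deployed app might look like the following, where `<APP_NAME>` and `<MASTER_KEY>` are placeholders and `QueueTrigger` is a hypothetical function name:
+
+```command
+curl --request POST -H "x-functions-key: <MASTER_KEY>" -H "Content-Type:application/json" --data "{'input':'sample queue data'}" https://<APP_NAME>.azurewebsites.net/admin/functions/QueueTrigger
+```
+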
->[!IMPORTANT]
-> The `func run` command is only supported in version 1.x of the tools. For more information, see the topic [How to target Azure Functions runtime versions](set-runtime-version.md).
+## <a name="publish"></a>Publish to Azure
-In version 1.x, you can also invoke a function directly by using `func run <FunctionName>` and provide input data for the function. This command is similar to running a function using the **Test** tab in the Azure portal.
+The Azure Functions Core Tools supports three types of deployment:
-`func run` supports the following options:
+| Deployment type | Command | Description |
+| -- | -- | -- |
+| Project files | [`func azure functionapp publish`](functions-core-tools-reference.md#func-azure-functionapp-publish) | Deploys function project files directly to your function app using [zip deployment](functions-deployment-technologies.md#zip-deploy). |
+| Custom container | `func deploy` | Deploys your project to a Linux function app as a custom Docker container. |
+| Kubernetes cluster | `func kubernetes deploy` | Deploys your Linux function app as a custom Docker container to a Kubernetes cluster. |
-| Option | Description |
-| | -- |
-| **`--content`**, **`-c`** | Inline content. |
-| **`--debug`**, **`-d`** | Attach a debugger to the host process before running the function.|
-| **`--timeout`**, **`-t`** | Time to wait (in seconds) until the local Functions host is ready.|
-| **`--file`**, **`-f`** | The file name to use as content.|
-| **`--no-interactive`** | Does not prompt for input. Useful for automation scenarios.|
+### Before you publish
-For example, to call an HTTP-triggered function and pass content body, run the following command:
+>[!IMPORTANT]
+>You must have the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps) installed locally to be able to publish to Azure from Core Tools.
-```
-func run MyHttpTrigger -c '{\"name\": \"Azure\"}'
-```
+A project folder may contain language-specific files and directories that shouldn't be published. Excluded items are listed in a .funcignore file in the root project folder.
-## <a name="publish"></a>Publish to Azure
+You must have already [created a function app in your Azure subscription](functions-cli-samples.md#create), to which you'll deploy your code. Projects that require compilation should be built so that the binaries can be deployed.
-The Azure Functions Core Tools supports two types of deployment: deploying function project files directly to your function app via [Zip Deploy](functions-deployment-technologies.md#zip-deploy) and [deploying a custom Docker container](functions-deployment-technologies.md#docker-container). You must have already [created a function app in your Azure subscription](functions-cli-samples.md#create), to which you'll deploy your code. Projects that require compilation should be built so that the binaries can be deployed.
+To learn how to create a function app from the command prompt or terminal window using the Azure CLI or Azure PowerShell, see [Create a Function App for serverless execution](./scripts/functions-cli-create-serverless.md).
>[!IMPORTANT]
->You must have the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps) installed locally to be able to publish to Azure from Core Tools.
+> When you create a function app in the Azure portal, it uses version 3.x of the Functions runtime by default. To make the function app use version 1.x of the runtime, follow the instructions in [Run on version 1.x](functions-versions.md#creating-1x-apps).
+> You can't change the runtime version for a function app that has existing functions.
-A project folder may contain language-specific files and directories that shouldn't be published. Excluded items are listed in a .funcignore file in the root project folder.
### <a name="project-file-deployment"></a>Deploy project files
To publish your local code to a function app in Azure, use the `publish` command:

```
func azure functionapp publish <FunctionAppName>
```
->[!IMPORTANT]
-> Java uses Maven to publish your local project to Azure. Use the following command to publish to Azure: `mvn azure-functions:deploy`. Azure resources are created during initial deployment.
+The following considerations apply to this kind of deployment:
-This command publishes to an existing function app in Azure. You'll get an error if you try to publish to a `<FunctionAppName>` that doesn't exist in your subscription. To learn how to create a function app from the command prompt or terminal window using the Azure CLI or Azure PowerShell, see [Create a Function App for serverless execution](./scripts/functions-cli-create-serverless.md). By default, this command uses [remote build](functions-deployment-technologies.md#remote-build) and deploys your app to [run from the deployment package](run-functions-from-deployment-package.md). To disable this recommended deployment mode, use the `--nozip` option.
++ Publishing overwrites existing files in the function app.
->[!IMPORTANT]
-> When you create a function app in the Azure portal, it uses version 3.x of the Function runtime by default. To make the function app use version 1.x of the runtime, follow the instructions in [Run on version 1.x](functions-versions.md#creating-1x-apps).
-> You can't change the runtime version for a function app that has existing functions.
++ Use the [`--publish-local-settings` option][func azure functionapp publish] to automatically create app settings in your function app based on values in the local.settings.json file.
-The following publish options apply for all versions:
++ A [remote build](functions-deployment-technologies.md#remote-build) is performed on compiled projects. This can be controlled by using the [`--no-build` option][func azure functionapp publish].
-| Option | Description |
-| | -- |
-| **`--publish-local-settings -i`** | Publish settings in local.settings.json to Azure, prompting to overwrite if the setting already exists. If you are using the Microsoft Azure Storage Emulator, first change the app setting to an [actual storage connection](#get-your-storage-connection-strings). |
-| **`--overwrite-settings -y`** | Suppress the prompt to overwrite app settings when `--publish-local-settings -i` is used.|
++ Your project is deployed such that it [runs from the deployment package](run-functions-from-deployment-package.md). To disable this recommended deployment mode, use the [`--nozip` option][func azure functionapp publish].
-The following publish options are supported only for version 2.x and later versions:
++ Java uses Maven to publish your local project to Azure instead of Core Tools. Use the following command to publish to Azure: `mvn azure-functions:deploy`. Azure resources are created during initial deployment.
-| Option | Description |
-| | -- |
-| **`--publish-settings-only`**, **`-o`** | Only publish settings and skip the content. Default is prompt. |
-|**`--list-ignored-files`** | Displays a list of files that are ignored during publishing, which is based on the .funcignore file. |
-| **`--list-included-files`** | Displays a list of files that are published, which is based on the .funcignore file. |
-| **`--nozip`** | Turns the default `Run-From-Package` mode off. |
-| **`--build-native-deps`** | Skips generating .wheels folder when publishing Python function apps. |
-| **`--build`**, **`-b`** | Performs build action when deploying to a Linux function app. Accepts: `remote` and `local`. |
-| **`--additional-packages`** | List of packages to install when building native dependencies. For example: `python3-dev libevent-dev`. |
-| **`--force`** | Ignore pre-publishing verification in certain scenarios. |
-| **`--csx`** | Publish a C# script (.csx) project. |
-| **`--no-build`** | Project isn't built during publishing. For Python, `pip install` isn't performed. |
-| **`--dotnet-cli-params`** | When publishing compiled C# (.csproj) functions, the core tools calls 'dotnet build --output bin/publish'. Any parameters passed to this will be appended to the command line. |
++ You'll get an error if you try to publish to a `<FunctionAppName>` that doesn't exist in your subscription.
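For a combined illustration of the options called out above, a single publish call might look like the following; the function app name is a placeholder:

```command
func azure functionapp publish MyFunctionApp --publish-local-settings --nozip
```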
-### Deploy custom container
+### Kubernetes cluster
-Azure Functions lets you deploy your function project in a [custom Docker container](functions-deployment-technologies.md#docker-container). For more information, see [Create a function on Linux using a custom image](functions-create-function-linux-custom-image.md). Custom containers must have a Dockerfile. To create an app with a Dockerfile, use the --dockerfile option on `func init`.
+Functions also lets you define your Functions project to run in a Docker container. Use the [`--docker` option][func init] of `func init` to generate a Dockerfile for your specific language. This file is then used when creating a container to deploy.
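For example, a new project with a Dockerfile might be initialized as follows; the project name and worker runtime are placeholder choices:

```command
func init MyDockerFunctionProj --worker-runtime python --docker
```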
+Core Tools can be used to deploy your project as a custom container image to a Kubernetes cluster. The command you use depends on the type of scaler used in the cluster.
+
+The following command uses the Dockerfile to generate a container and deploy it to a Kubernetes cluster.
+
+# [KEDA](#tab/keda)
+
+```command
+func kubernetes deploy --name <DEPLOYMENT_NAME> --registry <REGISTRY_USERNAME>
```
-func deploy
+
+To learn more, see [Deploying a function app to Kubernetes](functions-kubernetes-keda.md#deploying-a-function-app-to-kubernetes).
+
+# [Default/KNative](#tab/default)
+
+```command
+func deploy --name <FUNCTION_APP> --platform kubernetes --registry <REGISTRY_USERNAME>
```
-The following custom container deployment options are available:
+In the example above, replace `<FUNCTION_APP>` with the name of the function app in Azure and `<REGISTRY_USERNAME>` with your registry account name, such as your Docker username. The container is built locally and pushed to your Docker registry account with an image name based on `<FUNCTION_APP>`. You must have the Docker command-line tools installed.
+
+To learn more, see the [`func deploy` command](functions-core-tools-reference.md#func-deploy).
++
-| Option | Description |
-| | -- |
-| **`--registry`** | The name of a Docker Registry the current user signed-in to. |
-| **`--platform`** | Hosting platform for the function app. Valid options are `kubernetes` |
-| **`--name`** | Function app name. |
-| **`--max`** | Optionally, sets the maximum number of function app instances to deploy to. |
-| **`--min`** | Optionally, sets the minimum number of function app instances to deploy to. |
-| **`--config`** | Sets an optional deployment configuration file. |
+To learn how to publish a custom container to Azure without Kubernetes, see [Create a function on Linux using a custom container](functions-create-function-linux-custom-image.md).
## Monitoring functions
To file a bug or feature request, [open a GitHub issue](https://github.com/azure
[`FUNCTIONS_WORKER_RUNTIME`]: functions-app-settings.md#functions_worker_runtime [`AzureWebJobsStorage`]: functions-app-settings.md#azurewebjobsstorage [extension bundles]: functions-bindings-register.md#extension-bundles
+[func azure functionapp publish]: functions-core-tools-reference.md?tabs=v2#func-azure-functionapp-publish
+[func init]: functions-core-tools-reference.md?tabs=v2#func-init
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-versions.md
In version 2.x, the following changes were made:
* To improve monitoring, the WebJobs dashboard in the portal, which used the [`AzureWebJobsDashboard`](functions-app-settings.md#azurewebjobsdashboard) setting is replaced with Azure Application Insights, which uses the [`APPINSIGHTS_INSTRUMENTATIONKEY`](functions-app-settings.md#appinsights_instrumentationkey) setting. For more information, see [Monitor Azure Functions](functions-monitoring.md).
-* All functions in a function app must share the same language. When you create a function app, you must choose a runtime stack for the app. The runtime stack is specified by the [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime) value in application settings. This requirement was added to improve footprint and startup time. When developing locally, you must also include this setting in the [local.settings.json file](functions-run-local.md#local-settings-file).
+* All functions in a function app must share the same language. When you create a function app, you must choose a runtime stack for the app. The runtime stack is specified by the [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime) value in application settings. This requirement was added to improve footprint and startup time. When developing locally, you must also include this setting in the [local.settings.json file](functions-develop-local.md#local-settings-file).
* The default timeout for functions in an App Service plan is changed to 30 minutes. You can manually change the timeout back to unlimited by using the [functionTimeout](functions-host-json.md#functiontimeout) setting in host.json.
azure-functions Machine Learning Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/machine-learning-pytorch.md
In Azure Functions, a function project is a container for one or more individual
func init --worker-runtime python ```
- After initialization, the *start* folder contains various files for the project, including configurations files named [local.settings.json](functions-run-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
+ After initialization, the *start* folder contains various files for the project, including configurations files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
> [!TIP] > Because a function project is tied to a specific runtime, all the functions in the project must be written with the same language.
azure-functions Manage Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/manage-connections.md
module.exports = async function (context) {
Your function code can use the .NET Framework Data Provider for SQL Server ([SqlClient](/dotnet/api/system.data.sqlclient)) to make connections to a SQL relational database. This is also the underlying provider for data frameworks that rely on ADO.NET, such as [Entity Framework](/ef/ef6/). Unlike [HttpClient](/dotnet/api/system.net.http.httpclient) and [DocumentClient](/dotnet/api/microsoft.azure.documents.client.documentclient) connections, ADO.NET implements connection pooling by default. But because you can still run out of connections, you should optimize connections to the database. For more information, see [SQL Server Connection Pooling (ADO.NET)](/dotnet/framework/data/adonet/sql-server-connection-pooling). > [!TIP]
-> Some data frameworks, such as Entity Framework, typically get connection strings from the **ConnectionStrings** section of a configuration file. In this case, you must explicitly add SQL database connection strings to the **Connection strings** collection of your function app settings and in the [local.settings.json file](functions-run-local.md#local-settings-file) in your local project. If you're creating an instance of [SqlConnection](/dotnet/api/system.data.sqlclient.sqlconnection) in your function code, you should store the connection string value in **Application settings** with your other connections.
+> Some data frameworks, such as Entity Framework, typically get connection strings from the **ConnectionStrings** section of a configuration file. In this case, you must explicitly add SQL database connection strings to the **Connection strings** collection of your function app settings and in the [local.settings.json file](functions-develop-local.md#local-settings-file) in your local project. If you're creating an instance of [SqlConnection](/dotnet/api/system.data.sqlclient.sqlconnection) in your function code, you should store the connection string value in **Application settings** with your other connections.
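As a hedged sketch of where such a connection string goes for local development, a local.settings.json file with a **ConnectionStrings** section might look like the following; the connection name, server, and database are placeholders:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  },
  "ConnectionStrings": {
    "MySqlConnection": "Server=tcp:<SERVER>.database.windows.net,1433;Initial Catalog=<DATABASE>;Authentication=Active Directory Default;"
  }
}
```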
## Next steps
azure-functions Security Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/security-concepts.md
By default, keys are stored in a Blob storage container in the account provided
||||| |Different storage account | `AzureWebJobsSecretStorageSas` | `<BLOB_SAS_URL>` | Stores keys in Blob storage of a second storage account, based on the provided SAS URL. Keys are encrypted before being stored using a secret unique to your function app. | |File system | `AzureWebJobsSecretStorageType` | `files` | Keys are persisted on the file system, encrypted before storage using a secret unique to your function app. |
-|Azure Key Vault | `AzureWebJobsSecretStorageType`<br/>`AzureWebJobsSecretStorageKeyVaultName` | `keyvault`<br/>`<VAULT_NAME>` | The vault must have an access policy corresponding to the system-assigned managed identity of the hosting resource. The access policy should grant the identity the following secret permissions: `Get`,`Set`, `List`, and `Delete`. <br/>When running locally, the developer identity is used, and settings must be in the [local.settings.json file](functions-run-local.md#local-settings-file). |
+|Azure Key Vault | `AzureWebJobsSecretStorageType`<br/>`AzureWebJobsSecretStorageKeyVaultName` | `keyvault`<br/>`<VAULT_NAME>` | The vault must have an access policy corresponding to the system-assigned managed identity of the hosting resource. The access policy should grant the identity the following secret permissions: `Get`,`Set`, `List`, and `Delete`. <br/>When running locally, the developer identity is used, and settings must be in the [local.settings.json file](functions-develop-local.md#local-settings-file). |
|Kubernetes Secrets |`AzureWebJobsSecretStorageType`<br/>`AzureWebJobsKubernetesSecretName` (optional) | `kubernetes`<br/>`<SECRETS_RESOURCE>` | Supported only when running the Functions runtime in Kubernetes. When `AzureWebJobsKubernetesSecretName` isn't set, the repository is considered read-only. In this case, the values must be generated before deployment. The Azure Functions Core Tools generates the values automatically when deploying to Kubernetes.| ### Authentication/authorization
For example, every function app requires an associated storage account, which is
App settings and connection strings are stored encrypted in Azure. They're decrypted only before being injected into your app's process memory when the app starts. The encryption keys are rotated regularly. If you prefer to instead manage the secure storage of your secrets, the app setting should instead be references to Azure Key Vault.
-You can also encrypt settings by default in the local.settings.json file when developing functions on your local computer. To learn more, see the `IsEncrypted` property in the [local settings file](functions-run-local.md#local-settings-file).
+You can also encrypt settings by default in the local.settings.json file when developing functions on your local computer. To learn more, see the `IsEncrypted` property in the [local settings file](functions-develop-local.md#local-settings-file).
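As a rough illustration of where this property lives, a minimal local.settings.json file might look like the following; the values are placeholders:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<storage-connection-string>",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}
```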
#### Key Vault references
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-web-apps.md
Title: Monitor Azure app services performance | Microsoft Docs description: Application performance monitoring for Azure app services. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 08/04/2021 Last updated : 08/05/2021
There are two ways to enable application monitoring for Azure App Services hoste
* This approach is much more customizable, but it requires the following: the SDK [for .NET Core](./asp-net-core.md), [.NET](./asp-net.md), [Node.js](./nodejs.md), and [Python](./opencensus-python.md), or a standalone agent for [Java](./java-in-process-agent.md). This method also means you have to manage the updates to the latest version of the packages yourself.
- * If you need to make custom API calls to track events/dependencies not captured by default with agent-based monitoring, you would need to use this method. Check out the [API for custom events and metrics article](./api-custom-events-metrics.md) to learn more. This is also currently the only supported option for Linux based workloads.
+ * If you need to make custom API calls to track events/dependencies not captured by default with agent-based monitoring, you would need to use this method. Check out the [API for custom events and metrics article](./api-custom-events-metrics.md) to learn more.
> [!NOTE] > If both agent-based monitoring and manual SDK-based instrumentation are detected, in .NET only the manual instrumentation settings will be honored, while in Java only the agent-based instrumentation will emit telemetry. This is to prevent duplicate data from being sent. To learn more about this, check out the [troubleshooting section](#troubleshooting) below.
There are two ways to enable application monitoring for Azure App Services hoste
# [ASP.NET](#tab/net) > [!NOTE]
-> The combination of APPINSIGHTS_JAVASCRIPT_ENABLED and urlCompression is not supported. For more info see the explanation in the [troubleshooting section](#troubleshooting).
-
+> The combination of APPINSIGHTS_JAVASCRIPT_ENABLED and urlCompression is not supported. For more info see the explanation in the [troubleshooting section](#appinsights_javascript_enabled-and-urlcompression-is-not-supported).
1. **Select Application Insights** in the Azure control panel for your app service.
There are two ways to enable application monitoring for Azure App Services hoste
# [ASP.NET Core](#tab/netcore)
+### Windows
+> [!IMPORTANT]
+> The following versions of ASP.NET Core are supported for auto-instrumentation on Windows: ASP.NET Core 3.1 and 5.0. Versions 2.0, 2.1, 2.2, and 3.0 have been retired and are no longer supported. Please upgrade to a [supported version](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) of .NET Core for auto-instrumentation to work.
++
+Targeting the full framework from ASP.NET Core is **not supported** in Windows. Use [manual instrumentation](./asp-net-core.md) via code instead.
+
+On Windows, only framework-dependent deployment is supported.
+
+See the [enable monitoring section](#enable-monitoring) below to begin setting up Application Insights with your App Service resource.
+
+### Linux
+ > [!IMPORTANT]
-> The following versions of ASP.NET Core are supported: ASP.NET Core 2.1, 3.1, and 5.0. Versions 2.0, 2.2, and 3.0 have been retired and are no longer supported. Please upgrade to a [supported version](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) of .NET Core for auto-instrumentation to work.
+> The following versions of ASP.NET Core are supported for auto-instrumentation on Linux: ASP.NET Core 3.1, 5.0, and 6.0 (preview). Versions 2.0, 2.1, 2.2, and 3.0 have been retired and are no longer supported. Please upgrade to a [supported version](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) of .NET Core for auto-instrumentation to work.
-Targeting the full framework from ASP.NET Core, self-contained deployment, and Linux based applications are currently **not supported** with agent/extension based monitoring. ([Manual instrumentation](./asp-net-core.md) via code will work in all of the previous scenarios.)
+> [!NOTE]
+> Enabling Linux auto-instrumentation from the App Service portal is in public preview. These preview versions are provided without a service level agreement. Certain features might not be supported or might have constrained capabilities.
+
+Targeting the full framework from ASP.NET Core is **not supported** in Linux. Use [manual instrumentation](./asp-net-core.md) via code instead.
+
+ On Linux, both framework-dependent and self-contained deployments are supported.
+
+See the [enable monitoring section](#enable-monitoring) below to begin setting up Application Insights with your App Service resource.
+
+### Enable monitoring
1. **Select Application Insights** in the Azure control panel for your app service.
Targeting the full framework from ASP.NET Core, self-contained deployment, and L
>[!div class="mx-imgBorder"] >![Instrument your web app](./media/azure-web-apps/ai-create-new.png)
-2. After specifying which resource to use, you can choose how you want Application Insights to collect data per platform for your application. ASP.NET Core offers **Recommended collection** or **Disabled** for ASP.NET Core 2.1 and 3.1.
+2. After specifying which resource to use, you can choose how you want Application Insights to collect data per platform for your application. ASP.NET Core offers **Recommended collection** or **Disabled** for ASP.NET Core 3.1.
![Choose options per platform.](./media/azure-web-apps/choose-options-new-net-core.png) +++ # [Node.js](#tab/nodejs) You can monitor your Node.js apps running in Azure App Service without any code change, just with a couple of simple steps. Application insights for Node.js applications is integrated with App Service on Linux - both code-based and custom containers, and with App Service on Windows for code-based apps.
You can monitor your Node.js apps running in Azure App Service without any code
> [!div class="mx-imgBorder"] > ![Choose options per platform.](./media/azure-web-apps/app-service-node.png) ++ # [Java](#tab/java) You can turn on monitoring for your Java apps running in Azure App Service just with one click, no code change required. Application Insights for Java is integrated with App Service on Linux - both code-based and custom containers, and with App Service on Windows - code-based apps. It is important to know how your application will be monitored. The integration adds [Application Insights Java 3.x](./java-in-process-agent.md) and you will get all the telemetry that it auto-collects.
In order to enable telemetry collection with Application Insights, only the Appl
|App setting name | Definition | Value |
|--|:--|--:|
-|ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~2` |
+|ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~2` for Windows or `~3` for Linux |
|XDT_MicrosoftApplicationInsights_Mode | In default mode, only essential features are enabled in order to ensure optimal performance. | `default` or `recommended`. |
|InstrumentationEngine_EXTENSION_VERSION | Controls if the binary-rewrite engine `InstrumentationEngine` will be turned on. This setting has performance implications and impacts cold start/startup time. | `~1` |
|XDT_MicrosoftApplicationInsights_BaseExtensions | Controls if SQL & Azure table text will be captured along with the dependency calls. Performance warning: application cold start up time will be affected. This setting requires the `InstrumentationEngine`. | `~1` |
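For example, a hedged Azure CLI sketch that sets the two core settings on an existing app follows; the app name, resource group, and connection string are placeholders, and the extension version should be `~2` on Windows or `~3` on Linux as noted above:

```azurecli-interactive
az webapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
  --settings ApplicationInsightsAgent_EXTENSION_VERSION=~3 \
             APPLICATIONINSIGHTS_CONNECTION_STRING="<CONNECTION_STRING>"
```

The remaining settings in the table can be appended to the same `--settings` list as needed.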
If the upgrade is done from a version prior to 2.5.1, check that the Application
## Troubleshooting
-Below is our step-by-step troubleshooting guide for extension/agent based monitoring for ASP.NET and ASP.NET Core based applications running on Azure App Services.
+### ASP.NET and ASP.NET Core
+Below is our step-by-step troubleshooting guide for extension/agent based monitoring for ASP.NET and ASP.NET Core based applications running on Azure App Services.
-1. Check that the application is monitored via `ApplicationInsightsAgent`.
- * Check that `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~2".
-2. Ensure that the application meets the requirements to be monitored.
- * Browse to `https://yoursitename.scm.azurewebsites.net/ApplicationInsights`
+#### Windows troubleshooting
+1. Check that `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~2".
+2. Browse to `https://yoursitename.scm.azurewebsites.net/ApplicationInsights`.
![Screenshot of https://yoursitename.scm.azurewebsites/applicationinsights results page](./media/azure-web-apps/app-insights-sdk-status.png)
+
+ - Confirm that the `Application Insights Extension Status` is `Pre-Installed Site Extension, version 2.8.x.xxxx, is running.`
+
+ If it is not running, follow the [enable Application Insights monitoring instructions](#enable-application-insights).
- * Confirm that the `Application Insights Extension Status` is `Pre-Installed Site Extension, version 2.8.12.1527, is running.`
- * If it is not running, follow the [enable Application Insights monitoring instructions](#enable-application-insights)
+ - Confirm that the status source exists and looks like: `Status source D:\home\LogFiles\ApplicationInsights\status\status_RD0003FF0317B6_4248_1.json`
- * Confirm that the status source exists and looks like: `Status source D:\home\LogFiles\ApplicationInsights\status\status_RD0003FF0317B6_4248_1.json`
- * If a similar value is not present, it means the application is not currently running or is not supported. To ensure that the application is running, try manually visiting the application url/application endpoints, which will allow the runtime information to become available.
+ If a similar value is not present, it means the application is not currently running or is not supported. To ensure that the application is running, try manually visiting the application url/application endpoints, which will allow the runtime information to become available.
- * Confirm that `IKeyExists` is `true`
- * If it is `false`, add `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING` with your ikey guid to your application settings.
+ - Confirm that `IKeyExists` is `true`
+ If it is `false`, add `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING` with your ikey guid to your application settings.
- * Confirm that there are no entries for `AppAlreadyInstrumented`, `AppContainsDiagnosticSourceAssembly`, and `AppContainsAspNetTelemetryCorrelationAssembly`.
- * If any of these entries exist, remove the following packages from your application: `Microsoft.ApplicationInsights`, `System.Diagnostics.DiagnosticSource`, and `Microsoft.AspNet.TelemetryCorrelation`.
- * For ASP.NET Core apps only: in case your application refers to any Application Insights packages, for example if you have previously instrumented (or attempted to instrument) your app with the [ASP.NET Core SDK](./asp-net-core.md), enabling the App Service integration may not take effect and the data may not appear in Application Insights. To fix the issue, in portal turn on "Interop with Application Insights SDK" and you will start seeing the data in Application Insights
+ - **For ASP.NET apps only** confirm that there are no entries for `AppAlreadyInstrumented`, `AppContainsDiagnosticSourceAssembly`, and `AppContainsAspNetTelemetryCorrelationAssembly`.
+
+ If any of these entries exist, remove the following packages from your application: `Microsoft.ApplicationInsights`, `System.Diagnostics.DiagnosticSource`, and `Microsoft.AspNet.TelemetryCorrelation`.
+
+ - **For ASP.NET Core apps only**: if your application refers to any Application Insights packages, for example if you have previously instrumented (or attempted to instrument) your app with the [ASP.NET Core SDK](./asp-net-core.md), enabling the App Service integration may not take effect and the data may not appear in Application Insights. To fix the issue, turn on "Interop with Application Insights SDK" in the portal, and you will start seeing the data in Application Insights.
> [!IMPORTANT] > This functionality is in preview
Below is our step-by-step troubleshooting guide for extension/agent based monito
> [!IMPORTANT] > If the application used the Application Insights SDK to send any telemetry, such telemetry will be disabled; in other words, any custom telemetry (for example, Track*() methods) and any custom settings (such as sampling) will be disabled.
+#### Linux troubleshooting
+
+1. Check that `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~3".
+2. Navigate to */home\LogFiles\ApplicationInsights\status* and open *status_557de146e7fa_27_1.json*.
+
+ Confirm that `AppAlreadyInstrumented` is set to false, `AiHostingStartupLoaded` to true and `IKeyExists` to true.
+
+ Below is an example of the JSON file:
+
+    ```json
+    {
+      "AppType": ".NETCoreApp,Version=v6.0",
+      "MachineName": "557de146e7fa",
+      "PID": "27",
+      "AppDomainId": "1",
+      "AppDomainName": "dotnet6demo",
+      "InstrumentationEngineLoaded": false,
+      "InstrumentationEngineExtensionLoaded": false,
+      "HostingStartupBootstrapperLoaded": true,
+      "AppAlreadyInstrumented": false,
+      "AppDiagnosticSourceAssembly": "System.Diagnostics.DiagnosticSource, Version=6.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51",
+      "AiHostingStartupLoaded": true,
+      "IKeyExists": true,
+      "IKey": "00000000-0000-0000-0000-000000000000",
+      "ConnectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://westus-0.in.applicationinsights.azure.com/"
+    }
+    ```
+
+    If `AppAlreadyInstrumented` is true, it indicates that the extension detected that some aspect of the SDK is already present in the application, and it will back off.
+
+##### No Data for Linux
+
+1. List and identify the process that is hosting an app. Navigate to your terminal and on the command line type `ps ax`.
+
+ The output should be similar to:
+
+    ```bash
+    PID TTY      STAT   TIME COMMAND
+      1 ?        SNs    0:00 /bin/bash /opt/startup/startup.sh
+     19 ?        SNs    0:00 /usr/sbin/sshd
+     27 ?        SNLl   5:52 dotnet dotnet6demo.dll
+     50 ?        SNs    0:00 sshd: root@pts/0
+     53 pts/0    SNs+   0:00 -bash
+     55 ?        SNs    0:00 sshd: root@pts/1
+     57 pts/1    SNs+   0:00 -bash
+    ```
++
+1. Then list the environment variables from the app process. On the command line, type `cat /proc/27/environ | tr '\0' '\n'`.
+
+ The output should be similar to:
+
+    ```bash
+    ASPNETCORE_HOSTINGSTARTUPASSEMBLIES=Microsoft.ApplicationInsights.StartupBootstrapper
+    DOTNET_STARTUP_HOOKS=/DotNetCoreAgent/2.8.39/StartupHook/Microsoft.ApplicationInsights.StartupHook.dll
+    APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://westus-0.in.applicationinsights.azure.com/
+    ```
+
+
+1. Validate that `ASPNETCORE_HOSTINGSTARTUPASSEMBLIES`, `DOTNET_STARTUP_HOOKS` and `APPLICATIONINSIGHTS_CONNECTION_STRING` are set.
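    As a quick check, you can filter the same output for just these variables; this sketch assumes the app process PID of 27 from the earlier example:

    ```bash
    cat /proc/27/environ | tr '\0' '\n' | grep -E 'ASPNETCORE_HOSTINGSTARTUPASSEMBLIES|DOTNET_STARTUP_HOOKS|APPLICATIONINSIGHTS_CONNECTION_STRING'
    ```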
++
+#### Default website deployed with web apps does not support automatic client-side monitoring
+
+When you create a web app with the `ASP.NET` or `ASP.NET Core` runtimes in Azure App Services, it deploys a single static HTML page as a starter website. The static webpage also loads an ASP.NET managed web part in IIS. This allows for testing codeless server-side monitoring, but does not support automatic client-side monitoring.
+
+If you wish to test out codeless server and client-side monitoring for ASP.NET or ASP.NET Core in an Azure App Services web app, we recommend following the official guides for [creating an ASP.NET Core web app](../../app-service/quickstart-dotnetcore.md) and [creating an ASP.NET Framework web app](../../app-service/quickstart-dotnetcore.md?tabs=netframework48), and then using the instructions in the current article to enable monitoring.
+ ### PHP and WordPress are not supported
If you use APPINSIGHTS_JAVASCRIPT_ENABLED=true in cases where content is encoded
- 500 URL rewrite error - 500.53 URL rewrite module error with message Outbound rewrite rules cannot be applied when the content of the HTTP response is encoded ('gzip').
-This is due to the APPINSIGHTS_JAVASCRIPT_ENABLED application setting being set to true and content-encoding being present at the same time. This scenario is not supported yet. The workaround is to remove APPINSIGHTS_JAVASCRIPT_ENABLED from your application settings. Unfortunately this means that if client/browser-side JavaScript instrumentation is still required, manual SDK references are needed for your webpages. Please follow the [instructions](https://github.com/Microsoft/ApplicationInsights-JS#snippet-setup-ignore-if-using-npm-setup) for manual instrumentation with the JavaScript SDK.
+This is due to the APPINSIGHTS_JAVASCRIPT_ENABLED application setting being set to true and content-encoding being present at the same time. This scenario is not supported yet. The workaround is to remove APPINSIGHTS_JAVASCRIPT_ENABLED from your application settings. Unfortunately this means that if client/browser-side JavaScript instrumentation is still required, manual SDK references are needed for your webpages. Follow the [instructions](https://github.com/Microsoft/ApplicationInsights-JS#snippet-setup-ignore-if-using-npm-setup) for manual instrumentation with the JavaScript SDK.
For the latest information on the Application Insights agent/extension, check out the [release notes](https://github.com/MohanGsk/ApplicationInsights-Home/blob/master/app-insights-web-app-extensions-releasenotes.md).
-### Default website deployed with web apps does not support automatic client-side monitoring
-
-When you create a web app with the `ASP.NET` or `ASP.NET Core` runtimes in Azure App Services it deploys a single static HTML page as a starter website. The static webpage also loads a ASP.NET managed web part in IIS. This allows for testing codeless server-side monitoring, but does not support automatic client-side monitoring.
-
-If you wish to test out codeless server and client-side monitoring for ASP.NET or ASP.NET Core in a Azure App Services web app we recommend following the official guides for [creating a ASP.NET Core web app](../../app-service/quickstart-dotnetcore.md) and [creating an ASP.NET Framework web app](../../app-service/quickstart-dotnetcore.md?tabs=netframework48) and then use the instructions in the current article to enable monitoring.
### Connection string and instrumentation key
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
To get you started, here are the recommended settings for the alert querying the
- Target: Select your Log Analytics resource - Criteria: - Signal name: Custom log search
- - Search query: `_LogOperation | where Operation == "Data collection Stopped" | where Detail contains "OverQuota"`
+ - Search query: `_LogOperation | where Operation =~ "Data collection stopped" | where Detail contains "OverQuota"`
- Based on: Number of results - Condition: Greater than - Threshold: 0
azure-netapp-files Azacsnap Tips https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-tips.md
In some cases, customers already have tools to protect SAP HANA and only want to
> In this example, this host is part of a 3 node Scale-Out system and all 3 boot volumes can be seen from this host. This means all 3 boot volumes can be snapshot from this host, and all 3 should be added to the configuration file in the next step. 1. Create a new configuration file as follows. The boot volume details must be in the OtherVolume stanza:
- ```output
+ ```bash
azacsnap -c configure --configuration new --configfile BootVolume.json ```
- ```bash
+ ```output
Building new config file Add comment to config file (blank entry to exit adding comments): Boot only config file. Add comment to config file (blank entry to exit adding comments):
azure-netapp-files Azure Netapp Files Configure Export Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-configure-export-policy.md
Previously updated : 05/07/2021 Last updated : 08/06/2021 # Configure export policy for NFS or dual-protocol volumes
-You can configure export policy to control access to an Azure NetApp Files volume that uses the NFS protocol (NFSv3 and NFSv4.1) or the dual protocol (NFSv3 and SMB).
+You can configure export policy to control access to an Azure NetApp Files volume that uses the NFS protocol (NFSv3 and NFSv4.1) or the dual protocol (NFSv3 and SMB, or NFSv4.1 and SMB).
You can create up to five export policy rules.
You can create up to five export policy rules.
![Export policy](../media/azure-netapp-files/azure-netapp-files-export-policy.png)
+ * **Chown Mode**: Modify the change ownership mode as needed to set the ownership management capabilities of files and directories. Two options are available:
+
+ * `Restricted` (default) - Only the root user can change the ownership of files and directories.
+ * `Unrestricted` - Non-root users can change the ownership for files and directories that they own.
+
+    Registration requirements and considerations apply for setting **`Chown Mode`**. Follow the instructions in [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md).
+
+ ![Screenshot that shows the change ownership mode option.](../media/azure-netapp-files/chown-mode-export-policy.png)
+ ## Next steps * [Mount or unmount a volume](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md)
+* [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md)
* [Manage snapshots](azure-netapp-files-manage-snapshots.md)
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
na ms.devlang: na Previously updated : 08/05/2021 Last updated : 08/06/2021 # Create an SMB volume for Azure NetApp Files
-Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3, or dual protocol (NFSv3 and SMB). A volume's capacity consumption counts against its pool's provisioned capacity.
+Azure NetApp Files supports creating volumes using NFS (NFSv3 or NFSv4.1), SMB3, or dual protocol (NFSv3 and SMB, or NFSv4.1 and SMB). A volume's capacity consumption counts against its pool's provisioned capacity.
This article shows you how to create an SMB3 volume. For NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volumes.md). For dual-protocol volumes, see [Create a dual-protocol volume](create-volumes-dual-protocol.md).
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
na ms.devlang: na Previously updated : 07/12/2021 Last updated : 08/06/2021 # Create an NFS volume for Azure NetApp Files
-Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3, or dual protocol (NFSv3 and SMB). A volume's capacity consumption counts against its pool's provisioned capacity.
+Azure NetApp Files supports creating volumes using NFS (NFSv3 or NFSv4.1), SMB3, or dual protocol (NFSv3 and SMB, or NFSv4.1 and SMB). A volume's capacity consumption counts against its pool's provisioned capacity.
This article shows you how to create an NFS volume. For SMB volumes, see [Create an SMB volume](azure-netapp-files-create-volumes-smb.md). For dual-protocol volumes, see [Create a dual-protocol volume](create-volumes-dual-protocol.md).
This article shows you how to create an NFS volume. For SMB volumes, see [Create
- It can contain only letters, numbers, or dashes (`-`). - The length must not exceed 80 characters.
- * Select the NFS version (**NFSv3** or **NFSv4.1**) for the volume.
+ * Select the **Version** (**NFSv3** or **NFSv4.1**) for the volume.
* If you are using NFSv4.1, indicate whether you want to enable **Kerberos** encryption for the volume.
This article shows you how to create an NFS volume. For SMB volumes, see [Create
* If you want to enable Active Directory LDAP users and extended groups (up to 1024 groups) to access the volume, select the **LDAP** option. Follow instructions in [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) to complete the required configurations.
+ * Customize **Unix Permissions** as needed to specify change permissions for the mount path. The setting does not apply to the files under the mount path. The default setting is `0770`. This default setting grants read, write, and execute permissions to the owner and the group, but no permissions are granted to other users.
+ Registration requirements and considerations apply for setting **Unix Permissions**. Follow the instructions in [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md).
+ * Optionally, [configure export policy for the NFS volume](azure-netapp-files-configure-export-policy.md). ![Specify NFS protocol](../media/azure-netapp-files/azure-netapp-files-protocol-nfs.png)
This article shows you how to create an NFS volume. For SMB volumes, see [Create
* [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) * [Mount or unmount a volume for Windows or Linux virtual machines](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md) * [Configure export policy for an NFS volume](azure-netapp-files-configure-export-policy.md)
+* [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md).
* [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) * [Learn about virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md)
azure-netapp-files Configure Unix Permissions Change Ownership Mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/configure-unix-permissions-change-ownership-mode.md
+
+ Title: Configure Unix permissions and change ownership mode for Azure NetApp Files NFS and dual-protocol volumes | Microsoft Docs
+description: Describes how to set the Unix permissions and the change ownership mode options for Azure NetApp Files NFS and dual-protocol volumes.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ms.devlang: na
+ Last updated : 08/06/2021++
+# Configure Unix permissions and change ownership mode for NFS and dual-protocol volumes
+
+For Azure NetApp Files NFS volumes or dual-protocol volumes with the `Unix` security style, you can set the **Unix permissions** and the **change ownership mode** (**`Chown Mode`**) options. You can specify these settings during volume creation or after volume creation.
+
+## Unix permissions
+
+The Azure NetApp Files **Unix Permissions** functionality enables you to specify change permissions for the mount path. The setting does not apply to the files under the mount path.
+
+The Unix permissions setting is set to `0770` by default. This default setting grants read, write, and execute permissions to the owner and the group, but no permissions are granted to other users.
+
+ You can specify a custom Unix permissions value (for example, `0755`) to give the desired permission to the owner, group, or other users.
+
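As an illustration, the octal values map to the familiar symbolic notation, and you can check the effective mode on a mounted volume's root directory; the mount path below is a hypothetical example:

```bash
# 0770 (default)  ->  rwxrwx---   owner and group: full access, others: none
# 0755            ->  rwxr-xr-x   owner: full access, group and others: read and execute
stat -c '%a %n' /mnt/anf-volume    # prints the octal mode of the mount path, for example "770 /mnt/anf-volume"
```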
+## Change ownership mode
+
+The change ownership mode (**`Chown Mode`**) functionality enables you to set the ownership management capabilities of files and directories. You can specify or modify the setting under a volume's export policy. Two options for **`Chown Mode`** are available:
+
+* `Restricted` (default) - Only the root user can change the ownership of files and directories.
+* `Unrestricted` - Non-root users can change the ownership for files and directories that they own.
+
+## Considerations
+
+* The Unix permissions you specify apply only for the volume mount point (root directory).
+* You cannot modify the Unix permissions on source or destination volumes that are in a cross-region replication configuration.
+
+## Steps
+
+1. The Unix permissions and change ownership mode features are currently in preview. Before using these features for the first time, you need to register the features:
+
+ 1. Register the **Unix permissions** feature:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFUnixPermissions
+ ```
+
+ 2. Register the **change ownership mode** feature:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFChownMode
+ ```
+
+ 3. Check the status of the feature registration:
+
+ > [!NOTE]
+ > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is `Registered` before continuing.
+
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFUnixPermissions
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFChownMode
+ ```
+
+ You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
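    A hedged sketch of the equivalent Azure CLI commands for these two features:

    ```azurecli-interactive
    az feature register --namespace Microsoft.NetApp --name ANFUnixPermissions
    az feature register --namespace Microsoft.NetApp --name ANFChownMode
    az feature show --namespace Microsoft.NetApp --name ANFUnixPermissions
    az feature show --namespace Microsoft.NetApp --name ANFChownMode
    ```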
+
+2. You can specify the **Unix permissions** and change ownership mode (**`Chown Mode`**) settings under the **Protocol** tab when you [create an NFS volume](azure-netapp-files-create-volumes.md) or [create a dual-protocol volume](create-volumes-dual-protocol.md).
+
+ The following example shows the Create a Volume screen for an NFS volume.
+
+    ![Screenshot that shows the Create a Volume screen for NFS.](../media/azure-netapp-files/unix-permissions-create-nfs-volume.png)
+
+3. For existing NFS or dual-protocol volumes, you can set or modify **Unix permissions** and **change ownership mode** as follows:
+
+ 1. To modify Unix permissions, right-click the **volume**, and select **Edit**. In the Edit window that appears, specify a value for **Unix Permissions**.
+    ![Screenshot that shows the Edit screen for Unix permissions.](../media/azure-netapp-files/unix-permissions-edit.png)
+
+ 2. To modify the change ownership mode, click the **volume**, click **Export policy**, then modify the **`Chown Mode`** setting.
+    ![Screenshot that shows the Export Policy screen.](../media/azure-netapp-files/chown-mode-edit.png)
+
+## Next steps
+
+* [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md)
+* [Create a dual-protocol volume for Azure NetApp Files](create-volumes-dual-protocol.md)
+* [Configure export policy](azure-netapp-files-configure-export-policy.md)
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-volumes-dual-protocol.md
Title: Create a dual-protocol (NFSv3 and SMB) volume for Azure NetApp Files | Microsoft Docs
-description: Describes how to create a volume that uses the dual protocol of NFSv3 and SMB with support for LDAP user mapping.
+ Title: Create a dual-protocol volume for Azure NetApp Files | Microsoft Docs
+description: Describes how to create a volume that uses the dual protocol (NFSv3 and SMB, or NFSv4.1 and SMB) with support for LDAP user mapping.
documentationcenter: ''
na ms.devlang: na Previously updated : 07/21/2021 Last updated : 08/06/2021
-# Create a dual-protocol (NFSv3 and SMB) volume for Azure NetApp Files
+# Create a dual-protocol volume for Azure NetApp Files
-Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3, or dual protocol. This article shows you how to create a volume that uses the dual protocol of NFSv3 and SMB with support for LDAP user mapping.
+Azure NetApp Files supports creating volumes using NFS (NFSv3 or NFSv4.1), SMB3, or dual protocol (NFSv3 and SMB, or NFSv4.1 and SMB). This article shows you how to create a volume that uses dual protocol with support for LDAP user mapping.
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volumes.md). To create SMB volumes, see [Create an SMB volume](azure-netapp-files-create-volumes-smb.md).
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
* Ensure that the NFS client is up to date and running the latest updates for the operating system. * Dual-protocol volumes support both Active Directory Domain Services (ADDS) and Azure Active Directory Domain Services (AADDS). * Dual-protocol volumes do not support the use of LDAP over TLS with AADDS. See [LDAP over TLS considerations](configure-ldap-over-tls.md#considerations).
-* The NFS version used by a dual-protocol volume is NFSv3. As such, the following considerations apply:
+* The NFS version used by a dual-protocol volume can be NFSv3 or NFSv4.1. The following considerations apply:
* Dual protocol does not support the Windows ACLS extended attributes `set/get` from NFS clients. * NFS clients cannot change permissions for the NTFS security style, and Windows clients cannot change permissions for UNIX-style dual-protocol volumes.
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
| Security style | Clients that can modify permissions | Permissions that clients can use | Resulting effective security style | Clients that can access files | |- |- |- |- |- |
- | `Unix` | NFS | NFSv3 mode bits | UNIX | NFS and Windows |
+ | `Unix` | NFS | NFSv3 or NFSv4.1 mode bits | UNIX | NFS and Windows |
| `Ntfs` | Windows | NTFS ACLs | NTFS |NFS and Windows| * The direction in which the name mapping occurs (Windows to UNIX, or UNIX to Windows) depends on which protocol is used and which security style is applied to a volume. A Windows client always requires a Windows-to-UNIX name mapping. Whether a user is applied to review permissions depends on the security style. Conversely, an NFS client only needs to use a UNIX-to-Windows name mapping if the NTFS security style is in use.
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
* If you have large topologies, and you use the `Unix` security style with a dual-protocol volume or LDAP with extended groups, Azure NetApp Files might not be able to access all servers in your topologies. If this situation occurs, contact your account team for assistance. <!-- NFSAAS-15123 --> * You don't need a server root CA certificate for creating a dual-protocol volume. It is required only if LDAP over TLS is enabled. - ## Create a dual-protocol volume 1. Click the **Volumes** blade from the Capacity Pools blade. Click **+ Add volume** to create a volume.
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
![Show advanced selection](../media/azure-netapp-files/volume-create-advanced-selection.png)
-3. Click **Protocol**, and then complete the following actions:
- * Select **dual-protocol (NFSv3 and SMB)** as the protocol type for the volume.
+3. Click the **Protocol** tab, and then complete the following actions:
+ * Select **Dual-protocol** as the protocol type for the volume.
+
+ * Specify the **Active Directory** connection to use.
* Specify a unique **Volume Path**. This path is used when you create mount targets. The requirements for the path are as follows:
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
- It can contain only letters, numbers, or dashes (`-`). - The length must not exceed 80 characters.
+ * Specify the **versions** to use for dual protocol: **NFSv4.1 and SMB**, or **NFSv3 and SMB**.
+
+ The feature to use **NFSv4.1 and SMB** dual protocol is currently in preview. If you are using this feature for the first time, you need to register the feature:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFDualProtocolNFSv4AndSMB
+ ```
+
+ Check the status of the feature registration:
+
+ > [!NOTE]
+ > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing.
+
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFDualProtocolNFSv4AndSMB
+ ```
+ You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
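    A hedged sketch of the equivalent Azure CLI commands for this feature:

    ```azurecli-interactive
    az feature register --namespace Microsoft.NetApp --name ANFDualProtocolNFSv4AndSMB
    az feature show --namespace Microsoft.NetApp --name ANFDualProtocolNFSv4AndSMB
    ```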
+ * Specify the **Security Style** to use: NTFS (default) or UNIX. * If you want to enable SMB3 protocol encryption for the dual-protocol volume, select **Enable SMB3 Protocol Encryption**.
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
You can also use [Azure CLI commands](/cli/azure/feature?preserve-view=true&view=azure-cli-latest) `az feature register` and `az feature show` to register the feature and display the registration status.
+ * If you selected NFSv4.1 and SMB for the dual-protocol volume versions, indicate whether you want to enable **Kerberos** encryption for the volume.
+
+ Additional configurations are required for Kerberos. Follow the instructions in [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md).
+
+ * Customize **Unix Permissions** as needed to specify change permissions for the mount path. The setting does not apply to the files under the mount path. The default setting is `0770`. This default setting grants read, write, and execute permissions to the owner and the group, but no permissions are granted to other users.
+    Registration requirements and considerations apply for setting **Unix Permissions**. Follow the instructions in [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md).
+ * Optionally, [configure export policy for the volume](azure-netapp-files-configure-export-policy.md). ![Specify dual-protocol](../media/azure-netapp-files/create-volume-protocol-dual.png)
Follow instructions in [Configure an NFS client for Azure NetApp Files](configur
## Next steps
+* [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md)
* [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md)
+* [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md).
* [Configure ADDS LDAP over TLS for Azure NetApp Files](configure-ldap-over-tls.md) * [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) * [Troubleshoot SMB or dual-protocol volumes](troubleshoot-dual-protocol-volumes.md)
azure-netapp-files Troubleshoot Ldap Volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/troubleshoot-ldap-volumes.md
This article describes resolutions to error conditions you might have when confi
* [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) * [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md)
-* [Create a dual-protocol (NFSv3 and SMB) volume for Azure NetApp Files](create-volumes-dual-protocol.md)
+* [Create a dual-protocol volume for Azure NetApp Files](create-volumes-dual-protocol.md)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/whats-new.md
na ms.devlang: na Previously updated : 08/05/2021 Last updated : 08/06/2021
Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
-## June 2021
+## August 2021
+
+* [NFS `Chown Mode` export policy and UNIX export permissions](configure-unix-permissions-change-ownership-mode.md) (Preview)
+
+ You can now set the Unix permissions and the change ownership mode (`Chown Mode`) options on Azure NetApp Files NFS volumes or dual-protocol volumes with the Unix security style. You can specify these settings during volume creation or after volume creation.
+
+ The change ownership mode (`Chown Mode`) functionality enables you to set the ownership management capabilities of files and directories. You can specify or modify the setting under a volume's export policy. Two options for `Chown Mode` are available: *Restricted* (default), where only the root user can change the ownership of files and directories, and *Unrestricted*, where non-root users can change the ownership for files and directories that they own.
+
+ The Azure NetApp Files Unix Permissions functionality enables you to specify change permissions for the mount path.
+
+ These new features provide options to move access control of certain files and directories into the hands of the data user instead of the service operator.
+
+* [Dual-protocol (NFSv4.1 and SMB) volume](create-volumes-dual-protocol.md) (Preview)
+
+ Azure NetApp Files already supports dual-protocol access to NFSv3 and SMB volumes as of [July 2020](#july-2020). You can now create an Azure NetApp Files volume that allows simultaneous dual-protocol (NFSv4.1 and SMB) access with support for LDAP user mapping. This feature enables use cases where you might have a Linux-based workload using NFSv4.1 for its access, and the workload generates and stores data in an Azure NetApp Files volume. At the same time, your staff might need to use Windows-based clients and software to analyze the newly generated data from the same Azure NetApp Files volume. The simultaneous dual-protocol access feature removes the need to copy the workload-generated data to a separate volume with a different protocol for post-analysis, saving storage cost and operational time. This feature is free of charge (normal Azure NetApp Files storage cost still applies) and is generally available. Learn more from the [simultaneous dual-protocol NFSv4.1/SMB access](create-volumes-dual-protocol.md) documentation.
+
+## June 2021
* [Azure NetApp Files storage service add-ons](storage-service-add-ons.md)
azure-percept Audio Button Led Behavior https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/audio-button-led-behavior.md
Use the LED indicators to understand which state your device is in.
|L02|1x white, static on|Power on |
|L02|1x white, 0.5 Hz flashing|Authentication in progress |
|L01 & L02 & L03|3x blue, static on|Waiting for keyword|
-|L01 & L02 & L03|LED array flashing, 20fps |Listening or speaking|
-|L01 & L02 & L03|LED array racing, 20fps|Thinking|
+|L01 & L02 & L03|LED array flashing, 20 fps |Listening or speaking|
+|L01 & L02 & L03|LED array racing, 20 fps|Thinking|
|L01 & L02 & L03|3x red, static on |Mute|
+## Understanding Ear SoM LED indicators
+You can use LED indicators to understand which state your device is in. It takes around 4-5 minutes for the device to power on and the module to fully initialize. As it goes through initialization steps, you'll see:
+
+1. Center white LED on (static): the device is powered on.
+1. Center white LED on (blinking): authentication is in progress.
+1. Center white LED on (static): the device is authenticated but the keyword isn't configured.
+1. All three LEDs change to blue once a demo has been deployed and the device is ready to use.
++
+## Troubleshooting LED issues
+- **If the center LED is solid white**, try [using a template to create a voice assistant](./tutorial-no-code-speech.md).
+- **If the center LED is always blinking**, it indicates an authentication issue. Try these troubleshooting steps:
+ 1. Make sure that your USB-A and micro USB connections are secured
+ 1. Check to see if the [speech module is running](./troubleshoot-audio-accessory-speech-module.md#checking-runtime-status-of-the-speech-module)
+ 1. Restart the device
+ 1. [Collect logs](./troubleshoot-audio-accessory-speech-module.md#collecting-speech-module-logs) and attach them to a support request
+ 1. Check to see if your dev kit is running the latest software and apply an update if available.
## Next steps

For troubleshooting tips for your Azure Percept Audio device, see this [guide](./troubleshoot-audio-accessory-speech-module.md).
azure-percept Delete Voice Assistant Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/delete-voice-assistant-application.md
+
+ Title: Delete your Azure Percept Audio voice assistant application
+description: This article shows you how to delete a previously created voice assistant application.
++++ Last updated : 08/03/2021+++
+# Delete your voice assistant application
+
+These instructions will show you how to delete a voice assistant application from your Azure Percept Audio device.
+
+## Prerequisites
+
+- [A previously created voice assistant application](./tutorial-no-code-speech.md)
+- Your Azure Percept DK is powered on and the Azure Percept Audio accessory is connected via a USB cable.
+
+## Remove all voice assistant resources from the Azure portal
+
+Once you're done working with your voice assistant application, follow these steps to clean up the speech resources you deployed when creating the application.
+
+1. From the [Azure portal](https://portal.azure.com), select **Resource groups** from the left menu panel or type it into the search bar.
+
+ :::image type="content" source="./media/tutorial-no-code-speech/azure-portal.png" alt-text="Screenshot of Azure portal homepage showing left menu panel and Resource Groups.":::
+
+1. Select your resource group.
+
+1. Select all six resources that contain your application prefix and select the **Delete** icon on the top menu panel.
+
+ :::image type="content" source="./media/tutorial-no-code-speech/select-resources.png" alt-text="Screenshot of speech resources selected for deletion.":::
+
+1. To confirm deletion, type **yes** in the confirmation box, verify you've selected the correct resources, and select **Delete**.
+
+ :::image type="content" source="./media/tutorial-no-code-speech/delete-confirmation.png" alt-text="Screenshot of delete confirmation window.":::
+
+> [!WARNING]
+> This will remove any custom keywords created with the speech resources you are deleting, and the voice assistant demo will no longer function.
++
+## Next steps
+Now that you've removed your voice assistant application, try creating other applications on your Azure Percept DK by following these tutorials.
+- [Create a no-code vision solution](./tutorial-nocode-vision.md)
+- [Create a no-code voice assistant application](./tutorial-no-code-speech.md)
++
azure-percept Return To Voice Assistant Application Window https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/return-to-voice-assistant-application-window.md
+
+ Title: Return to your Azure Percept Audio voice assistant application window
+description: This article shows you how to return to a previously created voice assistant application window.
++++ Last updated : 08/03/2021+++
+# Return to your voice assistant application window in Azure Percept Studio
+
+This how-to guide shows you how to return to a previously created voice assistant application.
+
+## Prerequisites
+
+- [Create a voice assistant demo application](./tutorial-no-code-speech.md)
+- Your Azure Percept DK is powered on and the Azure Percept Audio accessory is connected via a USB cable.
+
+## Open your voice assistant application
+1. Go to [Azure Percept Studio](https://portal.azure.com/#blade/AzureEdgeDevices/Main/overview)
+1. Select **Devices** from the left menu pane.
+ :::image type="content" source="media/return-to-voice-assistant-demo-window/select-device.png" alt-text="select device from the left menu pane":::
+1. Select the device to which your voice assistant application was deployed.
+1. Select the **Speech** tab.
+ :::image type="content" source="media/return-to-voice-assistant-demo-window/speech-tab.png" alt-text="select the speech tab":::
+1. Under **Actions**, select **Test your voice assistant**
+ :::image type="content" source="media/return-to-voice-assistant-demo-window/actions-test-va.png" alt-text="select test your voice assistant under the Actions section":::
+
+## Next steps
+Now that your voice assistant application is open, try making some [more configurations](./how-to-manage-voice-assistant.md).
+
azure-percept Troubleshoot Audio Accessory Speech Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/troubleshoot-audio-accessory-speech-module.md
Use the guidelines below to troubleshoot voice assistant application issues.
-## Understanding Ear SoM LED indicators
+## Checking runtime status of the speech module
-You can use LED indicators to understand which state your device is in. It takes around 4-5 minutes for the device to power on and the module to fully initialize. As it goes through initialization steps, you will see:
+Check if the runtime status of **azureearspeechclientmodule** shows as **running**. To locate the runtime status of your device modules, open the [Azure portal](https://portal.azure.com/) and navigate to **All resources** -> **[your IoT hub]** -> **IoT Edge** -> **[your device ID]**. Select the **Modules** tab to see the runtime status of all installed modules.
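If you prefer the command line over the portal, one alternative is to read the IoT Edge agent's reported properties, which include each module's runtime status. This is a sketch that assumes the Azure CLI `azure-iot` extension is installed; the hub and device names are placeholders.

```azurecli
# Read the reported runtime status of the speech module from the IoT Edge agent twin
az iot hub module-twin show \
  --hub-name <your-iot-hub> \
  --device-id <your-device-id> \
  --module-id '$edgeAgent' \
  --query "properties.reported.modules.azureearspeechclientmodule.runtimeStatus"
```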
-1. Center white LED on (static): the device is powered on.
-1. Center white LED on (blinking): authentication is in progress.
-1. Center white LED on (static): the device is authenticated but keyword is not configured.ΓÇï
-1. All three LEDs will change to blue once a demo was deployed and the device is ready to use.
-For more reference, see this article about [Azure Percept Audio button and LED behavior](./audio-button-led-behavior.md).
+If the runtime status of **azureearspeechclientmodule** isn't listed as **running**, select **Set modules** -> **azureearspeechclientmodule**. On the **Module Settings** page, set **Desired Status** to **running** and select **Update**.
-### Troubleshooting LED issues
-- **If the center LED is solid white**, try [using a template to create a voice assistant](./tutorial-no-code-speech.md).-- **If the center LED is always blinking**, it indicates an authentication issue. Try these troubleshooting steps:
- - Make sure that your USB-A and micro USB connections are secured
- - Check to see if the [speech module is running](./troubleshoot-audio-accessory-speech-module.md#checking-runtime-status-of-the-speech-module)
- - Restart the device
- - [Collect logs](./troubleshoot-audio-accessory-speech-module.md#collecting-speech-module-logs) and attach them to a support request
- - Check to see if your dev kit is running the latest software and apply an update if available.
+## Voice assistant application doesn't load
+Try [deploying one of the voice assistant templates](./tutorial-no-code-speech.md). Deploying a template ensures that all the supporting resources needed for voice assistant applications get created.
-## Checking runtime status of the speech module
+## Voice assistant template doesn't get created
+A failure when creating a voice assistant template is usually caused by an issue with one of the supporting resources.
+1. [Delete all previously created voice assistant resources](./delete-voice-assistant-application.md).
+1. Deploy a new [voice assistant template](./tutorial-no-code-speech.md).
-Check if the runtime status of **azureearspeechclientmodule** shows as **running**. To locate the runtime status of your device modules, open the [Azure portal](https://portal.azure.com/) and navigate to **All resources** -> **[your IoT hub]** -> **IoT Edge** -> **[your device ID]**. Click the **Modules** tab to see the runtime status of all installed modules.
+## Voice assistant was created but doesn't respond to commands
+Follow the instructions on the [LED behavior and troubleshooting guide](audio-button-led-behavior.md) to troubleshoot this issue.
+## Voice assistant doesn't respond to custom keywords created in Speech Studio
+This may occur if the speech module is out of date. Follow these steps to update the speech module to the latest version:
-If the runtime status of **azureearspeechclientmodule** is not listed as **running**, click **Set modules** -> **azureearspeechclientmodule**. On the **Module Settings** page, set **Desired Status** to **running** and click **Update**.
+1. Select **Devices** in the left-hand menu panel of the Azure Percept Studio homepage.
+1. Find and select your device.
-## Collecting speech module logs
+ :::image type="content" source="./media/tutorial-no-code-speech/devices.png" alt-text="Screenshot of device list in Azure Percept Studio.":::
+1. In the device window, select the **Speech** tab.
+1. Check the speech module version. If an update is available, you'll see an **Update** button next to the version number.
+1. Select **Update** to deploy the speech module update. The update process generally takes 2-3 minutes to complete.
+## Collecting speech module logs
To run these commands, [SSH into the dev kit](./how-to-ssh-into-percept-dk.md) and enter the commands into the SSH client prompt. Collect speech module logs:
scp [remote username]@[IP address]:[remote file path]/[file name].txt [local hos
- [Azure Percept Audio button and LED behavior](./audio-button-led-behavior.md) - [Create a voice assistant with Azure Percept DK and Azure Percept Audio](./tutorial-no-code-speech.md) - [Azure Percept DK general troubleshooting guide](./troubleshoot-dev-kit.md)
+- [How to return to a previously created voice assistant application](return-to-voice-assistant-application-window.md)
azure-sql Doc Changes Updates Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/doc-changes-updates-release-notes.md
The following features are enabled in the SQL Managed Instance deployment model
|[Procedure sp_send_dbmail may transiently fail when @query parameter is used](#procedure-sp_send_dbmail-may-transiently-fail-when--parameter-is-used)|Jan 2021|Has Workaround||
|[Distributed transactions can be executed after removing Managed Instance from Server Trust Group](#distributed-transactions-can-be-executed-after-removing-managed-instance-from-server-trust-group)|Oct 2020|Has Workaround||
|[Distributed transactions cannot be executed after Managed Instance scaling operation](#distributed-transactions-cannot-be-executed-after-managed-instance-scaling-operation)|Oct 2020|Has Workaround||
+|[Cannot create SQL Managed Instance with the same name as logical server previously deleted](#cannot-create-sql-managed-instance-with-the-same-name-as-logical-server-previously-deleted)|Aug 2020|Has Workaround||
|[BULK INSERT](/sql/t-sql/statements/bulk-insert-transact-sql)/[OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql) in Azure SQL and `BACKUP`/`RESTORE` statement in Managed Instance cannot use Azure AD Managed Identity to authenticate to Azure storage|Sep 2020|Has Workaround||
|[Service Principal cannot access Azure AD and AKV](#service-principal-cannot-access-azure-ad-and-akv)|Aug 2020|Has Workaround||
|[Restoring manual backup without CHECKSUM might fail](#restoring-manual-backup-without-checksum-might-fail)|May 2020|Resolved|June 2020|
END
Managed Instance scaling operations that include changing service tier or number of vCores will reset Server Trust Group settings on the backend and disable running [distributed transactions](./elastic-transactions-overview.md). As a workaround, delete and create new [Server Trust Group](../managed-instance/server-trust-group-overview.md) on Azure portal.
+### Cannot create SQL Managed Instance with the same name as logical server previously deleted
+
+After a [logical server](./logical-servers.md) is deleted, there is a threshold period of 7 days before the name is released from the records. During that period, a SQL Managed Instance with the same name cannot be created. As a workaround, use a different name for the SQL Managed Instance, or create a support ticket to release the logical server name.
+ ### BULK INSERT and BACKUP/RESTORE statements should use SAS Key to access Azure storage Currently, it is not supported to use `DATABASE SCOPED CREDENTIAL` syntax with Managed Identity to authenticate to Azure storage. Microsoft recommends using a [shared access signature](../../storage/common/storage-sas-overview.md) for the [database scoped credential](/sql/t-sql/statements/create-credential-transact-sql#d-creating-a-credential-using-a-sas-token), when accessing Azure storage for bulk insert, `BACKUP` and `RESTORE` statements, or the `OPENROWSET` function. For example:
azure-sql Metrics Diagnostic Telemetry Logging Streaming Export Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/metrics-diagnostic-telemetry-logging-streaming-export-configure.md
azure-sql Serverless Tier Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/serverless-tier-overview.md
ms.devlang:
- Previously updated : 4/16/2021+ Last updated : 7/29/2021 # Azure SQL Database serverless [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
Unlike provisioned compute databases, memory from the SQL cache is reclaimed fro
In both serverless and provisioned compute databases, cache entries may be evicted if all available memory is used.
-Note that when CPU utilization is low, active cache utilization can remain high depending on the usage pattern and prevent memory reclamation. Also, there can be additional delays after user activity stops before memory reclamation occurs due to periodic background processes responding to prior user activity. For example, delete operations and QDS cleanup tasks generate ghost records that are marked for deletion, but are not physically deleted until the ghost cleanup process runs that can involve reading data pages into cache.
+When CPU utilization is low, active cache utilization can remain high depending on the usage pattern and prevent memory reclamation. Also, there can be other delays after user activity stops before memory reclamation occurs due to periodic background processes responding to prior user activity. For example, delete operations and Query Store cleanup tasks generate ghost records that are marked for deletion, but are not physically deleted until the ghost cleanup process runs. Ghost cleanup may involve reading additional data pages into cache.
#### Cache hydration
The SQL cache grows as data is fetched from disk in the same way and with the sa
Auto-pausing is triggered if all of the following conditions are true for the duration of the auto-pause delay: -- Number sessions = 0-- CPU = 0 for user workload running in the user pool
+- Number of sessions = 0
+- CPU = 0 for user workload running in the user resource pool
An option is provided to disable auto-pausing if desired.
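For example, here is a minimal Azure CLI sketch of disabling auto-pausing by setting the auto-pause delay to -1 (resource names are placeholders):

```azurecli
# Disable auto-pausing for a serverless database by setting the auto-pause delay to -1
az sql db update \
  --resource-group <resource-group> \
  --server <server-name> \
  --name <database-name> \
  --auto-pause-delay -1
```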
-The following features do not support auto-pausing, but do support auto-scaling. If any of the following features are used, then auto-pausing must be disabled and the database will remain online regardless of the duration of database inactivity:
+The following features do not support auto-pausing, but do support auto-scaling. If any of the following features are used, then auto-pausing must be disabled and the database will remain online regardless of the duration of database inactivity:
-- Geo-replication (active geo-replication and auto-failover groups).-- Long-term backup retention (LTR).-- The sync database used in SQL data sync. Unlike sync databases, hub and member databases support auto-pausing.-- DNS aliasing-- The job database used in Elastic Jobs (preview).
+- Geo-replication ([active geo-replication](active-geo-replication-overview.md) and [auto-failover groups](auto-failover-group-overview.md)).
+- [Long-term backup retention](long-term-retention-overview.md) (LTR).
+- The sync database used in [SQL Data Sync](sql-data-sync-data-sql-server-sql-database.md). Unlike sync databases, hub and member databases support auto-pausing.
+- [DNS alias](dns-alias-overview.md) created for the logical server containing a serverless database.
+- [Elastic Jobs (preview)](elastic-jobs-overview.md), when the job database is a serverless database. Databases targeted by elastic jobs support auto-pausing, and will be resumed by job connections.
Auto-pausing is temporarily prevented during the deployment of some service updates which require the database be online. In such cases, auto-pausing becomes allowed again once the service update completes.
+#### Auto-pause troubleshooting
+
+If auto-pausing is enabled, but a database does not auto-pause after the delay period, and the features listed above are not used, the application or user sessions may be preventing auto-pausing. To see if there are any application or user sessions currently connected to the database, connect to the database using any client tool, and execute the following query:
+
+```sql
+SELECT session_id,
+ host_name,
+ program_name,
+ client_interface_name,
+ login_name,
+ status,
+ login_time,
+ last_request_start_time,
+ last_request_end_time
+FROM sys.dm_exec_sessions AS s
+INNER JOIN sys.dm_resource_governor_workload_groups AS wg
+ON s.group_id = wg.group_id
+WHERE s.session_id <> @@SPID
+ AND
+ (
+ (
+ wg.name like 'UserPrimaryGroup.DB%'
+ AND
+ TRY_CAST(RIGHT(wg.name, LEN(wg.name) - LEN('UserPrimaryGroup.DB') - 2) AS int) = DB_ID()
+ )
+ OR
+ wg.name = 'DACGroup'
+ );
+```
+
+> [!TIP]
+> After running the query, make sure to disconnect from the database. Otherwise, the open session used by the query will prevent auto-pausing.
+
+If the result set is non-empty, it indicates that there are sessions currently preventing auto-pausing.
+
+If the result set is empty, it is still possible that sessions were open, possibly for a short time, at some point earlier during the auto-pause delay period. To see if such activity has occurred during the delay period, you can use [Azure SQL Auditing](auditing-overview.md) and examine audit data for the relevant period.
+
+The presence of open sessions, with or without concurrent CPU utilization in the user resource pool, is the most common reason for a serverless database to not auto-pause as expected. Note that some [features](#auto-pausing) don't support auto-pausing, but do support auto-scaling.
+ ### Auto-resuming Auto-resuming is triggered if any of the following conditions are true at any time:
Auto-resuming is triggered if any of the following conditions are true at any ti
|Auto-tuning|Application and verification of auto-tuning recommendations such as auto-indexing|
|Database copying|Create database as copy.<br>Export to a BACPAC file.|
|SQL data sync|Synchronization between hub and member databases that run on a configurable schedule or are performed manually|
-|Modifying certain database metadata|Adding new database tags.<br>Changing max vCores, min vCores, or autopause delay.|
+|Modifying certain database metadata|Adding new database tags.<br>Changing max vCores, min vCores, or auto-pause delay.|
|SQL Server Management Studio (SSMS)|Using SSMS versions earlier than 18.1 and opening a new query window for any database in the server will resume any auto-paused database in the same server. This behavior does not occur if using SSMS version 18.1 or later.|

Monitoring, management, or other solutions performing any of the operations listed above will trigger auto-resuming.
If a serverless database is paused, then the first login will resume the databas
### Latency
-The latency to auto-resume and auto-pause a serverless database is generally order of 1 minute to auto-resume and 1-10 minutes to auto-pause.
+The latency to auto-resume and auto-pause a serverless database is generally on the order of 1 minute to auto-resume and 1-10 minutes after the expiration of the delay period to auto-pause.
### Customer managed transparent data encryption (BYOK)
-If using [customer managed transparent data encryption](transparent-data-encryption-byok-overview.md) (BYOK) and the serverless database is auto-paused when key deletion or revocation occurs, then the database remains in the auto-paused state. In this case, after the database is next resumed, the database becomes inaccessible within approximately 10 minutes. Once the database becomes inaccessible, the recovery process is the same as for provisioned compute databases. If the serverless database is online when key deletion or revocation occurs, then the database also becomes inaccessible within approximately 10 minutes in the same way as with provisioned compute databases.
+If using [customer managed transparent data encryption](transparent-data-encryption-byok-overview.md) (BYOK) and the serverless database is auto-paused when key deletion or revocation occurs, then the database remains in the auto-paused state. In this case, after the database is next resumed, the database becomes inaccessible within approximately 10 minutes. Once the database becomes inaccessible, the recovery process is the same as for provisioned compute databases. If the serverless database is online when key deletion or revocation occurs, then the database also becomes inaccessible within approximately 10 minutes in the same way as with provisioned compute databases.
## Onboarding into serverless compute tier
Creating a new database or moving an existing database into a serverless compute
1. Specify the service objective. The service objective prescribes the service tier, hardware generation, and max vCores. For service objective options, see [serverless resource limits](resource-limits-vcore-single-databases.md#general-purposeserverless-computegen5)
-2. Optionally, specify the min vCores and autopause delay to change their default values. The following table shows the available values for these parameters.
+2. Optionally, specify the min vCores and auto-pause delay to change their default values. The following table shows the available values for these parameters.
|Parameter|Value choices|Default value|
|---|---|---|
Creating a new database or moving an existing database into a serverless compute
The following examples create a new database in the serverless compute tier.
-#### Use the Azure portal
+#### Use Azure portal
See [Quickstart: Create a single database in Azure SQL Database using the Azure portal](single-database-create-quickstart.md).
New-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $serverName
  -ComputeModel Serverless -Edition GeneralPurpose -ComputeGeneration Gen5 `
  -MinVcore 0.5 -MaxVcore 2 -AutoPauseDelayInMinutes 720
```
-#### Use the Azure CLI
+#### Use Azure CLI
```azurecli az sql db create -g $resourceGroupName -s $serverName -n $databaseName `
az sql db create -g $resourceGroupName -s $serverName -n $databaseName `
#### Use Transact-SQL (T-SQL)
-When using T-SQL, default values are applied for the min vcores and autopause delay.
+When using T-SQL, default values are applied for the min vCores and auto-pause delay. They can later be changed from the portal or via other management APIs (PowerShell, Azure CLI, REST API).
```sql CREATE DATABASE testdb
The following examples move a database from the provisioned compute tier into th
#### Use PowerShell

```powershell
Set-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $serverName -DatabaseName $databaseName `
  -Edition GeneralPurpose -ComputeModel Serverless -ComputeGeneration Gen5 `
  -MinVcore 1 -MaxVcore 4 -AutoPauseDelayInMinutes 1440
```
-#### Use the Azure CLI
+#### Use Azure CLI
```azurecli
az sql db update -g $resourceGroupName -s $serverName -n $databaseName `
  --edition GeneralPurpose --min-capacity 1 --capacity 4 --family Gen5 --compute-model Serverless --auto-pause-delay 1440
```

#### Use Transact-SQL (T-SQL)
-When using T-SQL, default values are applied for the min vcores and auto-pause delay.
-When using T-SQL, default values are applied for the min vcores and auto-pause delay.
+When using T-SQL, default values are applied for the min vCores and auto-pause delay. They can later be changed from the portal or via other management APIs (PowerShell, Azure CLI, REST API).
```sql ALTER DATABASE testdb
A serverless database can be moved into a provisioned compute tier in the same w
Modifying the maximum or minimum vCores, and autopause delay, is performed by using the [Set-AzSqlDatabase](/powershell/module/az.sql/set-azsqldatabase) command in PowerShell using the `MaxVcore`, `MinVcore`, and `AutoPauseDelayInMinutes` arguments.
-### Use the Azure CLI
+### Use Azure CLI
Modifying the maximum or minimum vCores, and autopause delay, is performed by using the [az sql db update](/cli/azure/sql/db#az_sql_db_update) command in Azure CLI using the `capacity`, `min-capacity`, and `auto-pause-delay` arguments.

## Monitoring

### Resources used and billed
The resources of a serverless database are encapsulated by app package, SQL inst
#### App package
-The app package is the outer most resource management boundary for a database, regardless of whether the database is in a serverless or provisioned compute tier. The app package contains the SQL instance and external services like full-text search that all together scope all user and system resources used by a database in SQL Database. The SQL instance generally dominates the overall resource utilization across the app package.
+The app package is the outermost resource management boundary for a database, regardless of whether the database is in a serverless or provisioned compute tier. The app package contains the SQL instance and external services like Full-text Search that together scope all user and system resources used by a database in SQL Database. The SQL instance generally dominates the overall resource utilization across the app package.
#### User resource pool
-The user resource pool is the inner most resource management boundary for a database, regardless of whether the database is in a serverless or provisioned compute tier. The user resource pool scopes CPU and IO for user workload generated by DDL queries such as CREATE and ALTER and DML queries such as SELECT, INSERT, UPDATE, and DELETE. These queries generally represent the most substantial proportion of utilization within the app package.
+The user resource pool is an inner resource management boundary for a database, regardless of whether the database is in a serverless or provisioned compute tier. The user resource pool scopes CPU and IO for user workload generated by DDL queries such as CREATE and ALTER, DML queries such as INSERT, UPDATE, DELETE, and MERGE, and SELECT queries. These queries generally represent the most substantial proportion of utilization within the app package.
### Metrics
-Metrics for monitoring the resource usage of the app package and user pool of a serverless database are listed in the following table:
+Metrics for monitoring the resource usage of the app package and user resource pool of a serverless database are listed in the following table:
|Entity|Metric|Description|Units|
|---|---|---|---|
|App package|app_cpu_percent|Percentage of vCores used by the app relative to max vCores allowed for the app.|Percentage|
|App package|app_cpu_billed|The amount of compute billed for the app during the reporting period. The amount paid during this period is the product of this metric and the vCore unit price. <br><br>Values of this metric are determined by aggregating over time the maximum of CPU used and memory used each second. If the amount used is less than the minimum amount provisioned as set by the min vCores and min memory, then the minimum amount provisioned is billed. In order to compare CPU with memory for billing purposes, memory is normalized into units of vCores by rescaling the amount of memory in GB by 3 GB per vCore.|vCore seconds|
|App package|app_memory_percent|Percentage of memory used by the app relative to max memory allowed for the app.|Percentage|
-|User pool|cpu_percent|Percentage of vCores used by user workload relative to max vCores allowed for user workload.|Percentage|
-|User pool|data_IO_percent|Percentage of data IOPS used by user workload relative to max data IOPS allowed for user workload.|Percentage|
-|User pool|log_IO_percent|Percentage of log MB/s used by user workload relative to max log MB/s allowed for user workload.|Percentage|
-|User pool|workers_percent|Percentage of workers used by user workload relative to max workers allowed for user workload.|Percentage|
-|User pool|sessions_percent|Percentage of sessions used by user workload relative to max sessions allowed for user workload.|Percentage|
+|User resource pool|cpu_percent|Percentage of vCores used by user workload relative to max vCores allowed for user workload.|Percentage|
+|User resource pool|data_IO_percent|Percentage of data IOPS used by user workload relative to max data IOPS allowed for user workload.|Percentage|
+|User resource pool|log_IO_percent|Percentage of log MB/s used by user workload relative to max log MB/s allowed for user workload.|Percentage|
+|User resource pool|workers_percent|Percentage of workers used by user workload relative to max workers allowed for user workload.|Percentage|
+|User resource pool|sessions_percent|Percentage of sessions used by user workload relative to max sessions allowed for user workload.|Percentage|
### Pause and resume status
Get-AzSqlDatabase -ResourceGroupName $resourcegroupname -ServerName $servername
| Select -ExpandProperty "Status"
```
-#### Use the Azure CLI
+#### Use Azure CLI
```azurecli
az sql db show --name $databasename --resource-group $resourcegroupname --server $servername --query 'status' -o json
```

## Resource limits

For resource limits, see [serverless compute tier](resource-limits-vcore-single-databases.md#general-purposeserverless-computegen5).
The [Azure SQL Database pricing calculator](https://azure.microsoft.com/pricing/
### Example scenario
-Consider a serverless database configured with 1 min vCore and 4 max vCores. This corresponds to around 3 GB min memory and 12-GB max memory. Suppose the auto-pause delay is set to 6 hours and the database workload is active during the first 2 hours of a 24-hour period and otherwise inactive.
+Consider a serverless database configured with 1 min vCore and 4 max vCores. This corresponds to around 3 GB min memory and 12 GB max memory. Suppose the auto-pause delay is set to 6 hours and the database workload is active during the first 2 hours of a 24-hour period and otherwise inactive.
In this case, the database is billed for compute and storage during the first 8 hours. Even though the database is inactive starting after the second hour, it is still billed for compute in the subsequent 6 hours based on the minimum compute provisioned while the database is online. Only storage is billed during the remainder of the 24-hour period while the database is paused.
More precisely, the compute bill in this example is calculated as follows:
|8:00-24:00|0|0|No compute billed while paused|0 vCore seconds|
|Total vCore seconds billed over 24 hours||||50400 vCore seconds|
-Suppose the compute unit price is $0.000145/vCore/second. Then the compute billed for this 24-hour period is the product of the compute unit price and vCore seconds billed: $0.000145/vCore/second * 50400 vCore seconds ~ $7.31
+Suppose the compute unit price is $0.000145/vCore/second. Then the compute billed for this 24-hour period is the product of the compute unit price and vCore seconds billed: $0.000145/vCore/second * 50400 vCore seconds ~ $7.31.
### Azure Hybrid Benefit and reserved capacity
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/introduction.md
Last updated 04/20/2021
# What is Azure VMware Solution?
-Azure VMware Solution provides you with private clouds that contain vSphere clusters, built from dedicated bare-metal Azure infrastructure. The minimum initial deployment is three hosts, but additional hosts can be added one at a time, up to a maximum of 16 hosts per cluster. All provisioned private clouds have vCenter Server, vSAN, vSphere, and NSX-T. You can migrate workloads from your on-premises environments, deploy new virtual machines (VMs), and consume Azure services from your private clouds. Azure VMware management tools (vCenter Server and NSX Manager) will be available at least 99.9% of the time. For more information, see [Azure VMware Solution SLA](https://aka.ms/avs/sla).
+Azure VMware Solution provides you with private clouds that contain vSphere clusters built from dedicated bare-metal Azure infrastructure. The minimum initial deployment is three hosts, but additional hosts can be added one at a time, up to a maximum of 16 hosts per cluster. All provisioned private clouds have vCenter Server, vSAN, vSphere, and NSX-T. As a result, you can migrate workloads from your on-premises environments, deploy new virtual machines (VMs), and consume Azure services from your private clouds. In addition, Azure VMware Solution management tools (vCenter Server and NSX Manager) are available at least 99.9% of the time. For more information, see [Azure VMware Solution SLA](https://aka.ms/avs/sla).
-Azure VMware Solution is a VMware validated solution with on-going validation and testing of enhancements and upgrades. Microsoft manages and maintains private cloud infrastructure and software. It allows you to focus on developing and running workloads in your private clouds.
+Azure VMware Solution is a VMware validated solution with ongoing validation and testing of enhancements and upgrades. Microsoft manages and maintains private cloud infrastructure and software. It allows you to focus on developing and running workloads in your private clouds.
-The diagram shows the adjacency between private clouds and VNets in Azure, Azure services, and on-premises environments. Network access from private clouds to Azure services or VNets provides SLA-driven integration of Azure service endpoints. ExpressRoute Global Reach connects your on-premises environment to your Azure VMware Solution private cloud.
+The diagram shows the adjacency between private clouds and VNets in Azure, Azure services, and on-premises environments. Network access from private clouds to Azure services or VNets provides SLA-driven integration of Azure service endpoints. ExpressRoute Global Reach connects your on-premises environment to your Azure VMware Solution private cloud.
+
:::image type="content" source="media/adjacency-overview-drawing-final.png" alt-text="Diagram of Azure VMware Solution private cloud adjacency to Azure and on-premises." border="false":::

## Hosts, clusters, and private clouds
-Azure VMware Solution private clouds and clusters are built from a bare-metal, hyper-converged Azure infrastructure host. The high-end hosts have 576-GB RAM and dual Intel 18 core, 2.3-GHz processors. The HE hosts have two vSAN diskgroups with 15.36 TB (SSD) of raw vSAN capacity tier and a 3.2 TB (NVMe) vSAN cache tier.
+Azure VMware Solution private clouds and clusters are built from a bare-metal, hyper-converged Azure infrastructure host. The high-end (HE) hosts have 576-GB RAM and dual Intel 18 core, 2.3-GHz processors. In addition, the HE hosts have two vSAN disk groups with 15.36 TB (SSD) of raw vSAN capacity tier and a 3.2 TB (NVMe) vSAN cache tier.
+
+You can deploy new private clouds through the Azure portal or Azure CLI.
-New private clouds are deployed through the Azure portal or Azure CLI.
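As a rough sketch of the CLI path, assuming the `az vmware` extension is installed (the SKU, names, and /22 network block are placeholders, and parameter names may vary by extension version):

```azurecli
# Sketch: create a private cloud with the minimum three hosts; the /22 block is used for private cloud management
az vmware private-cloud create \
  --resource-group <resource-group> \
  --name <private-cloud-name> \
  --location <region> \
  --sku AV36 \
  --cluster-size 3 \
  --network-block 10.0.0.0/22
```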
## Networking
Regular upgrades of the Azure VMware Solution private cloud and VMware software
## Monitoring your private cloud
-Once Azure VMware Solution is deployed into your subscription, [Azure Monitor logs](../azure-monitor/overview.md) are generated automatically.
+Once you've deployed Azure VMware Solution into your subscription, [Azure Monitor logs](../azure-monitor/overview.md) are generated automatically.
In your private cloud, you can: - Collect logs on each of your VMs.
azure-vmware Plan Private Cloud Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/plan-private-cloud-deployment.md
Last updated 07/07/2021
# Plan the Azure VMware Solution deployment
-Planning your Azure VMware Solution deployment is critical for a successful production-ready environment for creating virtual machines (VMs) and migration. During the planning process, you'll identify and gather what's needed for your deployment. As you plan, make sure to document the information you gather for easy reference during the deployment. A successful deployment results in a production-ready environment for creating virtual machines (VMs) and migration.
+Planning your Azure VMware Solution deployment is critical for a successful production-ready environment for creating virtual machines (VMs) and migration. During the planning process, you'll identify and gather what's needed for your deployment. As you plan, make sure to document the information you gather for easy reference during the deployment. A successful deployment results in a production-ready environment for creating virtual machines (VMs) and migration.
In this how-to, you'll:
> [!div class="checklist"]
> * Identify the Azure subscription, resource group, region, and resource name
-> * Identify the size hosts and determin the number of clusters and hosts
+> * Identify the host size and determine the number of clusters and hosts
> * Request a host quota for an eligible Azure plan
> * Identify the /22 CIDR IP segment for private cloud management
> * Identify a single network segment
> * Define the virtual network gateway
> * Define VMware HCX network segments
-After you're finished, follow the recommended next steps at the end to continue with the steps of this getting started guide.
+After you're finished, follow the recommended next steps at the end to continue with this getting started guide.
## Identify the subscription
The first Azure VMware Solution deployment you do consists of a private cloud co
## Request a host quota
-It's important to request a host quota early, so after you've finished the planning process, you're ready to deploy your Azure VMware Solution private cloud.
+It's crucial to request a host quota early, so after you've finished the planning process, you're ready to deploy your Azure VMware Solution private cloud.
+Before requesting a host quota, make sure you've identified the Azure subscription, resource group, and region. Also, make sure you've identified the host size and determined the number of clusters and hosts you'll need.
-Before you request a host quota, make sure you've identified the Azure subscription, resource group, and region. Also make sure you've identified the size hosts and determine the number of clusters and hosts you'll need. After the support team receives your request for a host quota, it takes up to five business days to confirm your request and allocate your hosts.
+After the support team receives your request for a host quota, it takes up to five business days to confirm your request and allocate your hosts.
- [EA customers](request-host-quota-azure-vmware-solution.md#request-host-quota-for-ea-customers) - [CSP customers](request-host-quota-azure-vmware-solution.md#request-host-quota-for-csp-customers)
For the initial deployment, identify a single network segment (IP network), for
## Define the virtual network gateway
-Azure VMware Solution requires an Azure Virtual Network and an ExpressRoute circuit. Define whether you want to use an *existing* OR *new* ExpressRoute virtual network gateway. If you decide to use a *new* virtual network gateway, you'll create it after you create your private cloud. It's acceptable to use an existing ExpressRoute virtual network gateway, and for planning purposes, make note of which ExpressRoute virtual network gateway you'll use.
+Azure VMware Solution requires an Azure Virtual Network and an ExpressRoute circuit. Define whether you want to use an *existing* OR *new* ExpressRoute virtual network gateway. If you decide to use a *new* virtual network gateway, you'll create it after creating your private cloud. It's acceptable to use an existing ExpressRoute virtual network gateway, and for planning purposes, make a note of which ExpressRoute virtual network gateway you'll use.
:::image type="content" source="media/pre-deployment/azure-vmware-solution-expressroute-diagram.png" alt-text="Diagram that shows the Azure Virtual Network attached to Azure VMware Solution" border="false":::
Azure VMware Solution requires an Azure Virtual Network and an ExpressRoute circ
## Define VMware HCX network segments
-VMware HCX is an application mobility platform designed for simplifying application migration, workload rebalancing, and business continuity across data centers and clouds. You can migrate your VMware workloads to Azure VMware Solution and other connected sites through various migration types.
+VMware HCX is an application mobility platform that simplifies application migration, workload rebalancing, and business continuity across data centers and clouds. You can migrate your VMware workloads to Azure VMware Solution and other connected sites through various migration types.
VMware HCX Connector deploys a subset of virtual appliances (automated) that require multiple IP segments. When you create your network profiles, you use the IP segments. Identify the following for the VMware HCX deployment, which supports a pilot or small product use case. Depending on the needs of your migration, modify as necessary.
VMware HCX Connector deploys a subset of virtual appliances (automated) that req
>[!NOTE] >Preparing for large environments, instead of using the management network used for the on-premises VMware cluster, create a new /26 network and present that network as a port group to your on-premises VMware cluster. You can then create up to 10 service meshes and 60 network extenders (-1 per service mesh). You can stretch **eight** networks per network extender by using Azure VMware Solution private clouds. -- **Uplink network:** When deploying VMware HCX on-premises, you'll need to identify a Uplink network for VMware HCX. Use the same network which you will be using for the Management network.
+- **Uplink network:** When deploying VMware HCX on-premises, you'll need to identify an Uplink network for VMware HCX. Use the same network that you'll use for the Management network.
- **vMotion network:** When deploying VMware HCX on-premises, you'll need to identify a vMotion network for VMware HCX. Typically, it's the same network used for vMotion by your on-premises VMware cluster. At a minimum, identify **two** IPs on this network segment for VMware HCX. You might need larger numbers, depending on the scale of your deployment beyond the pilot or small use case.
- The vMotion network must be exposed on a distributed virtual switch or vSwitch0. If it's not, modify the environment to accommodate.
+ You must expose the vMotion network on a distributed virtual switch or vSwitch0. If it isn't exposed there, modify the environment to accommodate it.
>[!NOTE] >Many VMware environments use non-routed network segments for vMotion, which poses no problems.
azure-vmware Request Host Quota Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/request-host-quota-azure-vmware-solution.md
Title: Request host quota for Azure VMware Solution
description: Learn how to request host quota/capacity for Azure VMware Solution. You can also request more hosts in an existing Azure VMware Solution private cloud. Previously updated : 07/07/2021 Last updated : 08/06/2021 #Customer intent: As an Azure service admin, I want to request hosts for either a new private cloud deployment or I want to have more hosts allocated in an existing private cloud.
Last updated 07/07/2021
# Request host quota for Azure VMware Solution
-In this how-to, you'll request host quot). You'll submit a support ticket to have your hosts allocated whether it's for a new deployment or an existing private cloud.
+In this how-to, you'll request host quota for Azure VMware Solution. You'll submit a support ticket to have your hosts allocated, whether it's for a new deployment or an existing one.
If you have an existing Azure VMware Solution private cloud and want more hosts allocated, you'll follow the same process.
You'll need an Azure account in an Azure subscription. The Azure subscription mu
- A subscription under an [Azure Enterprise Agreement (EA)](../cost-management-billing/manage/ea-portal-agreements.md) with Microsoft. - A Cloud Solution Provider (CSP) managed subscription under an existing CSP Azure offers contract or an Azure plan.
+- A [Modern Commerce Agreement](../cost-management-billing/understand/mca-overview.md) with Microsoft.
## Request host quota for EA customers
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-arm-restore-vms.md
Title: Restore VMs by using the Azure portal
description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 08/03/2021 Last updated : 08/05/2021 # How to restore Azure VM data in Azure portal
You can also select the [user-managed identity](/azure/active-directory/managed-
>[!Note] >The support is available for only managed VMs, and not supported for classic VMs and unmanaged VMs. For the [storage accounts that are restricted with firewalls](/azure/storage/common/storage-network-security?tabs=azure-portal), system MSI is only supported. >
+>Cross Region Restore isn't supported with managed identities.
+>
>Currently, this is available in all Azure public regions, except Germany West Central and India Central. ## Track the restore operation
backup Backup Blobs Storage Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-blobs-storage-account-cli.md
Title: Back up Azure Blobs using Azure CLI description: Learn how to back up Azure Blobs using Azure CLI. Previously updated : 06/18/2021 Last updated : 08/06/2021 # Back up Azure Blobs in a storage account using Azure CLI
This article describes how to back up [Azure Blobs](./blob-backup-overview.md) u
In this article, you'll learn how to:
+- Before you start
+ - Create a Backup vault - Create a Backup policy
In this article, you'll learn how to:
For information on the Azure Blobs regions availability, supported scenarios, and limitations, see the [support matrix](blob-backup-support-matrix.md).
+## Before you start
+
+See the [prerequisites](/azure/backup/blob-backup-configure-manage#before-you-start) and [support matrix](/azure/backup/blob-backup-support-matrix) before you get started.
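One prerequisite worth verifying up front is that the **Microsoft.DataProtection** resource provider is registered on your subscription (it's listed in the configure-and-manage prerequisites). A minimal check with the Azure CLI:

```azurecli
# Check whether the Microsoft.DataProtection resource provider is registered
az provider show --namespace Microsoft.DataProtection --query registrationState

# Register it if the state isn't "Registered"
az provider register --namespace Microsoft.DataProtection
```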
## Create a Backup vault

A Backup vault is a storage entity in Azure that stores backup data for various newer workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers, blobs in a storage account, and Azure Disks. Backup vaults make it easy to organize your backup data, while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides enhanced capabilities to help secure backup data.
backup Backup Blobs Storage Account Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-blobs-storage-account-ps.md
Title: Back up Azure blobs within a storage account using Azure PowerShell description: Learn how to back up all Azure blobs within a storage account using Azure PowerShell. Previously updated : 05/05/2021 Last updated : 08/06/2021 # Back up all Azure blobs in a storage account using Azure PowerShell
This article describes how to back up all [Azure blobs](./blob-backup-overview.m
In this article, you'll learn how to:
+- Before you start
+ - Create a Backup vault - Create a backup policy
For information on the Azure blob region availability, supported scenarios and l
> [!IMPORTANT] > Support for Azure blobs is available from Az 5.9.0 version.
+## Before you start
+
+See the [prerequisites](/azure/backup/blob-backup-configure-manage#before-you-start) and [support matrix](/azure/backup/blob-backup-support-matrix) before you get started.
+ ## Create a Backup vault A Backup vault is a storage entity in Azure that holds backup data for various newer workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers, Azure blobs and Azure blobs. Backup vaults make it easy to organize your backup data, while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides enhanced capabilities to help secure backup data.
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service. Previously updated : 08/03/2021 Last updated : 08/05/2021
Backup of Azure VMs with locks | Unsupported for unmanaged VMs. <br><br> Support
[Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) | Supported<br></br>While restoring an Azure VM through the [Create New](backup-azure-arm-restore-vms.md#create-a-vm) option, the restore succeeds, but the Azure VM can't be restored in the dedicated host. To achieve this, we recommend that you restore as disks. While [restoring as disks](backup-azure-arm-restore-vms.md#restore-disks) with the template, create a VM in the dedicated host, and then attach the disks.<br></br>This isn't applicable in the secondary region while performing [Cross Region Restore](backup-azure-arm-restore-vms.md#cross-region-restore).
Windows Storage Spaces configuration of standalone Azure VMs | Supported
[Azure VM Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for flexible orchestration model to back up and restore a single Azure VM.
-Restore with Managed identities | Yes, supported for managed Azure VMs, and not supported for classic and unmanaged Azure VMs. <br><br> Currently, this is available in all Azure public regions, except Germany West Central and India Central. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities).
+Restore with Managed identities | Yes, supported for managed Azure VMs, and not supported for classic and unmanaged Azure VMs. <br><br> Cross Region Restore isn't supported with managed identities. <br><br> Currently, this is available in all Azure public regions, except Germany West Central and India Central. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities).
## VM storage support
backup Blob Backup Configure Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/blob-backup-configure-manage.md
Title: Configure operational backup for Azure Blobs description: Learn how to configure and manage operational backup for Azure Blobs. Previously updated : 05/05/2021 Last updated : 08/06/2021
Azure Backup lets you easily configure operational backup for protecting block b
- This solution allows you to retain your data for restore for up to 360 days. Long retention durations may, however, lead to longer time taken during the restore operation.
- The solution can be used to perform restores to the source storage account only and may result in data being overwritten.
- If you delete a container from the storage account by calling the Delete Container operation, that container cannot be restored with a restore operation. Rather than deleting an entire container, delete individual blobs if you may want to restore them later. Also, Microsoft recommends enabling soft delete for containers, in addition to operational backup, to protect against accidental deletion of containers.
+- Ensure that the **Microsoft.DataProtection** provider is registered for your subscription.
- Refer to the [support matrix](blob-backup-support-matrix.md) to learn more about the supported scenarios, limitations, and availability.

## Create a Backup vault
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 05/05/2021 Last updated : 08/05/2021 # What's new in Azure Backup
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary
+- July 2021
+ - [Archive Tier support for SQL Server in Azure VM for Azure Backup is now generally available](#archive-tier-support-for-sql-server-in-azure-vm-for-azure-backup-is-now-generally-available)
- May 2021 - [Backup for Azure Blobs is now generally available](#backup-for-azure-blobs-is-now-generally-available) - April 2021
You can learn more about the new releases by bookmarking this page or by [subscr
- [Zone redundant storage (ZRS) for backup data (in preview)](#zone-redundant-storage-zrs-for-backup-data-in-preview) - [Soft delete for SQL Server and SAP HANA workloads in Azure VMs](#soft-delete-for-sql-server-and-sap-hana-workloads)
+## Archive Tier support for SQL Server in Azure VM for Azure Backup is now generally available
+
+Azure Backup allows you to move your long-term retention points for Azure Virtual Machines and SQL Server in Azure Virtual Machines to the low-cost Archive Tier. You can also restore from the recovery points in the Vault-archive tier.
+
+In addition to the capability to move the recovery points:
+
+- Azure Backup provides recommendations to move a specific set of recovery points for Azure Virtual Machine backups that'll ensure cost savings.
+- You can move all recovery points for a particular backup item in one go by using sample scripts.
+- You can view Archive storage usage on the Vault dashboard.
+
+For more information, see [Archive Tier support](/azure/backup/archive-tier-support).
+
## Backup for Azure Blobs is now generally available

Operational backup for Azure Blobs is a managed-data protection solution that lets you protect your block blob data from various data loss scenarios, such as blob corruptions, blob deletions, and accidental deletion of storage accounts.
batch Batch Applications To Pool Nodes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-applications-to-pool-nodes.md
For applications or data that need to be installed on every node in the pool, co
Application packages are useful when you have a large number of files, because they can combine many file references into a small payload. If you try to include more than 100 separate resource files into one task, the Batch service might come up against internal system limitations for a single task. Application packages are also useful when you have many different versions of the same application and need to choose between them.
+## Extensions
+
+[Extensions](create-pool-extensions.md) are small applications that facilitate post-provisioning configuration and setup on Batch compute nodes. When you create a pool, you can select a supported extension to be installed on the compute nodes as they are provisioned. After that, the extension can perform its intended operation.
+ ## Job preparation task resource files For applications or data that must be installed for the job to run, but don't need to be installed on the entire pool, consider using [job preparation task resource files](./batch-job-prep-release.md).
batch Create Pool Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/create-pool-availability-zones.md
Title: Create a pool across availability zones description: Learn how to create a Batch pool with zonal policy to help protect against failures. Previously updated : 01/28/2021 Last updated : 08/06/2021 # Create an Azure Batch pool across Availability Zones
Request body
- Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks.
- Learn about [creating a pool in a subnet of an Azure virtual network](batch-virtual-network.md).
-- Learn about [creating an Azure Batch pool without public IP addresses](./batch-pool-no-public-ip-address.md).
+- Learn about [creating an Azure Batch pool without public IP addresses](./batch-pool-no-public-ip-address.md).
batch Create Pool Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/create-pool-extensions.md
Title: Use extensions with Batch pools description: Extensions are small applications that facilitate post-provisioning configuration and setup on Batch compute nodes. Previously updated : 02/10/2021 Last updated : 08/06/2021 # Use extensions with Batch pools
certification Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/overview.md
# What is the Azure Certified Device program?
-Thank you for your interest in the Azure Certified Device program! Azure Certified Device is a free program that enables you to differentiating, certify, and promote your IoT devices built to run on Azure. From intelligent cameras to connected sensors to edge infrastructure, this enhanced IoT device certification program helps device builders increase their product visibility and saves customers time in building solutions.
+Thank you for your interest in the Azure Certified Device program! Azure Certified Device is a free program that enables you to differentiate, certify, and promote your IoT devices built to run on Azure. From intelligent cameras to connected sensors to edge infrastructure, this enhanced IoT device certification program helps device builders increase their product visibility and saves customers time in building solutions.
## Our certification promise
Once you have certified your device, you then can optionally complete two of the
Ready to get started with your certification journey? View our resources below to start the device certification process!

- [Starting the certification process](tutorial-00-selecting-your-certification.md)
-- If you have other questions or feedback, contact [the Azure Certified Device team](mailto:iotcert@microsoft.com).
+- If you have other questions or feedback, contact [the Azure Certified Device team](mailto:iotcert@microsoft.com).
cloud-services-extended-support Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-powershell.md
# Deploy a Cloud Service (extended support) using Azure PowerShell
-This article shows how to use the `Az.CloudService` PowerShell module to deploy Cloud Services (extended support) in Azure that has multiple roles (WebRole and WorkerRole) and remote desktop extension.
+This article shows how to use the `Az.CloudService` PowerShell module to deploy a Cloud Service (extended support) in Azure that has multiple roles (WebRole and WorkerRole).
-## Before you begin
+## Prerequisites
-Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the associated resources.
+1. Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the associated resources.
+2. Install Az.CloudService PowerShell module.
-## Deploy a Cloud Services (extended support)
-1. Install Az.CloudService PowerShell module
-
- ```powershell
+ ```azurepowershell-interactive
 Install-Module -Name Az.CloudService
 ```
-2. Create a new resource group. This step is optional if using an existing resource group.
+3. Create a new resource group. This step is optional if using an existing resource group.
- ```powershell
+ ```azurepowershell-interactive
 New-AzResourceGroup -ResourceGroupName "ContosOrg" -Location "East US"
 ```
-3. Create a storage account and container which will be used to store the Cloud Service package (.cspkg) and Service Configuration (.cscfg) files. You must use a unique name for storage account name.
+4. Create a storage account and container, which will be used to store the Cloud Service package (.cspkg) and Service Configuration (.cscfg) files. The storage account name must be unique. This step is optional if you're using an existing storage account.
- ```powershell
+ ```azurepowershell-interactive
 $storageAccount = New-AzStorageAccount -ResourceGroupName "ContosOrg" -Name "contosostorageaccount" -Location "East US" -SkuName "Standard_RAGRS" -Kind "StorageV2"
 $container = New-AzStorageContainer -Name "contosocontainer" -Context $storageAccount.Context -Permission Blob
 ```
+
+## Deploy a Cloud Service (extended support)
+
+Use any of the following PowerShell cmdlets to deploy Cloud Services (extended support):
+
+**Quick Create Cloud Service using a Storage Account**
+
+- This parameter set takes the .cscfg, .cspkg, and .csdef files as inputs along with the storage account.
+- The cloud service role profile, network profile, and OS profile are created by the cmdlet with minimal input from the user.
+- For certificate input, specify the key vault name. The certificate thumbprints in the key vault are validated against those specified in the .cscfg file.
+
+**Quick Create Cloud Service using a SAS URI**
+
+- This parameter set takes the SAS URI of the .cspkg file along with the local paths of the .csdef and .cscfg files. No storage account input is required.
+- The cloud service role profile, network profile, and OS profile are created by the cmdlet with minimal input from the user.
+- For certificate input, specify the key vault name. The certificate thumbprints in the key vault are validated against those specified in the .cscfg file.
+
+**Create Cloud Service with role, OS, network and extension profile and SAS URIs**
+
+- This parameter set takes the SAS URIs of the .cscfg and .cspkg files.
+- The role, network, OS, and extension profiles must be specified by the user and must match the values in the .cscfg and .csdef files.
+
+### Quick Create Cloud Service using a Storage Account
+
+Create Cloud Service deployment using .cscfg, .csdef and .cspkg files.
+
+```azurepowershell-interactive
+$cspkgFilePath = "<Path to cspkg file>"
+$cscfgFilePath = "<Path to cscfg file>"
+$csdefFilePath = "<Path to csdef file>"
+
+# Create Cloud Service
+New-AzCloudService `
+-Name "ContosoCS" `
+-ResourceGroupName "ContosOrg" `
+-Location "EastUS" `
+-ConfigurationFile $cscfgFilePath `
+-DefinitionFile $csdefFilePath `
+-PackageFile $cspkgFilePath `
+-StorageAccount $storageAccount `
+[-KeyVaultName <string>]
+```
+
+### Quick Create Cloud Service using a SAS URI
-4. Upload your Cloud Service package (cspkg) to the storage account.
+1. Upload your Cloud Service package (cspkg) to the storage account.
- ```powershell
+ ```azurepowershell-interactive
 $tokenStartTime = Get-Date
 $tokenEndTime = $tokenStartTime.AddYears(1)
 $cspkgBlob = Set-AzStorageBlobContent -File "./ContosoApp/ContosoApp.cspkg" -Container "contosocontainer" -Blob "ContosoApp.cspkg" -Context $storageAccount.Context
 $cspkgToken = New-AzStorageBlobSASToken -Container "contosocontainer" -Blob $cspkgBlob.Name -Permission rwd -StartTime $tokenStartTime -ExpiryTime $tokenEndTime -Context $storageAccount.Context
 $cspkgUrl = $cspkgBlob.ICloudBlob.Uri.AbsoluteUri + $cspkgToken
+ $cscfgFilePath = "<Path to cscfg file>"
+ $csdefFilePath = "<Path to csdef file>"
```
-
-5. Upload your cloud service configuration (cscfg) to the storage account.
+2. Create the Cloud Service deployment using the .cscfg, .csdef, and .cspkg SAS URIs.
+
+ ```azurepowershell-interactive
+ New-AzCloudService `
+ -Name "ContosoCS" `
+ -ResourceGroupName "ContosOrg" `
+ -Location "EastUS" `
+ -ConfigurationFile $cscfgFilePath `
+ -DefinitionFile $csdefFilePath `
+ -PackageURL $cspkgUrl `
+ [-KeyVaultName <string>]
+ ```
+
+### Create Cloud Service using profile objects & SAS URIs
- ```powershell
+1. Upload your cloud service configuration (cscfg) to the storage account.
+
+ ```azurepowershell-interactive
 $tokenStartTime = Get-Date
 $tokenEndTime = $tokenStartTime.AddYears(1)
 $cscfgBlob = Set-AzStorageBlobContent -File "./ContosoApp/ContosoApp.cscfg" -Container "contosocontainer" -Blob "ContosoApp.cscfg" -Context $storageAccount.Context
 $cscfgToken = New-AzStorageBlobSASToken -Container "contosocontainer" -Blob $cscfgBlob.Name -Permission rwd -StartTime $tokenStartTime -ExpiryTime $tokenEndTime -Context $storageAccount.Context
 $cscfgUrl = $cscfgBlob.ICloudBlob.Uri.AbsoluteUri + $cscfgToken
 ```
+2. Upload your Cloud Service package (cspkg) to the storage account.
-6. Create a virtual network and subnet. This step is optional if using an existing network and subnet. This example uses a single virtual network and subnet for both cloud service roles (WebRole and WorkerRole).
+ ```azurepowershell-interactive
+ $tokenStartTime = Get-Date
+ $tokenEndTime = $tokenStartTime.AddYears(1)
+ $cspkgBlob = Set-AzStorageBlobContent -File "./ContosoApp/ContosoApp.cspkg" -Container "contosocontainer" -Blob "ContosoApp.cspkg" -Context $storageAccount.Context
+ $cspkgToken = New-AzStorageBlobSASToken -Container "contosocontainer" -Blob $cspkgBlob.Name -Permission rwd -StartTime $tokenStartTime -ExpiryTime $tokenEndTime -Context $storageAccount.Context
+ $cspkgUrl = $cspkgBlob.ICloudBlob.Uri.AbsoluteUri + $cspkgToken
+ ```
+
+3. Create a virtual network and subnet. This step is optional if using an existing network and subnet. This example uses a single virtual network and subnet for both cloud service roles (WebRole and WorkerRole).
- ```powershell
+ ```azurepowershell-interactive
 $subnet = New-AzVirtualNetworkSubnetConfig -Name "ContosoWebTier1" -AddressPrefix "10.0.0.0/24" -WarningAction SilentlyContinue
 $virtualNetwork = New-AzVirtualNetwork -Name "ContosoVNet" -Location "East US" -ResourceGroupName "ContosOrg" -AddressPrefix "10.0.0.0/24" -Subnet $subnet
 ```
-7. Create a public IP address and set the DNS label property of the public IP address. Cloud Services (extended support) only supports [Basic](../virtual-network/public-ip-addresses.md#basic) SKU Public IP addresses. Standard SKU Public IPs do not work with Cloud Services.
-If you are using a Static IP you need to reference it as a Reserved IP in Service Configuration (.cscfg) file
+4. Create a public IP address and set the DNS label property of the public IP address. Cloud Services (extended support) only supports [Basic](../virtual-network/public-ip-addresses.md#basic) SKU Public IP addresses. Standard SKU Public IPs do not work with Cloud Services.
+If you are using a Static IP, you need to reference it as a Reserved IP in the Service Configuration (.cscfg) file.
- ```powershell
+ ```azurepowershell-interactive
 $publicIp = New-AzPublicIpAddress -Name "ContosIp" -ResourceGroupName "ContosOrg" -Location "East US" -AllocationMethod Dynamic -IpAddressVersion IPv4 -DomainNameLabel "contosoappdns" -Sku Basic
 ```
-8. Create a Network Profile Object and associate the public IP address to the frontend of the load balancer. The Azure platform automatically creates a 'Classic' SKU load balancer resource in the same subscription as the cloud service resource. The load balancer resource is a read-only resource in ARM. Any updates to the resource are supported only via the cloud service deployment files (.cscfg & .csdef)
+5. Create a Network Profile Object and associate the public IP address to the frontend of the load balancer. The Azure platform automatically creates a 'Classic' SKU load balancer resource in the same subscription as the cloud service resource. The load balancer resource is a read-only resource in Azure Resource Manager. Any updates to the resource are supported only via the cloud service deployment files (.cscfg & .csdef).
- ```powershell
+ ```azurepowershell-interactive
 $publicIP = Get-AzPublicIpAddress -ResourceGroupName ContosOrg -Name ContosIp
 $feIpConfig = New-AzCloudServiceLoadBalancerFrontendIPConfigurationObject -Name 'ContosoFe' -PublicIPAddressId $publicIP.Id
 $loadBalancerConfig = New-AzCloudServiceLoadBalancerConfigurationObject -Name 'ContosoLB' -FrontendIPConfiguration $feIpConfig
 $networkProfile = @{loadBalancerConfiguration = $loadBalancerConfig}
 ```
-9. Create a Key Vault. This Key Vault will be used to store certificates that are associated with the Cloud Service (extended support) roles. The Key Vault must be located in the same region and subscription as cloud service and have a unique name. For more information see [Use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md).
+6. Create a Key Vault. This Key Vault will be used to store certificates that are associated with the Cloud Service (extended support) roles. The Key Vault must be located in the same region and subscription as the cloud service and must have a unique name. For more information, see [Use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md).
- ```powershell
+ ```azurepowershell-interactive
 New-AzKeyVault -Name "ContosKeyVault" -ResourceGroupName "ContosOrg" -Location "East US"
 ```
-10. Update the Key Vault access policy and grant certificate permissions to your user account.
+7. Update the Key Vault access policy and grant certificate permissions to your user account.
- ```powershell
+ ```azurepowershell-interactive
 Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosOrg' -EnabledForDeployment
 Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosOrg' -UserPrincipalName 'user@domain.com' -PermissionsToCertificates create,get,list,delete
 ```
- Alternatively, set access policy via ObjectId (which can be obtained by running `Get-AzADUser`)
+ Alternatively, set access policy via ObjectId (which can be obtained by running `Get-AzADUser`).
- ```powershell
+ ```azurepowershell-interactive
 Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosOrg' -ObjectId 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' -PermissionsToCertificates create,get,list,delete
 ```
-11. For the purpose of this example we will add a self signed certificate to a Key Vault. The certificate thumbprint needs to be added in Cloud Service Configuration (.cscfg) file for deployment on cloud service roles.
+8. In this example, we will add a self-signed certificate to a Key Vault. The certificate thumbprint needs to be added to the Cloud Service Configuration (.cscfg) file for deployment on cloud service roles.
- ```powershell
+ ```azurepowershell-interactive
 $Policy = New-AzKeyVaultCertificatePolicy -SecretContentType "application/x-pkcs12" -SubjectName "CN=contoso.com" -IssuerName "Self" -ValidityInMonths 6 -ReuseKeyOnRenewal
 Add-AzKeyVaultCertificate -VaultName "ContosKeyVault" -Name "ContosCert" -CertificatePolicy $Policy
 ```
-12. Create an OS Profile in-memory object. OS Profile specifies the certificates which are associated to cloud service roles. This will be the same certificate created in the previous step.
+9. Create an OS Profile in-memory object. The OS profile specifies the certificates that are associated with cloud service roles. This is the same certificate created in the previous step.
- ```powershell
+ ```azurepowershell-interactive
 $keyVault = Get-AzKeyVault -ResourceGroupName ContosOrg -VaultName ContosKeyVault
 $certificate = Get-AzKeyVaultCertificate -VaultName ContosKeyVault -Name ContosCert
 $secretGroup = New-AzCloudServiceVaultSecretGroupObject -Id $keyVault.ResourceId -CertificateUrl $certificate.SecretId
 $osProfile = @{secret = @($secretGroup)}
 ```
-13. Create a Role Profile in-memory object. Role profile defines a roles sku specific properties such as name, capacity and tier. For this example, we have defined two roles: frontendRole and backendRole. Role profile information should match the role configuration defined in configuration (cscfg) file and service definition (csdef) file.
+10. Create a Role Profile in-memory object. The role profile defines a role's SKU-specific properties, such as name, capacity, and tier. In this example, two roles are defined: frontendRole and backendRole. Role profile information should match the role configuration defined in the configuration (.cscfg) file and service definition (.csdef) file.
- ```powershell
+ ```azurepowershell-interactive
 $frontendRole = New-AzCloudServiceRoleProfilePropertiesObject -Name 'ContosoFrontend' -SkuName 'Standard_D1_v2' -SkuTier 'Standard' -SkuCapacity 2
 $backendRole = New-AzCloudServiceRoleProfilePropertiesObject -Name 'ContosoBackend' -SkuName 'Standard_D1_v2' -SkuTier 'Standard' -SkuCapacity 2
 $roleProfile = @{role = @($frontendRole, $backendRole)}
 ```
-14. (Optional) Create a Extension Profile in-memory object that you want to add to your cloud service. For this example we will add RDP extension.
+11. (Optional) Create an Extension Profile in-memory object that you want to add to your cloud service. In this example, we add the remote desktop (RDP) extension.
- ```powershell
+ ```azurepowershell-interactive
 $credential = Get-Credential
 $expiration = (Get-Date).AddYears(1)
 $rdpExtension = New-AzCloudServiceRemoteDesktopExtensionObject -Name 'RDPExtension' -Credential $credential -Expiration $expiration -TypeHandlerVersion '1.2.1'
If you are using a Static IP you need to reference it as a Reserved IP in Servic
 $wadExtension = New-AzCloudServiceDiagnosticsExtension -Name "WADExtension" -ResourceGroupName "ContosOrg" -CloudServiceName "ContosCS" -StorageAccountName "contosostorageaccount" -StorageAccountKey $storageAccountKey[0].Value -DiagnosticsConfigurationPath $configFile -TypeHandlerVersion "1.5" -AutoUpgradeMinorVersion $true
 $extensionProfile = @{extension = @($rdpExtension, $wadExtension)}
 ```
- Note that configFile should have only PublicConfig tags and should contain a namespace as following:
+
+ The config file should contain only PublicConfig tags and should include a namespace as follows:
+
 ```xml
 <?xml version="1.0" encoding="utf-8"?>
 <PublicConfig xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
 ...............
 </PublicConfig>
 ```
-15. (Optional) Define Tags as PowerShell hash table which you want to add to your cloud service.
+
+12. (Optional) Define tags as a PowerShell hash table that you want to add to your cloud service.
- ```powershell
+ ```azurepowershell-interactive
 $tag=@{"Owner" = "Contoso"}
 ```
-17. Create Cloud Service deployment using profile objects & SAS URLs.
+13. Create Cloud Service deployment using profile objects & SAS URLs.
- ```powershell
+ ```azurepowershell-interactive
$cloudService = New-AzCloudService `
- -Name ΓÇ£ContosoCSΓÇ¥ `
- -ResourceGroupName ΓÇ£ContosOrgΓÇ¥ `
- -Location ΓÇ£East USΓÇ¥ `
- -PackageUrl $cspkgUrl `
- -ConfigurationUrl $cscfgUrl `
- -UpgradeMode 'Auto' `
- -RoleProfile $roleProfile `
- -NetworkProfile $networkProfile `
- -ExtensionProfile $extensionProfile `
- -OSProfile $osProfile `
- -Tag $tag
+ -Name "ContosoCS" `
+ -ResourceGroupName "ContosOrg" `
+ -Location "East US" `
+ -PackageUrl $cspkgUrl `
+ -ConfigurationUrl $cscfgUrl `
+ -UpgradeMode 'Auto' `
+ -RoleProfile $roleProfile `
+ -NetworkProfile $networkProfile `
+ -ExtensionProfile $extensionProfile `
+ -OSProfile $osProfile `
+ -Tag $tag
 ```

## Next steps

- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md), or [Visual Studio](deploy-visual-studio.md).
-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support).
cloud-shell Private Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-shell/private-vnet.md
As in standard Cloud Shell, a storage account is required while using Cloud Shel
## Virtual network deployment limitations * Due to the additional networking resources involved, starting Cloud Shell in a virtual network is typically slower than a standard Cloud Shell session.
-* All Cloud Shell regions apart from Central India are currently supported.
+* All Cloud Shell primary regions apart from Central India are currently supported.
* [Azure Relay](../azure-relay/relay-what-is-it.md) isn't a free service; see its [pricing](https://azure.microsoft.com/pricing/details/service-bus/). In the Cloud Shell scenario, one hybrid connection is used for each administrator while they're using Cloud Shell. The connection is automatically shut down after the Cloud Shell session is complete.
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Audio files can have silence at the beginning and end of the recording. If possi
[!INCLUDE [supported-audio-formats](includes/supported-audio-formats.md)]
+> [!TIP]
+> Don't have any real audio yet? You can also upload a text (.txt) file with some test sentences (select the type **Transcript (automatic audio synthesis)** when uploading data), and an audio pair for each spoken sentence will be synthesized automatically.
+>
+> The maximum file size is 500 KB. One audio file is synthesized for each line, and the maximum size of each line is 65,535 bytes.
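As a quick sanity check before uploading, the following PowerShell sketch (with a hypothetical file path, and assuming UTF-8 encoding and 500 KB interpreted as the PowerShell `500KB` literal) verifies a transcript file against these limits.

```azurepowershell-interactive
# Hypothetical pre-upload check of a transcript (.txt) file against the stated limits.
$transcriptPath = ".\test-sentences.txt"

# The whole file must be smaller than 500 KB.
$fileSize = (Get-Item -Path $transcriptPath).Length
if ($fileSize -ge 500KB) {
    Write-Warning "File is $fileSize bytes; it must be smaller than 500 KB."
}

# One audio file is synthesized per line; each line may be at most 65,535 bytes.
Get-Content -Path $transcriptPath | ForEach-Object {
    if ([System.Text.Encoding]::UTF8.GetByteCount($_) -gt 65535) {
        Write-Warning "A line exceeds 65,535 bytes: $($_.Substring(0, 60))..."
    }
}
```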
+
> [!NOTE]
> When uploading training and testing data, the .zip file size cannot exceed 2 GB. You can only test from a *single* dataset, so be sure to keep it within the appropriate file size. Additionally, each training file cannot exceed 60 seconds; otherwise it will error out.
Use <a href="http://sox.sourceforge.net" target="_blank" rel="noopener">SoX </a>
* [Inspect your data](how-to-custom-speech-inspect-data.md) * [Evaluate your data](how-to-custom-speech-evaluate-data.md) * [Train custom model](how-to-custom-speech-train-model.md)
-* [Deploy model](./how-to-custom-speech-train-model.md)
+* [Deploy model](./how-to-custom-speech-train-model.md)
communication-services Messaging Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/messaging-policy.md
# Azure Communication Services Messaging Policy -
-Azure Communication Services is transforming the way our customers engage with their clients by building rich, custom communication experiences that take advantage of the same enterprise-grade services that back Microsoft Teams and Skype. Integrate SMS messaging functionality into your communications solutions to reach your customers anytime and anywhere they need support. You just need to keep in mind a few messaging requirements to get started.
+Azure Communication Services is transforming the way our customers engage with their clients by building rich, custom communication experiences that take advantage of the same enterprise-grade services that back Microsoft Teams and Skype. Integrate SMS messaging functionality into your communications solutions to reach your customers anytime and anywhere they need support. You just need to keep in mind a few messaging requirements and industry standards to get started.
We know that messaging requirements can seem daunting to learn, but they're as easy as remembering "COMS":
Message content that includes elements of sex, hate, alcohol, firearms, tobacco,
Even where such content is not unlawful, you should include an age verification mechanism at opt-in to age-gate the intended message recipient from adult content. In the United States, additional legal requirements apply to marketing communications directed at children under the age of 13.
-### Prohibited content:
+### Prohibited practices:
+
+Both you and your customers are prohibited from using Azure Communication Services to evade reasonable opt-out requests. Additionally, you and your customers may not evade any measures implemented by Azure Communication Services or a communications service provider to ensure your compliance with messaging requirements and industry standards.
-Azure Communication Services prohibits certain message content regardless of consent. Prohibited content includes:
+Azure Communication Services also prohibits certain message content regardless of consent. Prohibited content includes:
- Content that promotes unlawful activities (e.g., tax evasion or animal cruelty in the United States)
- Hate speech, defamatory speech, harassment, or other speech determined to be patently offensive
- Pornographic content
Spoofing is the act of causing a misleading or inaccurate originating number to
This Messaging Policy does not constitute legal advice, and we reserve the right to modify the policy at any time. Azure Communication Services is not responsible for ensuring that the content, timing, or recipients of our customers' messages meet all applicable legal requirements.
-Our customers are responsible for all messaging requirements. If you are a platform or software provider that uses Azure Communication Services for messaging purposes, then you should require that your customers also abide by all of the requirements discussed in this Messaging Policy. For further guidance, the CTIA provides helpful [Messaging Principles and Best Practices](https://api.ctia.org/wp-content/uploads/2019/07/190719-CTIA-Messaging-Principles-and-Best-Practices-FINAL.pdf).
+Our customers are responsible for all messaging requirements. If you are a platform or software provider that uses Azure Communication Services for messaging purposes, then you should require that your customers also abide by all of the requirements discussed in this Messaging Policy. For further guidance, the CTIA's [Messaging Principles and Best Practices](https://api.ctia.org/wp-content/uploads/2019/07/190719-CTIA-Messaging-Principles-and-Best-Practices-FINAL.pdf) provides a helpful overview of the relevant industry standards.
### Penalties:
container-registry Container Registry Access Selected Networks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-access-selected-networks.md
If you use Azure Pipelines with an Azure container registry that limits access t
One workaround is to change the agent used to run the pipeline from Microsoft-hosted to self-hosted. With a self-hosted agent running on a [Windows](/azure/devops/pipelines/agents/v2-windows) or [Linux](/azure/devops/pipelines/agents/v2-linux) machine that you manage, you control the outbound IP address of the pipeline, and you can add this address in a registry IP access rule.
-## Access from AKS
+### Access from AKS
If you use Azure Kubernetes Service (AKS) with an Azure container registry that limits access to specific IP addresses, you can't configure a fixed AKS IP address by default. The egress IP address from the AKS cluster is randomly assigned.
cosmos-db Mongodb Mongochef https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-mongochef.md
To add your Azure Cosmos account to the Studio 3T connection manager, use the fo
3. In the **New Connection** window, on the **Server** tab, enter the HOST (FQDN) of the Azure Cosmos account and the PORT. :::image type="content" source="./media/mongodb-mongochef/ConnectionManagerServerTab.png" alt-text="Screenshot of the Studio 3T connection manager server tab":::
-4. In the **New Connection** window, on the **Authentication** tab, choose Authentication Mode **Basic (MONGODB-CR or SCARM-SHA-1)** and enter the USERNAME and PASSWORD. Accept the default authentication db (admin) or provide your own value.
+4. In the **New Connection** window, on the **Authentication** tab, choose Authentication Mode **Basic (MONGODB-CR or SCRAM-SHA-1)** and enter the USERNAME and PASSWORD. Accept the default authentication db (admin) or provide your own value.
 :::image type="content" source="./media/mongodb-mongochef/ConnectionManagerAuthenticationTab.png" alt-text="Screenshot of the Studio 3T connection manager authentication tab":::

5. In the **New Connection** window, on the **SSL** tab, select the **Use SSL protocol to connect** check box and the **Accept server self-signed SSL certificates** option.
cosmos-db Sql Query Join https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-join.md
Previously updated : 01/07/2021 Last updated : 08/06/2021
In a relational database, joins across tables are the logical corollary to designing normalized schemas. In contrast, the SQL API uses the denormalized data model of schema-free items, which is the logical equivalent of a *self-join*.
+> [!NOTE]
+> In Azure Cosmos DB, joins are scoped to a single item. Cross-item and cross-container joins are not supported. In NoSQL databases like Azure Cosmos DB, good [data modeling](modeling-data.md) can help avoid the need for cross-item and cross-container joins.
+
Joins result in a complete cross product of the sets participating in the join. The result of an N-way join is a set of N-element tuples, where each value in the tuple is associated with the aliased set participating in the join and can be accessed by referencing that alias in other clauses.

## Syntax
data-factory Author Global Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/author-global-parameters.md
Title: Global parameters description: Set global parameters for each of your Azure Data Factory environments +
data-factory Author Management Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/author-management-hub.md
Title: Management hub description: Manage your connections, source control configuration and global authoring properties in the Azure Data Factory management hub +
data-factory Author Visually https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/author-visually.md
Title: Visual authoring
description: Learn how to use visual authoring in Azure Data Factory +
data-factory Azure Integration Runtime Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/azure-integration-runtime-ip-addresses.md
description: Learn which IP addresses you must allow inbound traffic from, in or
+ Last updated 01/06/2020
data-factory Azure Ssis Integration Runtime Package Store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/azure-ssis-integration-runtime-package-store.md
Title: Manage packages with Azure-SSIS Integration Runtime package store description: Learn how to manage packages with Azure-SSIS Integration Runtime package store. +
data-factory Built In Preinstalled Components Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/built-in-preinstalled-components-ssis-integration-runtime.md
Title: Built-in and preinstalled components on Azure-SSIS Integration Runtime description: List all built-in and preinstalled components, such as clients, drivers, providers, connection managers, data sources/destinations/transformations, and tasks on Azure-SSIS Integration Runtime. +
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ci-cd-github-troubleshoot-guide.md
+ Last updated 06/27/2021
data-factory Compare Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/compare-versions.md
description: This article compares Azure Data Factory with Azure Data Factory ve
+ Last updated 04/09/2018
data-factory Compute Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/compute-linked-services.md
Title: Compute environments supported by Azure Data Factory
description: Compute environments that can be used with Azure Data Factory pipelines (such as Azure HDInsight) to transform or process data. +
data-factory Concepts Data Flow Column Pattern https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-column-pattern.md
+ Last updated 05/21/2021
data-factory Concepts Data Flow Debug Mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-debug-mode.md
description: Start an interactive debug session when building data flows
+ Last updated 04/16/2021
data-factory Concepts Data Flow Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-expression-builder.md
+ Last updated 04/29/2021
data-factory Concepts Data Flow Manage Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-manage-graph.md
+ Last updated 09/02/2020
data-factory Concepts Data Flow Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-monitoring.md
description: How to visually monitor mapping data flows in Azure Data Factory
+ Last updated 06/18/2021
data-factory Concepts Data Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-overview.md
description: An overview of mapping data flows in Azure Data Factory
+ Last updated 05/20/2021
data-factory Concepts Data Flow Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-performance.md
+ Last updated 06/07/2021
data-factory Concepts Data Flow Schema Drift https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-schema-drift.md
+ Last updated 04/15/2020
data-factory Concepts Data Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-redundancy.md
description: 'Learn about meta-data redundancy mechanisms in Azure Data Factory'
+ Last updated 11/05/2020
data-factory Concepts Datasets Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-datasets-linked-services.md
+ Last updated 08/24/2020
data-factory Concepts Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-integration-runtime.md
Title: Integration runtime
-description: 'Learn about integration runtime in Azure Data Factory.'
+description: Learn about the integration runtime in Azure Data Factory and Azure Synapse Analytics.
+ Last updated 06/16/2021
Last updated 06/16/2021
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-The Integration Runtime (IR) is the compute infrastructure used by Azure Data Factory to provide the following data integration capabilities across different network environments:
+The Integration Runtime (IR) is the compute infrastructure used by Azure Data Factory and Azure Synapse pipelines to provide the following data integration capabilities across different network environments:
- **Data Flow**: Execute a [Data Flow](concepts-data-flow-overview.md) in managed Azure compute environment. - **Data movement**: Copy data across data stores in public network and data stores in private network (on-premises or virtual private network). It provides support for built-in connectors, format conversion, column mapping, and performant and scalable data transfer. - **Activity dispatch**: Dispatch and monitor transformation activities running on a variety of compute services such as Azure Databricks, Azure HDInsight, Azure Machine Learning, Azure SQL Database, SQL Server, and more. - **SSIS package execution**: Natively execute SQL Server Integration Services (SSIS) packages in a managed Azure compute environment.
-In Data Factory, an activity defines the action to be performed. A linked service defines a target data store or a compute service. An integration runtime provides the bridge between the activity and linked Services. It's referenced by the linked service or activity, and provides the compute environment where the activity either runs on or gets dispatched from. This way, the activity can be performed in the region closest possible to the target data store or compute service in the most performant way while meeting security and compliance needs.
+In Data Factory and Synapse pipelines, an activity defines the action to be performed. A linked service defines a target data store or a compute service. An integration runtime provides the bridge between the activity and linked services. It's referenced by the linked service or activity, and provides the compute environment where the activity either runs or gets dispatched from. This way, the activity can be performed in the closest possible region to the target data store or compute service, in the most performant way, while meeting security and compliance needs.
-Integration runtimes can be created in the Azure Data Factory UX via the [management hub](author-management-hub.md) and any activities, datasets, or data flows that reference them.
+Integration runtimes can be created in the Azure Data Factory and Azure Synapse UI via the [management hub](author-management-hub.md), or from any activities, datasets, or data flows that reference them.
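Integration runtimes can also be managed programmatically. As an illustration, the following is a minimal PowerShell sketch, assuming the Az.DataFactory module and hypothetical resource names, that registers a self-hosted integration runtime and retrieves its authentication keys.

```azurepowershell-interactive
# Minimal sketch (hypothetical names): create a self-hosted IR in an existing data factory.
Set-AzDataFactoryV2IntegrationRuntime `
    -ResourceGroupName "ContosoRG" `
    -DataFactoryName "ContosoADF" `
    -Name "ContosoSelfHostedIR" `
    -Type SelfHosted `
    -Description "Self-hosted IR for data stores in a private network"

# Retrieve the authentication keys; use one of them when registering the IR node on your own machine.
Get-AzDataFactoryV2IntegrationRuntimeKey `
    -ResourceGroupName "ContosoRG" `
    -DataFactoryName "ContosoADF" `
    -Name "ContosoSelfHostedIR"
```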
## Integration runtime types
Data Factory offers three types of Integration Runtime (IR), and you should choo
- Self-hosted - Azure-SSIS
+> [!NOTE]
+> Synapse pipelines currently only support Azure or self-hosted integration runtimes.
+ The following table describes the capabilities and network support for each of the integration runtime types:
-IR type | Public network | Private network
+IR type | Public network | Private network
- | -- | --
Azure | Data Flow<br/>Data movement<br/>Activity dispatch | Data Flow<br/>Data movement<br/>Activity dispatch
Self-hosted | Data movement<br/>Activity dispatch | Data movement<br/>Activity dispatch
For high availability and scalability, you can scale out the self-hosted IR by a
## Azure-SSIS Integration Runtime
-To lift and shift existing SSIS workload, you can create an Azure-SSIS IR to natively execute SSIS packages.
+> [!NOTE]
+> Azure-SSIS integration runtimes are not currently supported in Synapse pipelines.
+
+To lift and shift existing SSIS workload, you can create an Azure-SSIS IR to natively execute SSIS packages.
### Azure-SSIS IR network environment
For more information about Azure-SSIS runtime, see the following articles:
### Relationship between factory location and IR location
-When customer creates a data factory instance, they need to specify the location for the data factory. The Data Factory location is where the metadata of the data factory is stored and where the triggering of the pipeline is initiated from. Metadata for the factory is only stored in the region of customerΓÇÖs choice and will not be stored in other regions.
+When a customer creates a Data Factory instance or Synapse Workspace, they need to specify its location. The metadata for the Data Factory or Synapse Workspace is stored there, and triggering of the pipeline is initiated from there. Metadata is only stored in the region of the customer's choice and will not be stored in other regions.
-Meanwhile, a data factory can access data stores and compute services in other Azure regions to move data between data stores or process data using compute services. This behavior is realized through the [globally available IR](https://azure.microsoft.com/global-infrastructure/services/) to ensure data compliance, efficiency, and reduced network egress costs.
+Meanwhile, an Azure Data Factory or Azure Synapse pipeline can access data stores and compute services in other Azure regions to move data between data stores or process data using compute services. This behavior is realized through the [globally available IR](https://azure.microsoft.com/global-infrastructure/services/) to ensure data compliance, efficiency, and reduced network egress costs.
-The IR Location defines the location of its back-end compute, and essentially the location where the data movement, activity dispatching, and SSIS package execution are performed. The IR location can be different from the location of the data factory it belongs to.
+The IR Location defines the location of its back-end compute, and essentially the location where the data movement, activity dispatching, and SSIS package execution are performed. The IR location can be different from the location of the Data Factory it belongs to.
### Azure IR location
You can set a certain location of an Azure IR, in which case the activity execut
If you choose to use the auto-resolve Azure IR in public network, which is the default,

-- For copy activity, ADF will make a best effort to automatically detect your sink data store's location, then use the IR in either the same region if available or the closest one in the same geography; if the sink data store's region is not detectable, IR in the data factory region as alternative is used.
+- For the copy activity, a best effort is made to automatically detect your sink data store's location, and then the IR in the same region (if available) or the closest one in the same geography is used; if the sink data store's region isn't detectable, the IR in the Data Factory region is used as an alternative.
- For example, you have your factory created in East US,
+ For example, suppose your Data Factory or Synapse Workspace was created in East US:
- - When copy data to Azure Blob in West US, if ADF successfully detected that the Blob is in West US, copy activity is executed on IR in West US; if the region detection fails, copy activity is executed on IR in East US.
+ - When copying data to Azure Blob in West US, if the Blob is detected in West US, copy activity is executed on the IR in West US; if the region detection fails, copy activity is executed on IR in East US.
 - When copying data to Salesforce, whose region is not detectable, the copy activity is executed on the IR in East US.

 >[!TIP]
 >If you have strict data compliance requirements and need to ensure that data doesn't leave a certain geography, you can explicitly create an Azure IR in a certain region and point the linked service to this IR using the ConnectVia property. For example, if you want to copy data from Blob storage in UK South to Azure Synapse Analytics in UK South and want to ensure data doesn't leave the UK, create an Azure IR in UK South and link both linked services to this IR.

-- For Lookup/GetMetadata/Delete activity execution (also known as Pipeline activities), transformation activity dispatching (also known as External activities), and authoring operations (test connection, browse folder list and table list, preview data), ADF uses the IR in the data factory region.
+- For Lookup/GetMetadata/Delete activity execution (also known as Pipeline activities), transformation activity dispatching (also known as External activities), and authoring operations (test connection, browse folder list and table list, preview data), the IR in the same region as the Data Factory or Synapse Workspace is used.
-- For Data Flow, ADF uses the IR in the data factory region.
+- For Data Flow, the IR in the Data Factory or Synapse Workspace region is used.
> [!TIP]
- > A good practice would be to ensure Data flow runs in the same region as your corresponding data stores (if possible). You can either achieve this by auto-resolve Azure IR (if data store location is same as Data Factory location), or by creating a new Azure IR instance in the same region as your data stores and then execute the data flow on it.
+ > A good practice would be to ensure Data flow runs in the same region as your corresponding data stores (if possible). You can either achieve this by auto-resolve Azure IR (if data store location is same as Data Factory or Synapse Workspace location), or by creating a new Azure IR instance in the same region as your data stores and then execute the data flow on it.
-If you enable Managed Virtual Network for auto-resolve Azure IR, ADF uses the IR in the data factory region.
+If you enable Managed Virtual Network for auto-resolve Azure IR, the IR in the Data Factory or Synapse Workspace region is used.
You can monitor which IR location takes effect during activity execution in pipeline activity monitoring view on UI or activity monitoring payload. ### Self-hosted IR location
-The self-hosted IR is logically registered to the Data Factory and the compute used to support its functionalities is provided by you. Therefore there is no explicit location property for self-hosted IR.
+The self-hosted IR is logically registered to the Data Factory or Synapse Workspace, and you provide the compute used to support its functionality. Therefore, there is no explicit location property for a self-hosted IR.
When used to perform data movement, the self-hosted IR extracts data from the source and writes into the destination.

### Azure-SSIS IR location
+> [!NOTE]
+> Azure-SSIS integration runtimes are not currently supported in Synapse pipelines.
+ Selecting the right location for your Azure-SSIS IR is essential to achieve high performance in your extract-transform-load (ETL) workflows. -- The location of your Azure-SSIS IR does not need to be the same as the location of your data factory, but it should be the same as the location of your own Azure SQL Database or SQL Managed Instance where SSISDB. This way, your Azure-SSIS Integration Runtime can easily access SSISDB without incurring excessive traffics between different locations.
+- The location of your Azure-SSIS IR does not need to be the same as the location of your Data Factory, but it should be the same as the location of your own Azure SQL Database or SQL Managed Instance that hosts SSISDB. This way, your Azure-SSIS Integration Runtime can easily access SSISDB without incurring excessive traffic between different locations.
- If you do not have an existing SQL Database or SQL Managed Instance, but you have on-premises data sources/destinations, you should create a new Azure SQL Database or SQL Managed Instance in the same location of a virtual network connected to your on-premises network. This way, you can create your Azure-SSIS IR using the new Azure SQL Database or SQL Managed Instance and joining that virtual network, all in the same location, effectively minimizing data movements across different locations. - If the location of your existing Azure SQL Database or SQL Managed Instance is not the same as the location of a virtual network connected to your on-premises network, first create your Azure-SSIS IR using an existing Azure SQL Database or SQL Managed Instance and joining another virtual network in the same location, and then configure a virtual network to virtual network connection between different locations.
The following diagram shows location settings of Data Factory and its integratio
:::image type="content" source="media/concepts-integration-runtime/integration-runtime-location.png" alt-text="Integration runtime location":::

## Determining which IR to use
-If one data factory activity associates with more than one type of integration runtime, it will resolve to one of them. The self-hosted integration runtime takes precedence over Azure integration runtime in Azure Data Factory managed virtual network. And the latter takes precedence over public Azure integration runtime.
-For example, one copy activity is used to copy data from source to sink. The public Azure integration runtime is associated with the linked service to source and an Azure integration runtime in Azure Data Factory managed virtual network associates with the linked service for sink, then the result is that both source and sink linked service use Azure integration runtime in Azure Data Factory managed virtual network. But if a self-hosted integration runtime associates the linked service for source, then both source and sink linked service use self-hosted integration runtime.
+If an activity is associated with more than one type of integration runtime, it resolves to one of them. The self-hosted integration runtime takes precedence over an Azure integration runtime in Azure Data Factory or Synapse Workspaces using a managed virtual network. And the latter takes precedence over the global Azure integration runtime.
+
+For example, suppose one copy activity is used to copy data from source to sink. If the global Azure integration runtime is associated with the source linked service and an Azure integration runtime in a managed virtual network is associated with the sink linked service, then both the source and sink linked services use the Azure integration runtime in the managed virtual network. But if a self-hosted integration runtime is associated with the source linked service, then both the source and sink linked services use the self-hosted integration runtime.
### Copy activity

The Copy activity requires source and sink linked services to define the direction of data flow. The following logic is used to determine which integration runtime instance is used to perform the copy:

-- **Copying between two cloud data sources**: when both source and sink linked services are using Azure IR, ADF uses the regional Azure IR if you specified, or auto determine a location of Azure IR if you choose the autoresolve IR (default) as described in [Integration runtime location](#integration-runtime-location) section.
+- **Copying between two cloud data sources**: when both source and sink linked services are using Azure IR, the regional Azure IR is used if it was specified, or the location of the Azure IR is determined automatically if the autoresolve IR (default) was chosen, as described in the [Integration runtime location](#integration-runtime-location) section.
- **Copying between a cloud data source and a data source in private network**: if either the source or sink linked service points to a self-hosted IR, the copy activity is executed on that self-hosted integration runtime.
- **Copying between two data sources in private network**: both the source and sink linked services must point to the same instance of integration runtime, and that integration runtime is used to execute the copy activity.
data-factory Concepts Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-linked-services.md
Title: Linked services in Azure Data Factory
+ Title: Linked services
-description: 'Learn about linked services in Data Factory. Linked services link compute/data stores to data factory.'
+description: Learn about linked services in Azure Data Factory and Azure Synapse Analytics. Linked services link compute and data stores to the service.
+ Last updated 08/21/2020
-# Linked services in Azure Data Factory
+# Linked services in Azure Data Factory and Azure Synapse Analytics
> [!div class="op_single_selector" title1="Select the version of Data Factory service you're using:"] > * [Version 1](v1/data-factory-create-datasets.md)
Last updated 08/21/2020
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article describes what linked services are, how they're defined in JSON format, and how they're used in Azure Data Factory pipelines.
+This article describes what linked services are, how they're defined in JSON format, and how they're used in Azure Data Factory and Azure Synapse Analytics.
-If you're new to Data Factory, see [Introduction to Azure Data Factory](introduction.md) for an overview.
+To learn more read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse](../synapse-analytics/overview-what-is.md).
## Overview
-A data factory can have one or more pipelines. A **pipeline** is a logical grouping of **activities** that together perform a task. The activities in a pipeline define actions to perform on your data. For example, you might use a copy activity to copy data from SQL Server to Azure Blob storage. Then, you might use a Hive activity that runs a Hive script on an Azure HDInsight cluster to process data from Blob storage to produce output data. Finally, you might use a second copy activity to copy the output data to Azure Synapse Analytics, on top of which business intelligence (BI) reporting solutions are built. For more information about pipelines and activities, see [Pipelines and activities](concepts-pipelines-activities.md) in Azure Data Factory.
+Azure Data Factory and Azure Synapse Analytics can have one or more pipelines. A **pipeline** is a logical grouping of **activities** that together perform a task. The activities in a pipeline define actions to perform on your data. For example, you might use a copy activity to copy data from SQL Server to Azure Blob storage. Then, you might use a Hive activity that runs a Hive script on an Azure HDInsight cluster to process data from Blob storage to produce output data. Finally, you might use a second copy activity to copy the output data to Azure Synapse Analytics, on top of which business intelligence (BI) reporting solutions are built. For more information about pipelines and activities, see [Pipelines and activities](concepts-pipelines-activities.md).
Now, a **dataset** is a named view of data that simply points or references the data you want to use in your **activities** as inputs and outputs.
-Before you create a dataset, you must create a **linked service** to link your data store to the data factory. Linked services are much like connection strings, which define the connection information needed for Data Factory to connect to external resources. Think of it this way; the dataset represents the structure of the data within the linked data stores, and the linked service defines the connection to the data source. For example, an Azure Storage linked service links a storage account to the data factory. An Azure Blob dataset represents the blob container and the folder within that Azure Storage account that contains the input blobs to be processed.
+Before you create a dataset, you must create a **linked service** to link your data store to the Data Factory or Synapse workspace. Linked services are much like connection strings, which define the connection information needed for the service to connect to external resources. Think of it this way: the dataset represents the structure of the data within the linked data stores, and the linked service defines the connection to the data source. For example, an Azure Storage linked service links a storage account to the service. An Azure Blob dataset represents the blob container and the folder within that Azure Storage account that contains the input blobs to be processed.
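To make that split concrete, here is a minimal sketch of a blob dataset that points at data through a separately defined linked service. The dataset name, linked service name, container, and folder are placeholders, and the exact dataset type depends on the connector and file format you use:

```json
{
    "name": "InputBlobDataset",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "AzureBlobStorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "<container>",
                "folderPath": "<folder>"
            }
        }
    }
}
```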
-Here is a sample scenario. To copy data from Blob storage to a SQL Database, you create two linked
+Here is a sample scenario. To copy data from Blob storage to a SQL Database, you create two linked
-The following diagram shows the relationships among pipeline, activity, dataset, and linked service in Data Factory:
+The following diagram shows the relationships among pipeline, activity, dataset, and linked service in the service:
![Relationship between pipeline, activity, dataset, linked services](media/concepts-datasets-linked-services/relationship-between-data-factory-entities.png)
## Linked service JSON
-A linked service in Data Factory is defined in JSON format as follows:
+A linked service is defined in JSON format as follows:
```json {
The following table describes properties in the above JSON:
Property | Description | Required |
-- | -- | -- |
-name | Name of the linked service. See [Azure Data Factory - Naming rules](naming-rules.md). | Yes |
+name | Name of the linked service. See [Naming rules](naming-rules.md). | Yes |
type | Type of the linked service. For example: AzureBlobStorage (data store) or AzureBatch (compute). See the description for typeProperties. | Yes |
typeProperties | The type properties are different for each data store or compute. <br/><br/> For the supported data store types and their type properties, see the [connector overview](copy-activity-overview.md#supported-data-stores-and-formats) article. Navigate to the data store connector article to learn about type properties specific to a data store. <br/><br/> For the supported compute types and their type properties, see [Compute linked services](compute-linked-services.md). | Yes |
connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use Azure Integration Runtime or Self-hosted Integration Runtime (if your data store is located in a private network). If not specified, it uses the default Azure Integration Runtime. | No
## Linked service example
-The following linked service is an Azure Blob storage linked service. Notice that the type is set to Azure Blob storage. The type properties for the Azure Blob storage linked service include a connection string. The Data Factory service uses this connection string to connect to the data store at runtime.
+The following linked service is an Azure Blob storage linked service. Notice that the type is set to Azure Blob storage. The type properties for the Azure Blob storage linked service include a connection string. The service uses this connection string to connect to the data store at runtime.
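As a rough sketch, a connection-string-based Azure Blob storage linked service of this kind might look like the following; the linked service name, account name, key, and integration runtime reference are placeholders:

```json
{
    "name": "AzureBlobStorageLinkedService",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>;EndpointSuffix=core.windows.net"
        },
        "connectVia": {
            "referenceName": "<integration runtime name>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```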
```json {
You can create linked services by using one of these tools or SDKs: [.NET API](q
## Data store linked services
-You can find the list of data stores supported by Data Factory from [connector overview](copy-activity-overview.md#supported-data-stores-and-formats) article. Click a data store to learn the supported connection properties.
+You can find the list of supported data stores in the [connector overview](copy-activity-overview.md#supported-data-stores-and-formats) article. Click a data store to learn the supported connection properties.
## Compute linked services
-Reference [compute environments supported](compute-linked-services.md) for details about different compute environments you can connect to from your data factory as well as the different configurations.
+Reference [compute environments supported](compute-linked-services.md) for details about different compute environments you can connect to from your service as well as the different configurations.
## Next steps
See the following tutorial for step-by-step instructions for creating pipelines and datasets by using one of these tools or SDKs.
-- [Quickstart: create a data factory using .NET](quickstart-create-data-factory-dot-net.md)
-- [Quickstart: create a data factory using PowerShell](quickstart-create-data-factory-powershell.md)
-- [Quickstart: create a data factory using REST API](quickstart-create-data-factory-rest-api.md)
-- [Quickstart: create a data factory using Azure portal](quickstart-create-data-factory-portal.md)
+- [Quickstart: create a Data Factory using .NET](quickstart-create-data-factory-dot-net.md)
+- [Quickstart: create a Data Factory using PowerShell](quickstart-create-data-factory-powershell.md)
+- [Quickstart: create a Data Factory using REST API](quickstart-create-data-factory-rest-api.md)
+- [Quickstart: create a Data Factory using Azure portal](quickstart-create-data-factory-portal.md)
data-factory Concepts Pipeline Execution Triggers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-pipeline-execution-triggers.md
Title: Pipeline execution and triggers in Azure Data Factory
+ Title: Pipeline execution and triggers
-description: This article provides information about how to execute a pipeline in Azure Data Factory, either on-demand or by creating a trigger.
+description: This article provides information about how to execute a pipeline in Azure Data Factory or Azure Synapse Analytics, either on-demand or by creating a trigger.
+ Last updated 07/05/2018
-# Pipeline execution and triggers in Azure Data Factory
+# Pipeline execution and triggers in Azure Data Factory or Azure Synapse Analytics
> [!div class="op_single_selector" title1="Select the version of the Data Factory service that you're using:"] > * [Version 1](v1/data-factory-scheduling-and-execution.md) > * [Current version](concepts-pipeline-execution-triggers.md) [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-A _pipeline run_ in Azure Data Factory defines an instance of a pipeline execution. For example, say you have a pipeline that executes at 8:00 AM, 9:00 AM, and 10:00 AM. In this case, there are three separate runs of the pipeline or pipeline runs. Each pipeline run has a unique pipeline run ID. A run ID is a GUID that uniquely defines that particular pipeline run.
+A _pipeline run_ in Azure Data Factory and Azure Synapse defines an instance of a pipeline execution. For example, say you have a pipeline that executes at 8:00 AM, 9:00 AM, and 10:00 AM. In this case, there are three separate runs of the pipeline or pipeline runs. Each pipeline run has a unique pipeline run ID. A run ID is a GUID that uniquely defines that particular pipeline run.
Pipeline runs are typically instantiated by passing arguments to parameters that you define in the pipeline. You can execute a pipeline either manually or by using a _trigger_. This article provides details about both ways of executing a pipeline.
client.Pipelines.CreateRunWithHttpMessagesAsync(resourceGroup, dataFactoryName,
For a complete sample, see [Quickstart: Create a data factory by using the .NET SDK](quickstart-create-data-factory-dot-net.md). > [!NOTE]
-> You can use the .NET SDK to invoke Data Factory pipelines from Azure Functions, from your web services, and so on.
+> You can use the .NET SDK to invoke pipelines from Azure Functions, from your web services, and so on.
## Trigger execution
-Triggers are another way that you can execute a pipeline run. Triggers represent a unit of processing that determines when a pipeline execution needs to be kicked off. Currently, Data Factory supports three types of triggers:
+Triggers are another way that you can execute a pipeline run. Triggers represent a unit of processing that determines when a pipeline execution needs to be kicked off. Currently, the service supports three types of triggers:
- Schedule trigger: A trigger that invokes a pipeline on a wall-clock schedule.
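For example, a schedule trigger that starts a pipeline every hour might be defined roughly like this sketch; the trigger name, pipeline name, and start time are placeholders:

```json
{
    "name": "HourlyTrigger",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Hour",
                "interval": 1,
                "startTime": "2021-08-01T00:00:00Z",
                "timeZone": "UTC"
            }
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "type": "PipelineReference",
                    "referenceName": "<pipeline name>"
                }
            }
        ]
    }
}
```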
data-factory Concepts Pipelines Activities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-pipelines-activities.md
Title: Pipelines and activities in Azure Data Factory
+ Title: Pipelines and activities
-description: 'Learn about pipelines and activities in Azure Data Factory.'
+description: Learn about pipelines and activities in Azure Data Factory and Azure Synapse Analytics.
+ Last updated 06/19/2021
-# Pipelines and activities in Azure Data Factory
+# Pipelines and activities in Azure Data Factory and Azure Synapse Analytics
> [!div class="op_single_selector" title1="Select the version of Data Factory service you're using:"] > * [Version 1](v1/data-factory-create-pipelines.md) > * [Current version](concepts-pipelines-activities.md) [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article helps you understand pipelines and activities in Azure Data Factory and use them to construct end-to-end data-driven workflows for your data movement and data processing scenarios.
+This article helps you understand pipelines and activities in Azure Data Factory and Azure Synapse Analytics and use them to construct end-to-end data-driven workflows for your data movement and data processing scenarios.
## Overview
-A data factory can have one or more pipelines. A pipeline is a logical grouping of activities that together perform a task. For example, a pipeline could contain a set of activities that ingest and clean log data, and then kick off a mapping data flow to analyze the log data. The pipeline allows you to manage the activities as a set instead of each one individually. You deploy and schedule the pipeline instead of the activities independently.
+A Data Factory or Synapse Workspace can have one or more pipelines. A pipeline is a logical grouping of activities that together perform a task. For example, a pipeline could contain a set of activities that ingest and clean log data, and then kick off a mapping data flow to analyze the log data. The pipeline allows you to manage the activities as a set instead of each one individually. You deploy and schedule the pipeline instead of the activities independently.
The activities in a pipeline define actions to perform on your data. For example, you may use a copy activity to copy data from SQL Server to an Azure Blob Storage. Then, use a data flow activity or a Databricks Notebook activity to process and transform data from the blob storage to an Azure Synapse Analytics pool on top of which business intelligence reporting solutions are built.
-Data Factory has three groupings of activities: [data movement activities](copy-activity-overview.md), [data transformation activities](transform-data.md), and [control activities](#control-flow-activities). An activity can take zero or more input [datasets](concepts-datasets-linked-services.md) and produce one or more output [datasets](concepts-datasets-linked-services.md). The following diagram shows the relationship between pipeline, activity, and dataset in Data Factory:
+Azure Data Factory and Azure Synapse Analytics have three groupings of activities: [data movement activities](copy-activity-overview.md), [data transformation activities](transform-data.md), and [control activities](#control-flow-activities). An activity can take zero or more input [datasets](concepts-datasets-linked-services.md) and produce one or more output [datasets](concepts-datasets-linked-services.md). The following diagram shows the relationship between pipeline, activity, and dataset:
![Relationship between dataset, activity, and pipeline](media/concepts-pipelines-activities/relationship-between-dataset-pipeline-activity.png)
Copy Activity in Data Factory copies data from a source data store to a sink dat
For more information, see [Copy Activity - Overview](copy-activity-overview.md) article. ## Data transformation activities
-Azure Data Factory supports the following transformation activities that can be added to pipelines either individually or chained with another activity.
+Azure Data Factory and Azure Synapse Analytics support the following transformation activities that can be added either individually or chained with another activity.
Data transformation activity | Compute environment
- | -
The following control flow activities are supported:
Control activity | Description - | -- [Append Variable](control-flow-append-variable-activity.md) | Add a value to an existing array variable.
-[Execute Pipeline](control-flow-execute-pipeline-activity.md) | Execute Pipeline activity allows a Data Factory pipeline to invoke another pipeline.
+[Execute Pipeline](control-flow-execute-pipeline-activity.md) | Execute Pipeline activity allows a Data Factory or Synapse pipeline to invoke another pipeline.
[Filter](control-flow-filter-activity.md) | Apply a filter expression to an input array [For Each](control-flow-for-each-activity.md) | ForEach Activity defines a repeating control flow in your pipeline. This activity is used to iterate over a collection and executes specified activities in a loop. The loop implementation of this activity is similar to the Foreach looping structure in programming languages.
-[Get Metadata](control-flow-get-metadata-activity.md) | GetMetadata activity can be used to retrieve metadata of any data in Azure Data Factory.
+[Get Metadata](control-flow-get-metadata-activity.md) | GetMetadata activity can be used to retrieve metadata of any data in a Data Factory or Synapse pipeline.
[If Condition Activity](control-flow-if-condition-activity.md) | The If Condition can be used to branch based on condition that evaluates to true or false. The If Condition activity provides the same functionality that an if statement provides in programming languages. It evaluates a set of activities when the condition evaluates to `true` and another set of activities when the condition evaluates to `false.` [Lookup Activity](control-flow-lookup-activity.md) | Lookup Activity can be used to read or look up a record/ table name/ value from any external source. This output can further be referenced by succeeding activities. [Set Variable](control-flow-set-variable-activity.md) | Set the value of an existing variable.
-[Until Activity](control-flow-until-activity.md) | Implements Do-Until loop that is similar to Do-Until looping structure in programming languages. It executes a set of activities in a loop until the condition associated with the activity evaluates to true. You can specify a timeout value for the until activity in Data Factory.
+[Until Activity](control-flow-until-activity.md) | Implements Do-Until loop that is similar to Do-Until looping structure in programming languages. It executes a set of activities in a loop until the condition associated with the activity evaluates to true. You can specify a timeout value for the until activity.
[Validation Activity](control-flow-validation-activity.md) | Ensure a pipeline only continues execution if a reference dataset exists, meets a specified criteria, or a timeout has been reached. [Wait Activity](control-flow-wait-activity.md) | When you use a Wait activity in a pipeline, the pipeline waits for the specified time before continuing with execution of subsequent activities.
-[Web Activity](control-flow-web-activity.md) | Web Activity can be used to call a custom REST endpoint from a Data Factory pipeline. You can pass datasets and linked services to be consumed and accessed by the activity.
+[Web Activity](control-flow-web-activity.md) | Web Activity can be used to call a custom REST endpoint from a pipeline. You can pass datasets and linked services to be consumed and accessed by the activity.
[Webhook Activity](control-flow-webhook-activity.md) | Using the webhook activity, call an endpoint, and pass a callback URL. The pipeline run waits for the callback to be invoked before proceeding to the next activity. ## Pipeline JSON
Note the following points:
- Input for the activity is set to **InputDataset** and output for the activity is set to **OutputDataset**. See [Datasets](concepts-datasets-linked-services.md) article for defining datasets in JSON. - In the **typeProperties** section, **BlobSource** is specified as the source type and **SqlSink** is specified as the sink type. In the [data movement activities](#data-movement-activities) section, click the data store that you want to use as a source or a sink to learn more about moving data to/from that data store.
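A sketch of the copy activity those notes describe, with **BlobSource** as the source type, **SqlSink** as the sink type, and the **InputDataset**/**OutputDataset** references; the activity name is a placeholder:

```json
{
    "name": "CopyFromBlobToSql",
    "type": "Copy",
    "inputs": [
        {
            "referenceName": "InputDataset",
            "type": "DatasetReference"
        }
    ],
    "outputs": [
        {
            "referenceName": "OutputDataset",
            "type": "DatasetReference"
        }
    ],
    "typeProperties": {
        "source": {
            "type": "BlobSource"
        },
        "sink": {
            "type": "SqlSink"
        }
    }
}
```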
-For a complete walkthrough of creating this pipeline, see [Quickstart: create a data factory](quickstart-create-data-factory-powershell.md).
+For a complete walkthrough of creating this pipeline, see [Quickstart: create a Data Factory](quickstart-create-data-factory-powershell.md).
## Sample transformation pipeline In the following sample pipeline, there is one activity of type **HDInsightHive** in the **activities** section. In this sample, the [HDInsight Hive activity](transform-data-using-hadoop-hive.md) transforms data from an Azure Blob storage by running a Hive script file on an Azure HDInsight Hadoop cluster.
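A rough sketch of such an HDInsight Hive activity; the activity name, linked service references, and script path are placeholders:

```json
{
    "name": "RunHiveScript",
    "type": "HDInsightHive",
    "linkedServiceName": {
        "referenceName": "<HDInsight linked service>",
        "type": "LinkedServiceReference"
    },
    "typeProperties": {
        "scriptPath": "<container>/<folder>/<script>.hql",
        "scriptLinkedService": {
            "referenceName": "<Azure Storage linked service>",
            "type": "LinkedServiceReference"
        }
    }
}
```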
data-factory Concepts Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-roles-permissions.md
description: Describes the roles and permissions required to create Data Factori
Last updated 11/5/2018 +
data-factory Configure Azure Ssis Integration Runtime Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/configure-azure-ssis-integration-runtime-performance.md
description: Learn how to configure the properties of the Azure-SSIS Integration
Last updated 01/10/2018 +
data-factory Configure Bcdr Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/configure-bcdr-azure-ssis-integration-runtime.md
Title: Configure Azure-SSIS integration runtime for business continuity and disa
description: This article describes how to configure Azure-SSIS integration runtime in Azure Data Factory with Azure SQL Database/Managed Instance failover group for business continuity and disaster recovery (BCDR). + ms.devlang: powershell
data-factory Connect Data Factory To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connect-data-factory-to-azure-purview.md
description: Learn about how to connect a Data Factory to Azure Purview
+ Last updated 12/3/2020
data-factory Connector Amazon Marketplace Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-amazon-marketplace-web-service.md
Title: Copy data from AWS Marketplace
description: Learn how to copy data from Amazon Marketplace Web Service to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. +
data-factory Connector Amazon Redshift https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-amazon-redshift.md
description: Learn about how to copy data from Amazon Redshift to supported sink
+ Last updated 12/09/2020
data-factory Connector Amazon Simple Storage Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-amazon-simple-storage-service.md
description: Learn about how to copy data from Amazon Simple Storage Service (S3
+ Last updated 03/17/2021
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-blob-storage.md
Title: Copy and transform data in Azure Blob storage
-description: Learn how to copy data to and from Blob storage, and transform data in Blob storage by using Data Factory.
+description: Learn how to copy data to and from Blob storage, and transform data in Blob storage by using Data Factory or Azure Synapse Analytics.
+ Last updated 07/19/2021
-# Copy and transform data in Azure Blob storage by using Azure Data Factory
+# Copy and transform data in Azure Blob storage by using Azure Data Factory or Azure Synapse Analytics
> [!div class="op_single_selector" title1="Select the version of Data Factory service you're using:"] > - [Version 1](v1/data-factory-azure-blob-connector.md)
Last updated 07/19/2021
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to use the Copy activity in Azure Data Factory to copy data from and to Azure Blob storage. It also describes how to use the Data Flow activity to transform data in Azure Blob storage. To learn about Azure Data Factory, read the [introductory article](introduction.md).
+This article outlines how to use the Copy activity in Azure Data Factory and Azure Synapse pipelines to copy data from and to Azure Blob storage. It also describes how to use the Data Flow activity to transform data in Azure Blob storage. To learn more, read the [Azure Data Factory](introduction.md) and the [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md) introduction articles.
>[!TIP]
->To learn about a migration scenario for a data lake or a data warehouse, see [Use Azure Data Factory to migrate data from your data lake or data warehouse to Azure](data-migration-guidance-overview.md).
+>To learn about a migration scenario for a data lake or a data warehouse, see the article [Migrate data from your data lake or data warehouse to Azure](data-migration-guidance-overview.md).
## Supported capabilities
For the Copy activity, this Blob storage connector supports:
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
-The following sections provide details about properties that are used to define Data Factory entities specific to Blob storage.
+The following sections provide details about properties that are used to define Data Factory and Synapse pipeline entities specific to Blob storage.
## Linked service properties
This Blob storage connector supports the following authentication types. See the
>[!NOTE] >- If you want to use the public Azure integration runtime to connect to your Blob storage by leveraging the **Allow trusted Microsoft services to access this storage account** option enabled on Azure Storage firewall, you must use [managed identity authentication](#managed-identity).
->- When you use PolyBase or COPY statement to load data into Azure Synapse Analytics, if your source or staging Blob storage is configured with an Azure Virtual Network endpoint, you must use managed identity authentication as required by Synapse. See the [Managed identity authentication](#managed-identity) section for more configuration prerequisites.
+>- When you use PolyBase or COPY statement to load data into Azure Synapse Analytics, if your source or staging Blob storage is configured with an Azure Virtual Network endpoint, you must use managed identity authentication as required by Azure Synapse. See the [Managed identity authentication](#managed-identity) section for more configuration prerequisites.
>[!NOTE] >Azure HDInsight and Azure Machine Learning activities only support authentication that uses Azure Blob storage account keys. ### Account key authentication
-Data Factory supports the following properties for storage account key authentication:
+The following properties are supported for storage account key authentication in Azure Data Factory or Synapse pipelines:
| Property | Description | Required | |: |: |: |
You don't have to share your account access keys. The shared access signature is
For more information about shared access signatures, see [Shared access signatures: Understand the shared access signature model](../storage/common/storage-sas-overview.md). > [!NOTE]
->- Data Factory now supports both *service shared access signatures* and *account shared access signatures*. For more information about shared access signatures, see [Grant limited access to Azure Storage resources using shared access signatures](../storage/common/storage-sas-overview.md).
+>- The service now supports both *service shared access signatures* and *account shared access signatures*. For more information about shared access signatures, see [Grant limited access to Azure Storage resources using shared access signatures](../storage/common/storage-sas-overview.md).
>- In later dataset configurations, the folder path is the absolute path starting from the container level. You need to configure one aligned with the path in your SAS URI.
-Data Factory supports the following properties for using shared access signature authentication:
+The following properties are supported for using shared access signature authentication:
| Property | Description | Required |
|: |: |: |
| type | The `type` property must be set to `AzureBlobStorage` (suggested) or `AzureStorage` (see the following note). | Yes |
-| sasUri | Specify the shared access signature URI to the Storage resources such as blob or container. <br/>Mark this field as `SecureString` to store it securely in Data Factory. You can also put the SAS token in Azure Key Vault to use auto-rotation and remove the token portion. For more information, see the following samples and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| sasUri | Specify the shared access signature URI to the Storage resources such as blob or container. <br/>Mark this field as `SecureString` to store it securely. You can also put the SAS token in Azure Key Vault to use auto-rotation and remove the token portion. For more information, see the following samples and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No | >[!NOTE]
Data Factory supports the following properties for using shared access signature
When you create a shared access signature URI, consider the following points: -- Set appropriate read/write permissions on objects based on how the linked service (read, write, read/write) is used in your data factory.
+- Set appropriate read/write permissions on objects based on how the linked service (read, write, read/write) is used.
- Set **Expiry time** appropriately. Make sure that the access to Storage objects doesn't expire within the active period of the pipeline.-- The URI should be created at the right container or blob based on the need. A shared access signature URI to a blob allows Data Factory to access that particular blob. A shared access signature URI to a Blob storage container allows Data Factory to iterate through blobs in that container. To provide access to more or fewer objects later, or to update the shared access signature URI, remember to update the linked service with the new URI.
+- The URI should be created at the right container or blob based on the need. A shared access signature URI to a blob allows the data factory or Synapse pipeline to access that particular blob. A shared access signature URI to a Blob storage container allows the data factory or Synapse pipeline to iterate through blobs in that container. To provide access to more or fewer objects later, or to update the shared access signature URI, remember to update the linked service with the new URI.
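Putting those properties together, a SAS-based Azure Blob storage linked service might look roughly like this sketch; the linked service name and the SAS URI are placeholders:

```json
{
    "name": "AzureBlobStorageLinkedService",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "sasUri": {
                "type": "SecureString",
                "value": "https://<accountName>.blob.core.windows.net/<container>?<sasToken>"
            }
        }
    }
}
```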
### Service principal authentication
These properties are supported for an Azure Blob storage linked service:
| serviceEndpoint | Specify the Azure Blob storage service endpoint with the pattern of `https://<accountName>.blob.core.windows.net/`. | Yes | | accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/><br/>When using Azure Blob linked service in data flow, managed identity or service principal authentication is not supported when account kind as empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No | | servicePrincipalId | Specify the application's client ID. | Yes |
-| servicePrincipalKey | Specify the application's key. Mark this field as **SecureString** to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| servicePrincipalKey | Specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering over the upper-right corner of the Azure portal. | Yes |
-| azureCloudType | For service principal authentication, specify the type of Azure cloud environment, to which your Azure Active Directory application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory's cloud environment is used. | No |
+| azureCloudType | For service principal authentication, specify the type of Azure cloud environment, to which your Azure Active Directory application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory or Synapse pipeline's cloud environment is used. | No |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No | >[!NOTE] > >- If your blob account enables [soft delete](../storage/blobs/soft-delete-blob-overview.md), service principal authentication is not supported in Data Flow.
->- If you access the blob storage through private endpoint using Data Flow, note when service principal authentication is used Data Flow connects to the ADLS Gen2 endpoint instead of Blob endpoint. Make sure you create the corresponding private endpoint in ADF to enable access.
+>- If you access the blob storage through private endpoint using Data Flow, note when service principal authentication is used Data Flow connects to the ADLS Gen2 endpoint instead of Blob endpoint. Make sure you create the corresponding private endpoint in your data factory or Synapse workspace to enable access.
>[!NOTE] >Service principal authentication is supported only by the "AzureBlobStorage" type linked service, not the previous "AzureStorage" type linked service.
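Based on the properties above, a service-principal-based Azure Blob storage linked service might look roughly like this sketch; the endpoint, IDs, key, and tenant values are placeholders, and the account kind shown is an assumption:

```json
{
    "name": "AzureBlobStorageLinkedService",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "serviceEndpoint": "https://<accountName>.blob.core.windows.net/",
            "accountKind": "StorageV2",
            "servicePrincipalId": "<application (client) ID>",
            "servicePrincipalKey": {
                "type": "SecureString",
                "value": "<application key>"
            },
            "tenant": "<tenant ID or domain>"
        }
    }
}
```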
These properties are supported for an Azure Blob storage linked service:
### <a name="managed-identity"></a> System-assigned managed identity authentication
-A data factory can be associated with a [system-assigned managed identity for Azure resources](data-factory-service-identity.md#system-assigned-managed-identity), which represents this specific data factory. You can directly use this system-assigned managed identity for Blob storage authentication, which is similar to using your own service principal. It allows this designated factory to access and copy data from or to Blob storage. To learn more about managed identities for Azure resources, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
+A data factory or Synapse workspace can be associated with a [system-assigned managed identity for Azure resources](data-factory-service-identity.md#system-assigned-managed-identity), which represents that resource for authentication to other Azure services. You can directly use this system-assigned managed identity for Blob storage authentication, which is similar to using your own service principal. It allows this designated resource to access and copy data from or to Blob storage. To learn more about managed identities for Azure resources, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
For general information about Azure Storage authentication, see [Authenticate access to Azure Storage using Azure Active Directory](../storage/common/storage-auth-aad.md). To use managed identities for Azure resource authentication, follow these steps:
-1. [Retrieve Data Factory system-assigned managed identity information](data-factory-service-identity.md#retrieve-managed-identity) by copying the value of the system-assigned managed identity object ID generated along with your factory.
+1. [Retrieve system-assigned managed identity information](data-factory-service-identity.md#retrieve-managed-identity) by copying the value of the system-assigned managed identity object ID generated along with your factory or Synapse workspace.
2. Grant the managed identity permission in Azure Blob storage. For more information on the roles, see [Use the Azure portal to assign an Azure role for access to blob and queue data](../storage/blobs/assign-azure-role-data-access.md).
These properties are supported for an Azure Blob storage linked service:
``` >[!IMPORTANT]
->If you use PolyBase or COPY statement to load data from Blob storage (as a source or as staging) into Azure Synapse Analytics, when you use managed identity authentication for Blob storage, make sure you also follow steps 1 to 3 in [this guidance](../azure-sql/database/vnet-service-endpoint-rule-overview.md#impact-of-using-virtual-network-service-endpoints-with-azure-storage). Those steps will register your server with Azure AD and assign the Storage Blob Data Contributor role to your server. Data Factory handles the rest. If you configure Blob storage with an Azure Virtual Network endpoint, you also need to have **Allow trusted Microsoft services to access this storage account** turned on under Azure Storage account **Firewalls and Virtual networks** settings menu as required by Synapse.
+>If you use PolyBase or COPY statement to load data from Blob storage (as a source or as staging) into Azure Synapse Analytics, when you use managed identity authentication for Blob storage, make sure you also follow steps 1 to 3 in [this guidance](../azure-sql/database/vnet-service-endpoint-rule-overview.md#impact-of-using-virtual-network-service-endpoints-with-azure-storage). Those steps will register your server with Azure AD and assign the Storage Blob Data Contributor role to your server. Data Factory handles the rest. If you configure Blob storage with an Azure Virtual Network endpoint, you also need to have **Allow trusted Microsoft services to access this storage account** turned on under Azure Storage account **Firewalls and Virtual networks** settings menu as required by Azure Synapse.
> [!NOTE] >
The following properties are supported for Azure Blob storage under `storeSettin
| maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No | > [!NOTE]
-> For Parquet/delimited text format, the **BlobSource** type for the Copy activity source mentioned in the next section is still supported as is for backward compatibility. We suggest that you use the new model until the Data Factory authoring UI has switched to generating these new types.
+> For Parquet/delimited text format, the **BlobSource** type for the Copy activity source mentioned in the next section is still supported as is for backward compatibility. We suggest that you use the new model until the authoring UI has switched to generating these new types.
**Example:**
The following properties are supported for Azure Blob storage under `storeSettin
``` > [!NOTE]
-> The `$logs` container, which is automatically created when Storage Analytics is enabled for a storage account, isn't shown when a container listing operation is performed via the Data Factory UI. The file path must be provided directly for Data Factory to consume files from the `$logs` container.
+> The `$logs` container, which is automatically created when Storage Analytics is enabled for a storage account, isn't shown when a container listing operation is performed via the UI. The file path must be provided directly for your data factory or Synapse pipeline to consume files from the `$logs` container.
### Blob storage as a sink type
The following properties are supported for Azure Blob storage under `storeSettin
| | | -- | | type | The `type` property under `storeSettings` must be set to `AzureBlobStorageWriteSettings`. | Yes | | copyBehavior | Defines the copy behavior when the source is files from a file-based data store.<br/><br/>Allowed values are:<br/><b>- PreserveHierarchy (default)</b>: Preserves the file hierarchy in the target folder. The relative path of the source file to the source folder is identical to the relative path of the target file to the target folder.<br/><b>- FlattenHierarchy</b>: All files from the source folder are in the first level of the target folder. The target files have autogenerated names. <br/><b>- MergeFiles</b>: Merges all files from the source folder to one file. If the file or blob name is specified, the merged file name is the specified name. Otherwise, it's an autogenerated file name. | No |
-| blockSizeInMB | Specify the block size, in megabytes, used to write data to block blobs. Learn more [about Block Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs). <br/>Allowed value is *between 4 MB and 100 MB*. <br/>By default, Data Factory automatically determines the block size based on your source store type and data. For nonbinary copy into Blob storage, the default block size is 100 MB so it can fit in (at most) 4.95 TB of data. It might be not optimal when your data is not large, especially when you use the self-hosted integration runtime with poor network connections that result in operation timeout or performance issues. You can explicitly specify a block size, while ensuring that `blockSizeInMB*50000` is big enough to store the data. Otherwise, the Copy activity run will fail. | No |
+| blockSizeInMB | Specify the block size, in megabytes, used to write data to block blobs. Learn more [about Block Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs). <br/>Allowed value is *between 4 MB and 100 MB*. <br/>By default, the service automatically determines the block size based on your source store type and data. For nonbinary copy into Blob storage, the default block size is 100 MB so it can fit in (at most) 4.95 TB of data. It might not be optimal when your data is not large, especially when you use the self-hosted integration runtime with poor network connections that result in operation timeout or performance issues. You can explicitly specify a block size, while ensuring that `blockSizeInMB*50000` is big enough to store the data. Otherwise, the Copy activity run will fail. | No |
| maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No | | metadata |Set custom metadata when copy to sink. Each object under the `metadata` array represents an extra column. The `name` defines the metadata key name, and the `value` indicates the data value of that key. If [preserve attributes feature](./copy-activity-preserve-metadata.md#preserve-metadata) is used, the specified metadata will union/overwrite with the source file metadata.<br/><br/>Allowed data values are:<br/>- `$$LASTMODIFIED`: a reserved variable indicates to store the source files' last modified time. Apply to file-based source with binary format only.<br/><b>- Expression<b><br/>- <b>Static value<b>| No |
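As an illustration, the sink portion of a copy activity that writes delimited text to Blob storage with these settings might look roughly like the following sketch; the format-level sink type and the block size value are assumptions:

```json
"sink": {
    "type": "DelimitedTextSink",
    "storeSettings": {
        "type": "AzureBlobStorageWriteSettings",
        "copyBehavior": "PreserveHierarchy",
        "blockSizeInMB": 8
    }
}
```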
This section describes the resulting behavior of using a file list path in the C
Assume that you have the following source folder structure and want to copy the files in bold:
-| Sample source structure | Content in FileListToCopy.txt | Data Factory configuration |
+| Sample source structure | Content in FileListToCopy.txt | Configuration |
| | | | | container<br/>&nbsp;&nbsp;&nbsp;&nbsp;FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File2.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File3.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File5.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;Metadata<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FileListToCopy.txt | File1.csv<br>Subfolder1/File3.csv<br>Subfolder1/File5.csv | **In dataset:**<br>- Container: `container`<br>- Folder path: `FolderA`<br><br>**In Copy activity source:**<br>- File list path: `container/Metadata/FileListToCopy.txt` <br><br>The file list path points to a text file in the same data store that includes a list of files you want to copy, one file per line, with the relative path to the path configured in the dataset. |
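In copy activity JSON, that configuration corresponds roughly to a source like the following sketch; the format-level source type is an assumption, and the file list path matches the table above:

```json
"source": {
    "type": "DelimitedTextSource",
    "storeSettings": {
        "type": "AzureBlobStorageReadSettings",
        "fileListPath": "container/Metadata/FileListToCopy.txt"
    }
}
```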
In source transformation, you can read from a container, folder, or individual f
![Source options](media/data-flow/sourceOptions1.png "Source options")
-**Wildcard paths:** Using a wildcard pattern will instruct Data Factory to loop through each matching folder and file in a single source transformation. This is an effective way to process multiple files within a single flow. Add multiple wildcard matching patterns with the plus sign that appears when you hover over your existing wildcard pattern.
+**Wildcard paths:** Using a wildcard pattern will instruct the service to loop through each matching folder and file in a single source transformation. This is an effective way to process multiple files within a single flow. Add multiple wildcard matching patterns with the plus sign that appears when you hover over your existing wildcard pattern.
From your source container, choose a series of files that match a pattern. Only a container can be specified in the dataset. Your wildcard path must therefore also include your folder path from the root folder.
First, set a wildcard to include all paths that are the partitioned folders plus
![Partition source file settings](media/data-flow/partfile2.png "Partition file setting")
-Use the **Partition root path** setting to define what the top level of the folder structure is. When you view the contents of your data via a data preview, you'll see that Data Factory will add the resolved partitions found in each of your folder levels.
+Use the **Partition root path** setting to define what the top level of the folder structure is. When you view the contents of your data via a data preview, you'll see that the service will add the resolved partitions found in each of your folder levels.
![Partition root path](media/data-flow/partfile1.png "Partition root path preview")
To learn details about the properties, check [Delete activity](delete-activity.m
## Legacy models >[!NOTE]
->The following models are still supported as is for backward compatibility. We suggest that you use the new model mentioned earlier. The Data Factory authoring UI has switched to generating the new model.
+>The following models are still supported as is for backward compatibility. We suggest that you use the new model mentioned earlier. The authoring UI has switched to generating the new model.
### Legacy dataset model
To learn details about the properties, check [Delete activity](delete-activity.m
## Next steps
-For a list of data stores that the Copy activity in Data Factory supports as sources and sinks, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
+For a list of data stores that the Copy activity supports as sources and sinks, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Cosmos Db Mongodb Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-cosmos-db-mongodb-api.md
description: Learn how to copy data from supported source data stores to or from
+ Last updated 11/20/2019
data-factory Connector Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-cosmos-db.md
description: Learn how to copy data to and from Azure Cosmos DB (SQL API), and t
+ Last updated 05/18/2021
data-factory Connector Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-explorer.md
Title: Copy data to or from Azure Data Explorer description: Learn how to copy data to or from Azure Data Explorer by using a copy activity in an Azure Data Factory pipeline.--++ + Last updated 07/19/2020
data-factory Connector Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-lake-storage.md
Title: Copy and transform data in Azure Data Lake Storage Gen2
-description: Learn how to copy data to and from Azure Data Lake Storage Gen2, and transform data in Azure Data Lake Storage Gen2 by using Azure Data Factory.
+description: Learn how to copy data to and from Azure Data Lake Storage Gen2, and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics pipelines.
+ Last updated 07/19/2021
-# Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory
+# Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] Azure Data Lake Storage Gen2 (ADLS Gen2) is a set of capabilities dedicated to big data analytics built into [Azure Blob storage](../storage/blobs/storage-blobs-introduction.md). You can use it to interface with your data by using both file system and object storage paradigms.
-This article outlines how to use Copy Activity in Azure Data Factory to copy data from and to Azure Data Lake Storage Gen2, and use Data Flow to transform data in Azure Data Lake Storage Gen2. To learn about Azure Data Factory, read the [introductory article](introduction.md).
+This article outlines how to use Copy Activity to copy data from and to Azure Data Lake Storage Gen2, and use Data Flow to transform data in Azure Data Lake Storage Gen2. To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
>[!TIP]
->For data lake or data warehouse migration scenario, learn more from [Use Azure Data Factory to migrate data from your data lake or data warehouse to Azure](data-migration-guidance-overview.md).
+>For data lake or data warehouse migration scenario, learn more in [Migrate data from your data lake or data warehouse to Azure](data-migration-guidance-overview.md).
## Supported capabilities
For Copy activity, with this connector you can:
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
-The following sections provide information about properties that are used to define Data Factory entities specific to Data Lake Storage Gen2.
+The following sections provide information about properties that are used to define Data Factory and Synapse pipeline entities specific to Data Lake Storage Gen2.
## Linked service properties
The Azure Data Lake Storage Gen2 connector supports the following authentication
- >[!NOTE] >- If you want to use the public Azure integration runtime to connect to the Data Lake Storage Gen2 by leveraging the **Allow trusted Microsoft services to access this storage account** option enabled on Azure Storage firewall, you must use [managed identity authentication](#managed-identity).
->- When you use PolyBase or COPY statement to load data into Azure Synapse Analytics, if your source or staging Data Lake Storage Gen2 is configured with an Azure Virtual Network endpoint, you must use managed identity authentication as required by Synapse. See the [managed identity authentication](#managed-identity) section with more configuration prerequisites.
+>- When you use PolyBase or COPY statement to load data into Azure Synapse Analytics, if your source or staging Data Lake Storage Gen2 is configured with an Azure Virtual Network endpoint, you must use managed identity authentication as required by Azure Synapse. See the [managed identity authentication](#managed-identity) section with more configuration prerequisites.
### Account key authentication
To use storage account key authentication, the following properties are supporte
|: |: |: | | type | The type property must be set to **AzureBlobFS**. |Yes | | url | Endpoint for Data Lake Storage Gen2 with the pattern of `https://<accountname>.dfs.core.windows.net`. | Yes |
-| accountKey | Account key for Data Lake Storage Gen2. Mark this field as a SecureString to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
+| accountKey | Account key for Data Lake Storage Gen2. Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If this property isn't specified, the default Azure integration runtime is used. |No | >[!NOTE]
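A minimal sketch of an account-key-based Data Lake Storage Gen2 linked service built from these properties; the linked service name, account name, and key are placeholders:

```json
{
    "name": "AzureDataLakeStorageGen2LinkedService",
    "properties": {
        "type": "AzureBlobFS",
        "typeProperties": {
            "url": "https://<accountName>.dfs.core.windows.net",
            "accountKey": {
                "type": "SecureString",
                "value": "<account key>"
            }
        }
    }
}
```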
To use service principal authentication, follow these steps.
- **As sink**: In Storage Explorer, grant at least **Execute** permission for ALL upstream folders and the file system, along with **Write** permission for the sink folder. Alternatively, in Access control (IAM), grant at least the **Storage Blob Data Contributor** role. >[!NOTE]
->If you use Data Factory UI to author and the service principal is not set with "Storage Blob Data Reader/Contributor" role in IAM, when doing test connection or browsing/navigating folders, choose "Test connection to file path" or "Browse from specified path", and specify a path with **Read + Execute** permission to continue.
+>If you use the UI to author and the service principal is not set with "Storage Blob Data Reader/Contributor" role in IAM, when doing test connection or browsing/navigating folders, choose "Test connection to file path" or "Browse from specified path", and specify a path with **Read + Execute** permission to continue.
These properties are supported for the linked service:
These properties are supported for the linked service:
| url | Endpoint for Data Lake Storage Gen2 with the pattern of `https://<accountname>.dfs.core.windows.net`. | Yes | | servicePrincipalId | Specify the application's client ID. | Yes | | servicePrincipalCredentialType | The credential type to use for service principal authentication. Allowed values are **ServicePrincipalKey** and **ServicePrincipalCert**. | Yes |
-| servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the the application's key. Mark this field as **SecureString** to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault. | Yes |
-| servicePrincipalKey | Specify the application's key. Mark this field as **SecureString** to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> This property is still supported as-is for `servicePrincipalId` + `servicePrincipalKey`. As ADF adds new service principal certificate authentication, the new model for service principal authentication is `servicePrincipalId` + `servicePrincipalCredentialType` + `servicePrincipalCredential`. | No |
+| servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault. | Yes |
+| servicePrincipalKey | Specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> This property is still supported as-is for `servicePrincipalId` + `servicePrincipalKey`. As ADF adds new service principal certificate authentication, the new model for service principal authentication is `servicePrincipalId` + `servicePrincipalCredentialType` + `servicePrincipalCredential`. | No |
| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes |
-| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure Active Directory application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory's cloud environment is used. | No |
+| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure Active Directory application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory or Synapse pipeline's cloud environment is used. | No |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No | **Example: using service principal key authentication**
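A sketch of such a definition using a service principal key; the linked service name and all identifier values are placeholders:

```json
{
    "name": "AzureDataLakeStorageGen2LinkedService",
    "properties": {
        "type": "AzureBlobFS",
        "typeProperties": {
            "url": "https://<accountName>.dfs.core.windows.net",
            "servicePrincipalId": "<application (client) ID>",
            "servicePrincipalCredentialType": "ServicePrincipalKey",
            "servicePrincipalCredential": {
                "type": "SecureString",
                "value": "<application key>"
            },
            "tenant": "<tenant ID or domain>"
        }
    }
}
```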
You can also store service principal key in Azure Key Vault.
### <a name="managed-identity"></a> System-assigned managed identity authentication
-A data factory can be associated with a [system-assigned managed identity](data-factory-service-identity.md), which represents this specific data factory. You can directly use this system-assigned managed identity for Data Lake Storage Gen2 authentication, similar to using your own service principal. It allows this designated factory to access and copy data to or from your Data Lake Storage Gen2.
+A data factory or Synapse workspace can be associated with a [system-assigned managed identity](data-factory-service-identity.md). You can directly use this system-assigned managed identity for Data Lake Storage Gen2 authentication, similar to using your own service principal. It allows this designated factory or workspace to access and copy data to or from your Data Lake Storage Gen2.
To use system-assigned managed identity authentication, follow these steps.
-1. [Retrieve the Data Factory system-assigned managed identity information](data-factory-service-identity.md#retrieve-managed-identity) by copying the value of the **managed identity object ID** generated along with your factory.
+1. [Retrieve the system-assigned managed identity information](data-factory-service-identity.md#retrieve-managed-identity) by copying the value of the **managed identity object ID** generated along with your data factory or Synapse workspace.
2. Grant the system-assigned managed identity proper permission. See examples on how permission works in Data Lake Storage Gen2 from [Access control lists on files and directories](../storage/blobs/data-lake-storage-access-control.md#access-control-lists-on-files-and-directories).
These properties are supported for the linked service:
>If you use Data Factory UI to author and the managed identity is not set with "Storage Blob Data Reader/Contributor" role in IAM, when doing test connection or browsing/navigating folders, choose "Test connection to file path" or "Browse from specified path", and specify a path with **Read + Execute** permission to continue. >[!IMPORTANT]
->If you use PolyBase or COPY statement to load data from Data Lake Storage Gen2 into Azure Synapse Analytics, when you use managed identity authentication for Data Lake Storage Gen2, make sure you also follow steps 1 to 3 in [this guidance](../azure-sql/database/vnet-service-endpoint-rule-overview.md#impact-of-using-virtual-network-service-endpoints-with-azure-storage). Those steps will register your server with Azure AD and assign the Storage Blob Data Contributor role to your server. Data Factory handles the rest. If you configure Blob storage with an Azure Virtual Network endpoint, you also need to have **Allow trusted Microsoft services to access this storage account** turned on under Azure Storage account **Firewalls and Virtual networks** settings menu as required by Synapse.
+>If you use PolyBase or COPY statement to load data from Data Lake Storage Gen2 into Azure Synapse Analytics, when you use managed identity authentication for Data Lake Storage Gen2, make sure you also follow steps 1 to 3 in [this guidance](../azure-sql/database/vnet-service-endpoint-rule-overview.md#impact-of-using-virtual-network-service-endpoints-with-azure-storage). Those steps will register your server with Azure AD and assign the Storage Blob Data Contributor role to your server. Data Factory handles the rest. If you configure Blob storage with an Azure Virtual Network endpoint, you also need to have **Allow trusted Microsoft services to access this storage account** turned on under Azure Storage account **Firewalls and Virtual networks** settings menu as required by Azure Synapse.
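For reference, a minimal sketch of an Azure Data Lake Storage Gen2 linked service that relies on the system-assigned managed identity carries no explicit credential properties; the storage account name below is a placeholder:

```json
{
    "name": "AzureDataLakeStorageGen2LinkedService",
    "properties": {
        "type": "AzureBlobFS",
        "typeProperties": {
            "url": "https://<accountname>.dfs.core.windows.net"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```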
## Dataset properties
When you copy files from Amazon S3/Azure Blob/Azure Data Lake Storage Gen2 to Az
When you copy files from Azure Data Lake Storage Gen1/Gen2 to Gen2, you can choose to preserve the POSIX access control lists (ACLs) along with data. Learn more from [Preserve ACLs from Data Lake Storage Gen1/Gen2 to Gen2](copy-activity-preserve-metadata.md#preserve-acls). >[!TIP]
->To copy data from Azure Data Lake Storage Gen1 into Gen2 in general, see [Copy data from Azure Data Lake Storage Gen1 to Gen2 with Azure Data Factory](load-azure-data-lake-storage-gen2-from-gen1.md) for a walk-through and best practices.
+>To copy data from Azure Data Lake Storage Gen1 into Gen2 in general, see [Copy data from Azure Data Lake Storage Gen1 to Gen2](load-azure-data-lake-storage-gen2-from-gen1.md) for a walk-through and best practices.
## Mapping data flow properties
To learn details about the properties, check [Delete activity](delete-activity.m
## Next steps
-For a list of data stores supported as sources and sinks by the copy activity in Data Factory, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-lake-store.md
description: Learn how to copy data from supported source data stores to Azure D
+ Last updated 07/19/2021
data-factory Connector Azure Database For Mariadb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-database-for-mariadb.md
description: Learn how to copy data from Azure Database for MariaDB to supported
+ Last updated 09/04/2019
data-factory Connector Azure Database For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-database-for-mysql.md
Title: Copy and transform data in Azure Database for MySQL description: Learn how to copy and transform data in Azure Database for MySQL by using Azure Data Factory.--++ + Last updated 03/10/2021
data-factory Connector Azure Database For Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-database-for-postgresql.md
Title: Copy and transform data in Azure Database for PostgreSQL description: Learn how to copy and transform data in Azure Database for PostgreSQL by using Azure Data Factory.--++ + Last updated 06/16/2021
data-factory Connector Azure Databricks Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-databricks-delta-lake.md
Title: Copy data to and from Azure Databricks Delta Lake description: Learn how to copy data to and from Azure Databricks Delta Lake by using a copy activity in an Azure Data Factory pipeline.--++ + Last updated 06/16/2021
data-factory Connector Azure File Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-file-storage.md
description: Learn how to copy data from Azure File Storage to supported sink da
+ Last updated 03/17/2021
data-factory Connector Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-search.md
description: Learn about how to push or copy data to an Azure search index by us
+ Last updated 03/17/2021
data-factory Connector Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-data-warehouse.md
description: Learn how to copy data to and from Azure Synapse Analytics, and tra
+ Last updated 05/10/2021
-# Copy and transform data in Azure Synapse Analytics by using Azure Data Factory
+# Copy and transform data in Azure Synapse Analytics by using Azure Data Factory or Synapse pipelines
> [!div class="op_single_selector" title1="Select the version of Data Factory service you're using:"] >
Last updated 05/10/2021
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to use Copy Activity in Azure Data Factory to copy data from and to Azure Synapse Analytics, and use Data Flow to transform data in Azure Data Lake Storage Gen2. To learn about Azure Data Factory, read the [introductory article](introduction.md).
+This article outlines how to use Copy Activity in Azure Data Factory or Synapse pipelines to copy data from and to Azure Synapse Analytics, and use Data Flow to transform data in Azure Data Lake Storage Gen2. To learn about Azure Data Factory, read the [introductory article](introduction.md).
## Supported capabilities
For Copy activity, this Azure Synapse Analytics connector supports these functio
- As a sink, load data by using [PolyBase](#use-polybase-to-load-data-into-azure-synapse-analytics), [COPY statement](#use-copy-statement), or bulk insert. We recommend PolyBase or COPY statement for better copy performance. The connector also supports automatically creating the destination table, if it doesn't exist, based on the source schema.

> [!IMPORTANT]
-> If you copy data by using Azure Data Factory Integration Runtime, configure a [server-level firewall rule](../azure-sql/database/firewall-configure.md) so that Azure services can access the [logical SQL server](../azure-sql/database/logical-servers.md).
+> If you copy data by using an Azure integration runtime, configure a [server-level firewall rule](../azure-sql/database/firewall-configure.md) so that Azure services can access the [logical SQL server](../azure-sql/database/logical-servers.md).
> If you copy data by using a self-hosted integration runtime, configure the firewall to allow the appropriate IP range. This range includes the machine's IP that is used to connect to Azure Synapse Analytics. ## Get started
For Copy activity, this Azure Synapse Analytics connector supports these functio
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
-The following sections provide details about properties that define Data Factory entities specific to an Azure Synapse Analytics connector.
+The following sections provide details about properties that define Data Factory and Synapse pipeline entities specific to an Azure Synapse Analytics connector.
## Linked service properties
The following properties are supported for an Azure Synapse Analytics linked ser
| Property | Description | Required |
| :-- | :-- | :-- |
| type | The type property must be set to **AzureSqlDW**. | Yes |
-| connectionString | Specify the information needed to connect to the Azure Synapse Analytics instance for the **connectionString** property. <br/>Mark this field as a SecureString to store it securely in Data Factory. You can also put password/service principal key in Azure Key Vault,and if it's SQL authentication pull the `password` configuration out of the connection string. See the JSON example below the table and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article with more details. | Yes |
+| connectionString | Specify the information needed to connect to the Azure Synapse Analytics instance for the **connectionString** property. <br/>Mark this field as a SecureString to store it securely. You can also put the password/service principal key in Azure Key Vault, and if it's SQL authentication, pull the `password` configuration out of the connection string. See the JSON example below the table and the [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article for more details. | Yes |
| servicePrincipalId | Specify the application's client ID. | Yes, when you use Azure AD authentication with a service principal. |
-| servicePrincipalKey | Specify the application's key. Mark this field as a SecureString to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes, when you use Azure AD authentication with a service principal. |
+| servicePrincipalKey | Specify the application's key. Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes, when you use Azure AD authentication with a service principal. |
| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. You can retrieve it by hovering the mouse in the top-right corner of the Azure portal. | Yes, when you use Azure AD authentication with a service principal. |
-| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure AD application is registered. <br/> Allowed values are `AzurePublic`, `AzureChina`, `AzureUsGovernment`, and `AzureGermany`. By default, the data factory's cloud environment is used. | No |
+| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure AD application is registered. <br/> Allowed values are `AzurePublic`, `AzureChina`, `AzureUsGovernment`, and `AzureGermany`. By default, the data factory or Synapse pipeline's cloud environment is used. | No |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use Azure Integration Runtime or a self-hosted integration runtime (if your data store is located in a private network). If not specified, it uses the default Azure Integration Runtime. | No |

For different authentication types, refer to the following sections on prerequisites and JSON samples, respectively:
To use service principal-based Azure AD application token authentication, follow
3. **[Create contained database users](../azure-sql/database/authentication-aad-configure.md#create-contained-users-mapped-to-azure-ad-identities)** for the service principal. Connect to the data warehouse from or to which you want to copy data by using tools like SSMS, with an Azure AD identity that has at least ALTER ANY USER permission. Run the following T-SQL: ```sql
- CREATE USER [your application name] FROM EXTERNAL PROVIDER;
+ CREATE USER [your_application_name] FROM EXTERNAL PROVIDER;
``` 4. **Grant the service principal needed permissions** as you normally do for SQL users or others. Run the following code, or refer to more options [here](/sql/relational-databases/system-stored-procedures/sp-addrolemember-transact-sql). If you want to use PolyBase to load the data, learn the [required database permission](#required-database-permission).
To use service principal-based Azure AD application token authentication, follow
EXEC sp_addrolemember db_owner, [your application name]; ```
-5. **Configure an Azure Synapse Analytics linked service** in Azure Data Factory.
+5. **Configure an Azure Synapse Analytics linked service** in an Azure Data Factory or Synapse workspace.
#### Linked service example that uses service principal authentication
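Assuming the key is kept inline as a SecureString (a Key Vault reference works as well), such a linked service could be sketched as follows; all bracketed values are placeholders:

```json
{
    "name": "AzureSqlDWLinkedService",
    "properties": {
        "type": "AzureSqlDW",
        "typeProperties": {
            "connectionString": "Server=tcp:<servername>.database.windows.net,1433;Database=<databasename>;Connection Timeout=30",
            "servicePrincipalId": "<service principal id>",
            "servicePrincipalKey": {
                "type": "SecureString",
                "value": "<service principal key>"
            },
            "tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```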
To use service principal-based Azure AD application token authentication, follow
### <a name="managed-identity"></a> Managed identities for Azure resources authentication
-A data factory can be associated with a [managed identity for Azure resources](data-factory-service-identity.md) that represents the specific factory. You can use this managed identity for Azure Synapse Analytics authentication. The designated factory can access and copy data from or to your data warehouse by using this identity.
+A data factory or Synapse workspace can be associated with a [managed identity for Azure resources](data-factory-service-identity.md) that represents the resource. You can use this managed identity for Azure Synapse Analytics authentication. The designated resource can access and copy data from or to your data warehouse by using this identity.
To use managed identity authentication, follow these steps: 1. **[Provision an Azure Active Directory administrator](../azure-sql/database/authentication-aad-configure.md#provision-azure-ad-admin-sql-database)** for your server on the Azure portal if you haven't already done so. The Azure AD administrator can be an Azure AD user or Azure AD group. If you grant the group with managed identity an admin role, skip steps 3 and 4. The administrator will have full access to the database.
-2. **[Create contained database users](../azure-sql/database/authentication-aad-configure.md#create-contained-users-mapped-to-azure-ad-identities)** for the Data Factory Managed Identity. Connect to the data warehouse from or to which you want to copy data by using tools like SSMS, with an Azure AD identity that has at least ALTER ANY USER permission. Run the following T-SQL.
+2. **[Create contained database users](../azure-sql/database/authentication-aad-configure.md#create-contained-users-mapped-to-azure-ad-identities)** for the Managed Identity. Connect to the data warehouse from or to which you want to copy data by using tools like SSMS, with an Azure AD identity that has at least ALTER ANY USER permission. Run the following T-SQL.
```sql
- CREATE USER [your Data Factory name] FROM EXTERNAL PROVIDER;
+ CREATE USER [your_resource_name] FROM EXTERNAL PROVIDER;
```
-3. **Grant the Data Factory Managed Identity needed permissions** as you normally do for SQL users and others. Run the following code, or refer to more options [here](/sql/relational-databases/system-stored-procedures/sp-addrolemember-transact-sql). If you want to use PolyBase to load the data, learn the [required database permission](#required-database-permission).
+3. **Grant the Managed Identity needed permissions** as you normally do for SQL users and others. Run the following code, or refer to more options [here](/sql/relational-databases/system-stored-procedures/sp-addrolemember-transact-sql). If you want to use PolyBase to load the data, learn the [required database permission](#required-database-permission).
```sql
- EXEC sp_addrolemember db_owner, [your Data Factory name];
+ EXEC sp_addrolemember db_owner, [your_resource_name];
```
-4. **Configure an Azure Synapse Analytics linked service** in Azure Data Factory.
+4. **Configure an Azure Synapse Analytics linked service**.
**Example:**
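A minimal managed-identity sketch follows; because the identity is used implicitly, the connection string carries no user name or password (server and database names are placeholders):

```json
{
    "name": "AzureSqlDWLinkedService",
    "properties": {
        "type": "AzureSqlDW",
        "typeProperties": {
            "connectionString": "Server=tcp:<servername>.database.windows.net,1433;Database=<databasename>;Connection Timeout=30"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```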
GO
### <a name="azure-sql-data-warehouse-as-sink"></a> Azure Synapse Analytics as sink
-Azure Data Factory supports three ways to load data into Azure Synapse Analytics.
+Azure Data Factory and Synapse pipelines support three ways to load data into Azure Synapse Analytics.
![Azure Synapse Analytics sink copy options](./media/connector-azure-sql-data-warehouse/sql-dw-sink-copy-options.png)
To copy data to Azure Synapse Analytics, set the sink type in Copy Activity to *
| polyBaseSettings | A group of properties that can be specified when the `allowPolybase` property is set to **true**. | No.<br/>Apply when using PolyBase. |
| allowCopyCommand | Indicates whether to use [COPY statement](/sql/t-sql/statements/copy-into-transact-sql) to load data into Azure Synapse Analytics. `allowCopyCommand` and `allowPolyBase` cannot be both true. <br/><br/>See [Use COPY statement to load data into Azure Synapse Analytics](#use-copy-statement) section for constraints and details.<br/><br/>Allowed values are **True** and **False** (default). | No.<br>Apply when using COPY. |
| copyCommandSettings | A group of properties that can be specified when `allowCopyCommand` property is set to TRUE. | No.<br/>Apply when using COPY. |
-| writeBatchSize | Number of rows to inserts into the SQL table **per batch**.<br/><br/>The allowed value is **integer** (number of rows). By default, Data Factory dynamically determines the appropriate batch size based on the row size. | No.<br/>Apply when using bulk insert. |
+| writeBatchSize | Number of rows to insert into the SQL table **per batch**.<br/><br/>The allowed value is **integer** (number of rows). By default, the service dynamically determines the appropriate batch size based on the row size. | No.<br/>Apply when using bulk insert. |
| writeBatchTimeout | Wait time for the batch insert operation to finish before it times out.<br/><br/>The allowed value is **timespan**. Example: "00:30:00" (30 minutes). | No.<br/>Apply when using bulk insert. |
| preCopyScript | Specify a SQL query for Copy Activity to run before writing data into Azure Synapse Analytics in each run. Use this property to clean up the preloaded data. | No |
| tableOption | Specifies whether to [automatically create the sink table](copy-activity-overview.md#auto-create-sink-tables) if not exists based on the source schema. Allowed values are: `none` (default), `autoCreate`. |No |
-| disableMetricsCollection | Data Factory collects metrics such as Azure Synapse Analytics DWUs for copy performance optimization and recommendations, which introduce additional master DB access. If you are concerned with this behavior, specify `true` to turn it off. | No (default is `false`) |
+| disableMetricsCollection | The service collects metrics such as Azure Synapse Analytics DWUs for copy performance optimization and recommendations, which introduce additional master DB access. If you are concerned with this behavior, specify `true` to turn it off. | No (default is `false`) |
| maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No |

#### Azure Synapse Analytics sink example
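As a sketch rather than the article's full sample, a copy activity sink that loads with PolyBase and auto-creates the table could be configured roughly as follows; the reject thresholds shown are illustrative values only:

```json
"sink": {
    "type": "SqlDWSink",
    "allowPolyBase": true,
    "polyBaseSettings": {
        "rejectType": "percentage",
        "rejectValue": 10.0,
        "rejectSampleValue": 100,
        "useTypeDefault": true
    },
    "tableOption": "autoCreate",
    "disableMetricsCollection": false
}
```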
The Azure Synapse Analytics connector in copy activity provides built-in data pa
![Screenshot of partition options](./media/connector-sql-server/connector-sql-partition-options.png)
-When you enable partitioned copy, copy activity runs parallel queries against your Azure Synapse Analytics source to load data by partitions. The parallel degree is controlled by the [`parallelCopies`](copy-activity-performance-features.md#parallel-copy) setting on the copy activity. For example, if you set `parallelCopies` to four, Data Factory concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a portion of data from your Azure Synapse Analytics.
+When you enable partitioned copy, copy activity runs parallel queries against your Azure Synapse Analytics source to load data by partitions. The parallel degree is controlled by the [`parallelCopies`](copy-activity-performance-features.md#parallel-copy) setting on the copy activity. For example, if you set `parallelCopies` to four, the service concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a portion of data from your Azure Synapse Analytics.
We recommend that you enable parallel copy with data partitioning, especially when you load a large amount of data from your Azure Synapse Analytics. The following are suggested configurations for different scenarios. When copying data into a file-based data store, it's recommended to write to a folder as multiple files (only specify the folder name), in which case the performance is better than writing to a single file.

| Scenario | Suggested settings |
| --- | --- |
-| Full load from large table, with physical partitions. | **Partition option**: Physical partitions of table. <br><br/>During execution, Data Factory automatically detects the physical partitions, and copies data by partitions. <br><br/>To check if your table has physical partition or not, you can refer to [this query](#sample-query-to-check-physical-partition). |
-| Full load from large table, without physical partitions, while with an integer or datetime column for data partitioning. | **Partition options**: Dynamic range partition.<br>**Partition column** (optional): Specify the column used to partition data. If not specified, the index or primary key column is used.<br/>**Partition upper bound** and **partition lower bound** (optional): Specify if you want to determine the partition stride. This is not for filtering the rows in table, all rows in the table will be partitioned and copied. If not specified, copy activity auto detect the values.<br><br>For example, if your partition column "ID" has values range from 1 to 100, and you set the lower bound as 20 and the upper bound as 80, with parallel copy as 4, Data Factory retrieves data by 4 partitions - IDs in range <=20, [21, 50], [51, 80], and >=81, respectively. |
-| Load a large amount of data by using a custom query, without physical partitions, while with an integer or date/datetime column for data partitioning. | **Partition options**: Dynamic range partition.<br>**Query**: `SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`.<br>**Partition column**: Specify the column used to partition data.<br>**Partition upper bound** and **partition lower bound** (optional): Specify if you want to determine the partition stride. This is not for filtering the rows in table, all rows in the query result will be partitioned and copied. If not specified, copy activity auto detect the value.<br><br>During execution, Data Factory replaces `?AdfRangePartitionColumnName` with the actual column name and value ranges for each partition, and sends to Azure Synapse Analytics. <br>For example, if your partition column "ID" has values range from 1 to 100, and you set the lower bound as 20 and the upper bound as 80, with parallel copy as 4, Data Factory retrieves data by 4 partitions- IDs in range <=20, [21, 50], [51, 80], and >=81, respectively. <br><br>Here are more sample queries for different scenarios:<br> 1. Query the whole table: <br>`SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition`<br> 2. Query from a table with column selection and additional where-clause filters: <br>`SELECT <column_list> FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`<br> 3. Query with subqueries: <br>`SELECT <column_list> FROM (<your_sub_query>) AS T WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`<br> 4. Query with partition in subquery: <br>`SELECT <column_list> FROM (SELECT <your_sub_query_column_list> FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition) AS T`
+| Full load from large table, with physical partitions. | **Partition option**: Physical partitions of table. <br><br/>During execution, the service automatically detects the physical partitions, and copies data by partitions. <br><br/>To check whether your table has physical partitions, you can refer to [this query](#sample-query-to-check-physical-partition). |
+| Full load from large table, without physical partitions, but with an integer or datetime column for data partitioning. | **Partition options**: Dynamic range partition.<br>**Partition column** (optional): Specify the column used to partition data. If not specified, the index or primary key column is used.<br/>**Partition upper bound** and **partition lower bound** (optional): Specify if you want to determine the partition stride. This is not for filtering the rows in the table; all rows in the table will be partitioned and copied. If not specified, the copy activity auto-detects the values.<br><br>For example, if your partition column "ID" has values ranging from 1 to 100, and you set the lower bound as 20 and the upper bound as 80, with parallel copy as 4, the service retrieves data by 4 partitions - IDs in range <=20, [21, 50], [51, 80], and >=81, respectively. |
+| Load a large amount of data by using a custom query, without physical partitions, but with an integer or date/datetime column for data partitioning. | **Partition options**: Dynamic range partition.<br>**Query**: `SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`.<br>**Partition column**: Specify the column used to partition data.<br>**Partition upper bound** and **partition lower bound** (optional): Specify if you want to determine the partition stride. This is not for filtering the rows in the table; all rows in the query result will be partitioned and copied. If not specified, the copy activity auto-detects the value.<br><br>During execution, the service replaces `?AdfRangePartitionColumnName` with the actual column name and value ranges for each partition, and sends it to Azure Synapse Analytics. <br>For example, if your partition column "ID" has values ranging from 1 to 100, and you set the lower bound as 20 and the upper bound as 80, with parallel copy as 4, the service retrieves data by 4 partitions - IDs in range <=20, [21, 50], [51, 80], and >=81, respectively. <br><br>Here are more sample queries for different scenarios:<br> 1. Query the whole table: <br>`SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition`<br> 2. Query from a table with column selection and additional where-clause filters: <br>`SELECT <column_list> FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`<br> 3. Query with subqueries: <br>`SELECT <column_list> FROM (<your_sub_query>) AS T WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`<br> 4. Query with partition in subquery: <br>`SELECT <column_list> FROM (SELECT <your_sub_query_column_list> FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition) AS T`
Best practices to load data with partition option:
2. If the table has built-in partitions, use the partition option "Physical partitions of table" to get better performance.
3. If you use Azure Integration Runtime to copy data, you can set larger "[Data Integration Units (DIU)](copy-activity-performance-features.md#data-integration-units)" (>4) to utilize more computing resources. Check the applicable scenarios there.
4. "[Degree of copy parallelism](copy-activity-performance-features.md#parallel-copy)" controls the partition numbers; setting this number too large sometimes hurts performance. We recommend setting this number to (DIU or number of Self-hosted IR nodes) * (2 to 4).
-5. Note Azure Synapse Analytics can execute a maximum of 32 queries at a moment, setting "Degree of copy parallelism" too large may cause Synapse throttling issue.
+5. Note that Azure Synapse Analytics can execute a maximum of 32 queries at a time; setting "Degree of copy parallelism" too large may cause an Azure Synapse throttling issue.
**Example: full load from large table with physical partitions**
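As a minimal sketch of the relevant piece, the copy activity source for this scenario only needs the partition option turned on; the physical partitions themselves are detected at run time:

```json
"source": {
    "type": "SqlDWSource",
    "partitionOption": "PhysicalPartitionsOfTable"
}
```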
Using [PolyBase](/sql/relational-databases/polybase/polybase-guide) is an effici
- If your source data store and format isn't originally supported by PolyBase, use the **[Staged copy by using PolyBase](#staged-copy-by-using-polybase)** feature instead. The staged copy feature also provides you better throughput. It automatically converts the data into PolyBase-compatible format, stores the data in Azure Blob storage, then calls PolyBase to load data into Azure Synapse Analytics. > [!TIP]
-> Learn more on [Best practices for using PolyBase](#best-practices-for-using-polybase). When using PolyBase with Azure Integration Runtime, effective [Data Integration Units (DIU)](copy-activity-performance-features.md#data-integration-units) for direct or staged storage-to-Synapse is always 2. Tuning the DIU doesn't impact the performance, as loading data from storage is powered by Synapse engine.
+> Learn more on [Best practices for using PolyBase](#best-practices-for-using-polybase). When using PolyBase with Azure Integration Runtime, effective [Data Integration Units (DIU)](copy-activity-performance-features.md#data-integration-units) for direct or staged storage-to-Synapse is always 2. Tuning the DIU doesn't impact the performance, as loading data from storage is powered by the Azure Synapse engine.
The following PolyBase settings are supported under `polyBaseSettings` in copy activity:
Azure Synapse Analytics PolyBase directly supports Azure Blob, Azure Data Lake S
> [!TIP] > To copy data efficiently to Azure Synapse Analytics, learn more from [Azure Data Factory makes it even easier and convenient to uncover insights from data when using Data Lake Store with Azure Synapse Analytics](/archive/blogs/azuredatalake/azure-data-factory-makes-it-even-easier-and-convenient-to-uncover-insights-from-data-when-using-data-lake-store-with-sql-data-warehouse).
-If the requirements aren't met, Azure Data Factory checks the settings and automatically falls back to the BULKINSERT mechanism for the data movement.
+If the requirements aren't met, the service checks the settings and automatically falls back to the BULKINSERT mechanism for the data movement.
1. The **source linked service** is with the following types and authentication methods:
If the requirements aren't met, Azure Data Factory checks the settings and autom
3. `rowDelimiter` is **default**, **\n**, **\r\n**, or **\r**. 4. `nullValue` is left as default or set to **empty string** (""), and `treatEmptyAsNull` is left as default or set to true. 5. `encodingName` is left as default or set to **utf-8**.
- 6. `quoteChar`, `escapeChar`, and `skipLineCount` aren't specified. PolyBase support skip header row, which can be configured as `firstRowAsHeader` in ADF.
+ 6. `quoteChar`, `escapeChar`, and `skipLineCount` aren't specified. PolyBase supports skipping the header row, which can be configured as `firstRowAsHeader`.
7. `compression` can be **no compression**, **``GZip``**, or **Deflate**. 3. If your source is a folder, `recursive` in copy activity must be set to true.
If the requirements aren't met, Azure Data Factory checks the settings and autom
### Staged copy by using PolyBase
-When your source data is not natively compatible with PolyBase, enable data copying via an interim staging Azure Blob or Azure Data Lake Storage Gen2 (it can't be Azure Premium Storage). In this case, Azure Data Factory automatically converts the data to meet the data format requirements of PolyBase. Then it invokes PolyBase to load data into Azure Synapse Analytics. Finally, it cleans up your temporary data from the storage. See [Staged copy](copy-activity-performance-features.md#staged-copy) for details about copying data via a staging.
+When your source data is not natively compatible with PolyBase, enable data copying via an interim staging Azure Blob or Azure Data Lake Storage Gen2 (it can't be Azure Premium Storage). In this case, the service automatically converts the data to meet the data format requirements of PolyBase. Then it invokes PolyBase to load data into Azure Synapse Analytics. Finally, it cleans up your temporary data from the storage. See [Staged copy](copy-activity-performance-features.md#staged-copy) for details about copying data via a staging.
To use this feature, create an [Azure Blob Storage linked service](connector-azure-blob-storage.md#linked-service-properties) or [Azure Data Lake Storage Gen2 linked service](connector-azure-data-lake-storage.md#linked-service-properties) with **account key or managed identity authentication** that refers to the Azure storage account as the interim storage.
PolyBase loads are limited to rows smaller than 1 MB. It cannot be used to load
When your source data has rows greater than 1 MB, you might want to vertically split the source tables into several small ones. Make sure that the largest size of each row doesn't exceed the limit. The smaller tables can then be loaded by using PolyBase and merged together in Azure Synapse Analytics.
-Alternatively, for data with such wide columns, you can use non-PolyBase to load the data using ADF, by turning off "allow PolyBase" setting.
+Alternatively, for data with such wide columns, you can use non-PolyBase to load the data by turning off the "allow PolyBase" setting.
#### Azure Synapse Analytics resource class
Type=System.Data.SqlClient.SqlException,Message=Invalid object name 'stg.Account
#### Columns with default values
-Currently, the PolyBase feature in Data Factory accepts only the same number of columns as in the target table. An example is a table with four columns where one of them is defined with a default value. The input data still needs to have four columns. A three-column input dataset yields an error similar to the following message:
+Currently, the PolyBase feature accepts only the same number of columns as in the target table. An example is a table with four columns where one of them is defined with a default value. The input data still needs to have four columns. A three-column input dataset yields an error similar to the following message:
```output All columns of the table must be specified in the INSERT BULK statement.
For more information, see [Grant permissions to managed identity after workspace
## <a name="use-copy-statement"></a> Use COPY statement to load data into Azure Synapse Analytics
-Azure Synapse Analytics [COPY statement](/sql/t-sql/statements/copy-into-transact-sql) directly supports loading data from **Azure Blob and Azure Data Lake Storage Gen2**. If your source data meets the criteria described in this section, you can choose to use COPY statement in ADF to load data into Azure Synapse Analytics. Azure Data Factory checks the settings and fails the copy activity run if the criteria is not met.
+Azure Synapse Analytics [COPY statement](/sql/t-sql/statements/copy-into-transact-sql) directly supports loading data from **Azure Blob and Azure Data Lake Storage Gen2**. If your source data meets the criteria described in this section, you can choose to use COPY statement to load data into Azure Synapse Analytics. Azure Data Factory checks the settings and fails the copy activity run if the criteria is not met.
>[!NOTE]
->Currently Data Factory only support copy from COPY statement compatible sources mentioned below.
+>Currently the service only supports copying from the COPY statement compatible sources mentioned below.
>[!TIP]
->When using COPY statement with Azure Integration Runtime, effective [Data Integration Units (DIU)](copy-activity-performance-features.md#data-integration-units) is always 2. Tuning the DIU doesn't impact the performance, as loading data from storage is powered by Synapse engine.
+>When using COPY statement with Azure Integration Runtime, effective [Data Integration Units (DIU)](copy-activity-performance-features.md#data-integration-units) is always 2. Tuning the DIU doesn't impact the performance, as loading data from storage is powered by the Azure Synapse engine.
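For orientation, a copy sink that opts into the COPY statement can be as simple as the following sketch (additional `copyCommandSettings` are optional and omitted here):

```json
"sink": {
    "type": "SqlDWSink",
    "allowCopyCommand": true
}
```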
Using COPY statement supports the following configuration:
Settings specific to Azure Synapse Analytics are available in the **Source Optio
**Input** Select whether you point your source at a table (equivalent of ```Select * from <table-name>```) or enter a custom SQL query.
-**Enable Staging** It is highly recommended that you use this option in production workloads with Azure Synapse Analytics sources. When you execute a [data flow activity](control-flow-execute-data-flow-activity.md) with Azure Synapse Analytics sources from a pipeline, ADF will prompt you for a staging location storage account and will use that for staged data loading. It is the fastest mechanism to load data from Azure Synapse Analytics.
+**Enable Staging** It is highly recommended that you use this option in production workloads with Azure Synapse Analytics sources. When you execute a [data flow activity](control-flow-execute-data-flow-activity.md) with Azure Synapse Analytics sources from a pipeline, you will be prompted for a staging location storage account, and the service will use it for staged data loading. It is the fastest mechanism to load data from Azure Synapse Analytics.
- When you use managed identity authentication for your storage linked service, learn the needed configurations for [Azure Blob](connector-azure-blob-storage.md#managed-identity) and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#managed-identity) respectively. - If your Azure Storage is configured with VNet service endpoint, you must use managed identity authentication with "allow trusted Microsoft service" enabled on storage account, refer to [Impact of using VNet Service Endpoints with Azure storage](../azure-sql/database/vnet-service-endpoint-rule-overview.md#impact-of-using-virtual-network-service-endpoints-with-azure-storage).
Settings specific to Azure Synapse Analytics are available in the **Source Optio
SQL Example: ```Select * from MyTable where customerId > 1000 and customerId < 2000```
-**Batch size**: Enter a batch size to chunk large data into reads. In data flows, ADF will use this setting to set Spark columnar caching. This is an option field, which will use Spark defaults if it is left blank.
+**Batch size**: Enter a batch size to chunk large data into reads. In data flows, this setting will be used to set Spark columnar caching. This is an optional field, which will use Spark defaults if it is left blank.
**Isolation Level**: The default for SQL sources in mapping data flow is read uncommitted. You can change the isolation level here to one of these values:
When writing to Azure Synapse Analytics, certain rows of data may fail due to co
* Cannot insert the value NULL into column * Conversion failed when converting the value to data type
-By default, a data flow run will fail on the first error it gets. You can choose to **Continue on error** that allows your data flow to complete even if individual rows have errors. Azure Data Factory provides different options for you to handle these error rows.
+By default, a data flow run will fail on the first error it gets. You can choose **Continue on error**, which allows your data flow to complete even if individual rows have errors. The service provides different options for you to handle these error rows.
**Transaction Commit:** Choose whether your data gets written in a single transaction or in batches. Single transaction will provide better performance and no data written will be visible to others until the transaction completes. Batch transactions have worse performance but can work for large datasets.
To learn details about the properties, check [GetMetadata activity](control-flow
## Data type mapping for Azure Synapse Analytics
-When you copy data from or to Azure Synapse Analytics, the following mappings are used from Azure Synapse Analytics data types to Azure Data Factory interim data types. See [schema and data type mappings](copy-activity-schema-and-type-mapping.md) to learn how Copy Activity maps the source schema and data type to the sink.
+When you copy data from or to Azure Synapse Analytics, the following mappings are used from Azure Synapse Analytics data types to Azure Data Factory interim data types. These mappings are also used when copying data from or to Azure Synapse Analytics using Synapse pipelines, since pipelines also implement Azure Data Factory within Azure Synapse. See [schema and data type mappings](copy-activity-schema-and-type-mapping.md) to learn how Copy Activity maps the source schema and data type to the sink.
>[!TIP] >Refer to [Table data types in Azure Synapse Analytics](../synapse-analytics/sql/develop-tables-data-types.md) article on Azure Synapse Analytics supported data types and the workarounds for unsupported ones.
When you copy data from or to Azure Synapse Analytics, the following mappings ar
## Next steps
-For a list of data stores supported as sources and sinks by Copy Activity in Azure Data Factory, see [supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
+For a list of data stores supported as sources and sinks by Copy Activity, see [supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-database.md
Title: Copy and transform data in Azure SQL Database
-description: Learn how to copy data to and from Azure SQL Database, and transform data in Azure SQL Database by using Azure Data Factory.
+description: Learn how to copy data to and from Azure SQL Database, and transform data in Azure SQL Database using Azure Data Factory or Azure Synapse Analytics pipelines.
+ Last updated 06/15/2021
-# Copy and transform data in Azure SQL Database by using Azure Data Factory
+# Copy and transform data in Azure SQL Database by using Azure Data Factory or Azure Synapse Analytics
> [!div class="op_single_selector" title1="Select the version of Azure Data Factory that you're using:"] >
Last updated 06/15/2021
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to use Copy Activity in Azure Data Factory to copy data from and to Azure SQL Database, and use Data Flow to transform data in Azure SQL Database. To learn about Azure Data Factory, read the [introductory article](introduction.md).
+This article outlines how to use Copy Activity in Azure Data Factory or Azure Synapse pipelines to copy data from and to Azure SQL Database, and use Data Flow to transform data in Azure SQL Database. To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
## Supported capabilities
If you use Azure SQL Database [serverless tier](../azure-sql/database/serverless
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
-The following sections provide details about properties that are used to define Azure Data Factory entities specific to an Azure SQL Database connector.
+The following sections provide details about properties that are used to define Azure Data Factory or Synapse pipeline entities specific to an Azure SQL Database connector.
## Linked service properties
These properties are supported for an Azure SQL Database linked service:
| type | The **type** property must be set to **AzureSqlDatabase**. | Yes |
| connectionString | Specify information needed to connect to the Azure SQL Database instance for the **connectionString** property. <br/>You can also put a password or service principal key in Azure Key Vault. If it's SQL authentication, pull the `password` configuration out of the connection string. For more information, see the JSON example following the table and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
| servicePrincipalId | Specify the application's client ID. | Yes, when you use Azure AD authentication with a service principal |
-| servicePrincipalKey | Specify the application's key. Mark this field as **SecureString** to store it securely in Azure Data Factory or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes, when you use Azure AD authentication with a service principal |
+| servicePrincipalKey | Specify the application's key. Mark this field as **SecureString** to store it securely or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes, when you use Azure AD authentication with a service principal |
| tenant | Specify the tenant information, like the domain name or tenant ID, under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes, when you use Azure AD authentication with a service principal |
-| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure AD application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory's cloud environment is used. | No |
+| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure AD application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory or Synapse pipeline's cloud environment is used. | No |
| alwaysEncryptedSettings | Specify **alwaysencryptedsettings** information that's needed to enable Always Encrypted to protect sensitive data stored in SQL server by using either managed identity or service principal. For more information, see the JSON example following the table and [Using Always Encrypted](#using-always-encrypted) section. If not specified, the default always encrypted setting is disabled. |No |
| connectVia | This [integration runtime](concepts-integration-runtime.md) is used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is located in a private network. If not specified, the default Azure integration runtime is used. | No |
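As an illustration only, a SQL-authentication linked service with the connection string inline and the password pulled from Azure Key Vault might be sketched like this; every bracketed value is a placeholder:

```json
{
    "name": "AzureSqlDbLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "Data Source=tcp:<servername>.database.windows.net,1433;Initial Catalog=<databasename>;User ID=<username>@<servername>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30",
            "password": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "<Azure Key Vault linked service name>",
                    "type": "LinkedServiceReference"
                },
                "secretName": "<secret name that stores the password>"
            }
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```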
To use a service principal-based Azure AD application token authentication, foll
ALTER ROLE [role name] ADD MEMBER [your application name]; ```
-5. Configure an Azure SQL Database linked service in Azure Data Factory.
+5. Configure an Azure SQL Database linked service in an Azure Data Factory or Synapse workspace.
#### Linked service example that uses service principal authentication
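Assuming the key is stored inline as a SecureString (a Key Vault reference works too), such a linked service could be sketched as follows; the bracketed values are placeholders:

```json
{
    "name": "AzureSqlDbLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "Data Source=tcp:<servername>.database.windows.net,1433;Initial Catalog=<databasename>;Connection Timeout=30",
            "servicePrincipalId": "<service principal id>",
            "servicePrincipalKey": {
                "type": "SecureString",
                "value": "<service principal key>"
            },
            "tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```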
To use a service principal-based Azure AD application token authentication, foll
### <a name="managed-identity"></a> Managed identities for Azure resources authentication
-A data factory can be associated with a [managed identity for Azure resources](data-factory-service-identity.md) that represents the specific data factory. You can use this managed identity for Azure SQL Database authentication. The designated factory can access and copy data from or to your database by using this identity.
+A data factory or Synapse workspace can be associated with a [managed identity for Azure resources](data-factory-service-identity.md) that represents the service when authenticating to other resources in Azure. You can use this managed identity for Azure SQL Database authentication. The designated factory or Synapse workspace can access and copy data from or to your database by using this identity.
To use managed identity authentication, follow these steps. 1. [Provision an Azure Active Directory administrator](../azure-sql/database/authentication-aad-configure.md#provision-azure-ad-admin-sql-database) for your server on the Azure portal if you haven't already done so. The Azure AD administrator can be an Azure AD user or an Azure AD group. If you grant the group with managed identity an admin role, skip steps 3 and 4. The administrator has full access to the database.
-2. [Create contained database users](../azure-sql/database/authentication-aad-configure.md#create-contained-users-mapped-to-azure-ad-identities) for the Azure Data Factory managed identity. Connect to the database from or to which you want to copy data by using tools like SQL Server Management Studio, with an Azure AD identity that has at least ALTER ANY USER permission. Run the following T-SQL:
+2. [Create contained database users](../azure-sql/database/authentication-aad-configure.md#create-contained-users-mapped-to-azure-ad-identities) for the managed identity. Connect to the database from or to which you want to copy data by using tools like SQL Server Management Studio, with an Azure AD identity that has at least ALTER ANY USER permission. Run the following T-SQL:
```sql
- CREATE USER [your Data Factory name] FROM EXTERNAL PROVIDER;
+ CREATE USER [your_resource_name] FROM EXTERNAL PROVIDER;
```
-3. Grant the Data Factory managed identity needed permissions as you normally do for SQL users and others. Run the following code. For more options, see [this document](/sql/relational-databases/system-stored-procedures/sp-addrolemember-transact-sql).
+3. Grant the managed identity needed permissions as you normally do for SQL users and others. Run the following code. For more options, see [this document](/sql/relational-databases/system-stored-procedures/sp-addrolemember-transact-sql).
```sql
- ALTER ROLE [role name] ADD MEMBER [your Data Factory name];
+ ALTER ROLE [role name] ADD MEMBER [your_resource_name];
```
-4. Configure an Azure SQL Database linked service in Azure Data Factory.
+4. Configure an Azure SQL Database linked service.
**Example**
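A minimal managed-identity sketch follows; because the identity is used implicitly, the connection string carries no user name or password (server and database names are placeholders):

```json
{
    "name": "AzureSqlDbLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "Data Source=tcp:<servername>.database.windows.net,1433;Initial Catalog=<databasename>;Connection Timeout=30"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```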
To copy data to Azure SQL Database, the following properties are supported in th
| storedProcedureTableTypeParameterName |The parameter name of the table type specified in the stored procedure. |No |
| sqlWriterTableType |The table type name to be used in the stored procedure. The copy activity makes the data being moved available in a temp table with this table type. Stored procedure code can then merge the data that's being copied with existing data. |No |
| storedProcedureParameters |Parameters for the stored procedure.<br/>Allowed values are name and value pairs. Names and casing of parameters must match the names and casing of the stored procedure parameters. | No |
-| writeBatchSize | Number of rows to insert into the SQL table *per batch*.<br/> The allowed value is **integer** (number of rows). By default, Azure Data Factory dynamically determines the appropriate batch size based on the row size. | No |
+| writeBatchSize | Number of rows to insert into the SQL table *per batch*.<br/> The allowed value is **integer** (number of rows). By default, the service dynamically determines the appropriate batch size based on the row size. | No |
| writeBatchTimeout | The wait time for the batch insert operation to finish before it times out.<br/> The allowed value is **timespan**. An example is "00:30:00" (30 minutes). | No |
-| disableMetricsCollection | Data Factory collects metrics such as Azure SQL Database DTUs for copy performance optimization and recommendations, which introduces additional master DB access. If you are concerned with this behavior, specify `true` to turn it off. | No (default is `false`) |
+| disableMetricsCollection | The service collects metrics such as Azure SQL Database DTUs for copy performance optimization and recommendations, which introduces additional master DB access. If you are concerned with this behavior, specify `true` to turn it off. | No (default is `false`) |
| maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No |

**Example 1: Append data**
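As a rough sketch, a plain append (bulk insert) sink needs little more than the sink type; the batch values below are illustrative, not recommendations from this article:

```json
"sink": {
    "type": "AzureSqlSink",
    "writeBatchSize": 100000,
    "writeBatchTimeout": "00:30:00",
    "disableMetricsCollection": false
}
```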
The Azure SQL Database connector in copy activity provides built-in data partiti
![Screenshot of partition options](./media/connector-sql-server/connector-sql-partition-options.png)
-When you enable partitioned copy, copy activity runs parallel queries against your Azure SQL Database source to load data by partitions. The parallel degree is controlled by the [`parallelCopies`](copy-activity-performance-features.md#parallel-copy) setting on the copy activity. For example, if you set `parallelCopies` to four, Data Factory concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a portion of data from your Azure SQL Database.
+When you enable partitioned copy, copy activity runs parallel queries against your Azure SQL Database source to load data by partitions. The parallel degree is controlled by the [`parallelCopies`](copy-activity-performance-features.md#parallel-copy) setting on the copy activity. For example, if you set `parallelCopies` to four, the service concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a portion of data from your Azure SQL Database.
We recommend that you enable parallel copy with data partitioning, especially when you load a large amount of data from your Azure SQL Database. The following are suggested configurations for different scenarios. When copying data into a file-based data store, it's recommended to write to a folder as multiple files (only specify the folder name), in which case the performance is better than writing to a single file.

| Scenario | Suggested settings |
| --- | --- |
-| Full load from large table, with physical partitions. | **Partition option**: Physical partitions of table. <br><br/>During execution, Data Factory automatically detects the physical partitions, and copies data by partitions. <br><br/>To check if your table has physical partition or not, you can refer to [this query](#sample-query-to-check-physical-partition). |
-| Full load from large table, without physical partitions, while with an integer or datetime column for data partitioning. | **Partition options**: Dynamic range partition.<br>**Partition column** (optional): Specify the column used to partition data. If not specified, the index or primary key column is used.<br/>**Partition upper bound** and **partition lower bound** (optional): Specify if you want to determine the partition stride. This is not for filtering the rows in table, all rows in the table will be partitioned and copied. If not specified, copy activity auto detect the values.<br><br>For example, if your partition column "ID" has values range from 1 to 100, and you set the lower bound as 20 and the upper bound as 80, with parallel copy as 4, Data Factory retrieves data by 4 partitions - IDs in range <=20, [21, 50], [51, 80], and >=81, respectively. |
-| Load a large amount of data by using a custom query, without physical partitions, while with an integer or date/datetime column for data partitioning. | **Partition options**: Dynamic range partition.<br>**Query**: `SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`.<br>**Partition column**: Specify the column used to partition data.<br>**Partition upper bound** and **partition lower bound** (optional): Specify if you want to determine the partition stride. This is not for filtering the rows in table, all rows in the query result will be partitioned and copied. If not specified, copy activity auto detect the value.<br><br>During execution, Data Factory replaces `?AdfRangePartitionColumnName` with the actual column name and value ranges for each partition, and sends to Azure SQL Database. <br>For example, if your partition column "ID" has values range from 1 to 100, and you set the lower bound as 20 and the upper bound as 80, with parallel copy as 4, Data Factory retrieves data by 4 partitions- IDs in range <=20, [21, 50], [51, 80], and >=81, respectively. <br><br>Here are more sample queries for different scenarios:<br> 1. Query the whole table: <br>`SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition`<br> 2. Query from a table with column selection and additional where-clause filters: <br>`SELECT <column_list> FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`<br> 3. Query with subqueries: <br>`SELECT <column_list> FROM (<your_sub_query>) AS T WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`<br> 4. Query with partition in subquery: <br>`SELECT <column_list> FROM (SELECT <your_sub_query_column_list> FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition) AS T`
+| Full load from large table, with physical partitions. | **Partition option**: Physical partitions of table. <br><br/>During execution, the service automatically detects the physical partitions, and copies data by partitions. <br><br/>To check if your table has physical partition or not, you can refer to [this query](#sample-query-to-check-physical-partition). |
+| Full load from large table, without physical partitions, but with an integer or datetime column for data partitioning. | **Partition options**: Dynamic range partition.<br>**Partition column** (optional): Specify the column used to partition data. If not specified, the index or primary key column is used.<br/>**Partition upper bound** and **partition lower bound** (optional): Specify if you want to determine the partition stride. This is not for filtering the rows in the table; all rows in the table will be partitioned and copied. If not specified, the copy activity auto-detects the values.<br><br>For example, if your partition column "ID" has values ranging from 1 to 100, and you set the lower bound as 20 and the upper bound as 80, with parallel copy as 4, the service retrieves data by 4 partitions - IDs in range <=20, [21, 50], [51, 80], and >=81, respectively. |
+| Load a large amount of data by using a custom query, without physical partitions, but with an integer or date/datetime column for data partitioning. | **Partition options**: Dynamic range partition.<br>**Query**: `SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`.<br>**Partition column**: Specify the column used to partition data.<br>**Partition upper bound** and **partition lower bound** (optional): Specify if you want to determine the partition stride. This is not for filtering the rows in the table; all rows in the query result will be partitioned and copied. If not specified, the copy activity auto-detects the value.<br><br>During execution, the service replaces `?AdfRangePartitionColumnName` with the actual column name and value ranges for each partition, and sends them to Azure SQL Database. <br>For example, if your partition column "ID" has values ranging from 1 to 100, and you set the lower bound as 20 and the upper bound as 80, with parallel copy as 4, the service retrieves data by 4 partitions - IDs in range <=20, [21, 50], [51, 80], and >=81, respectively. <br><br>Here are more sample queries for different scenarios:<br> 1. Query the whole table: <br>`SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition`<br> 2. Query from a table with column selection and additional where-clause filters: <br>`SELECT <column_list> FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`<br> 3. Query with subqueries: <br>`SELECT <column_list> FROM (<your_sub_query>) AS T WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`<br> 4. Query with partition in subquery: <br>`SELECT <column_list> FROM (SELECT <your_sub_query_column_list> FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition) AS T`
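To make the dynamic range option above more concrete, here is a minimal, hypothetical sketch of a copy activity source that uses it. The dataset names, column name, and bounds are placeholders for illustration, not values defined in this article.

```json
"activities": [
    {
        "name": "CopyFromAzureSqlWithDynamicRange",
        "type": "Copy",
        "inputs": [ { "referenceName": "<Azure SQL Database input dataset>", "type": "DatasetReference" } ],
        "outputs": [ { "referenceName": "<output dataset>", "type": "DatasetReference" } ],
        "typeProperties": {
            "source": {
                "type": "AzureSqlSource",
                "sqlReaderQuery": "SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition",
                "partitionOption": "DynamicRange",
                "partitionSettings": {
                    "partitionColumnName": "<integer or datetime column>",
                    "partitionLowerBound": "<lower bound, optional>",
                    "partitionUpperBound": "<upper bound, optional>"
                }
            },
            "sink": {
                "type": "<sink type>"
            },
            "parallelCopies": 4
        }
    }
]
```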
Best practices to load data with partition option:
When you copy data into Azure SQL Database, you might require different write be
- [Overwrite](#overwrite-the-entire-table): I want to reload an entire dimension table each time.
- [Write with custom logic](#write-data-with-custom-logic): I need extra processing before the final insertion into the destination table.
-Refer to the respective sections about how to configure in Azure Data Factory and best practices.
+Refer to the respective sections for details about how to configure these behaviors in the service and for related best practices.
### Append data
-Appending data is the default behavior of this Azure SQL Database sink connector. Azure Data Factory does a bulk insert to write to your table efficiently. You can configure the source and sink accordingly in the copy activity.
+Appending data is the default behavior of this Azure SQL Database sink connector. The service does a bulk insert to write to your table efficiently. You can configure the source and sink accordingly in the copy activity.
### Upsert data
Appending data is the default behavior of this Azure SQL Database sink connector
Copy activity currently doesn't natively support loading data into a database temporary table. There is an advanced way to set it up with a combination of multiple activities; refer to [Optimize Azure SQL Database Bulk Upsert scenarios](https://github.com/scoriani/azuresqlbulkupsert). The following shows a sample of using a permanent table as staging.
-As an example, in Azure Data Factory, you can create a pipeline with a **Copy activity** chained with a **Stored Procedure activity**. The former copies data from your source store into an Azure SQL Database staging table, for example, **UpsertStagingTable**, as the table name in the dataset. Then the latter invokes a stored procedure to merge source data from the staging table into the target table and clean up the staging table.
+As an example, you can create a pipeline with a **Copy activity** chained with a **Stored Procedure activity**. The former copies data from your source store into an Azure SQL Database staging table, for example, **UpsertStagingTable**, as the table name in the dataset. Then the latter invokes a stored procedure to merge source data from the staging table into the target table and clean up the staging table.
![Upsert](./media/connector-azure-sql-database/azure-sql-database-upsert.png)
END
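As a rough sketch of how that chaining might be expressed (the activity, dataset, linked service, and procedure names below are hypothetical placeholders, not values from this article), the pipeline JSON could resemble the following, with the Stored Procedure activity depending on the Copy activity's success:

```json
"activities": [
    {
        "name": "CopyToUpsertStagingTable",
        "type": "Copy",
        "inputs": [ { "referenceName": "<source dataset>", "type": "DatasetReference" } ],
        "outputs": [ { "referenceName": "<staging table dataset, e.g. UpsertStagingTable>", "type": "DatasetReference" } ],
        "typeProperties": {
            "source": { "type": "<source type>" },
            "sink": { "type": "AzureSqlSink" }
        }
    },
    {
        "name": "MergeStagingIntoTarget",
        "type": "SqlServerStoredProcedure",
        "dependsOn": [
            { "activity": "CopyToUpsertStagingTable", "dependencyConditions": [ "Succeeded" ] }
        ],
        "linkedServiceName": { "referenceName": "<Azure SQL Database linked service>", "type": "LinkedServiceReference" },
        "typeProperties": {
            "storedProcedureName": "<procedure that merges staging data into the target table and cleans up staging>"
        }
    }
]
```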
### Overwrite the entire table
-You can configure the **preCopyScript** property in the copy activity sink. In this case, for each copy activity that runs, Azure Data Factory runs the script first. Then it runs the copy to insert the data. For example, to overwrite the entire table with the latest data, specify a script to first delete all the records before you bulk load the new data from the source.
+You can configure the **preCopyScript** property in the copy activity sink. In this case, for each copy activity that runs, the service runs the script first. Then it runs the copy to insert the data. For example, to overwrite the entire table with the latest data, specify a script to first delete all the records before you bulk load the new data from the source.
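A minimal sketch of such a sink, assuming a placeholder table name rather than anything defined in this article, might look like this:

```json
"sink": {
    "type": "AzureSqlSink",
    "preCopyScript": "DELETE FROM <target table name>"
}
```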
### Write data with custom logic
The following sample shows how to use a stored procedure to do an upsert into a
END ```
-3. In Azure Data Factory, define the **SQL sink** section in the copy activity as follows:
+3. In your Azure Data Factory or Synapse pipeline, define the **SQL sink** section in the copy activity as follows:
```json
"sink": {
Settings specific to Azure SQL Database are available in the **Source Options**
**Query**: If you select Query in the input field, enter a SQL query for your source. This setting overrides any table that you've chosen in the dataset. **Order By** clauses aren't supported here, but you can set a full SELECT FROM statement. You can also use user-defined table functions. **select * from udfGetData()** is a UDF in SQL that returns a table. This query will produce a source table that you can use in your data flow. Using queries is also a great way to reduce rows for testing or for lookups.
-**Stored procedure**: Choose this option if you wish to generate a projection and source data from a stored procedure that is executed from your source database. You can type in the schema, procedure name, and parameters, or click on Refresh to ask ADF to discover the schemas and procedure names. Then you can click on Import to import all procedure parameters using the form ``@paraName``.
+**Stored procedure**: Choose this option if you wish to generate a projection and source data from a stored procedure that is executed from your source database. You can type in the schema, procedure name, and parameters, or click Refresh to ask the service to discover the schemas and procedure names. Then you can click Import to import all procedure parameters using the form ``@paraName``.
![Stored procedure](media/data-flow/stored-procedure-2.png "Stored Procedure")
Settings specific to Azure SQL Database are available in the **Settings** tab of
![Key Columns](media/data-flow/keycolumn.png "Key Columns")
-The column name that you pick as the key here will be used by ADF as part of the subsequent update, upsert, delete. Therefore, you must pick a column that exists in the Sink mapping. If you wish to not write the value to this key column, then click "Skip writing key columns".
+The column name that you pick as the key here will be used by the service as part of the subsequent update, upsert, or delete operation. Therefore, you must pick a column that exists in the Sink mapping. If you don't want to write the value to this key column, click "Skip writing key columns".
-You can parameterize the key column used here for updating your target Azure SQL Database table. If you have multiple columns for a composite key, the click on "Custom Expression" and you will be able to add dynamic content using the ADF data flow expression language, which can include an array of strings with column names for a composite key.
+You can parameterize the key column used here for updating your target Azure SQL Database table. If you have multiple columns for a composite key, click "Custom Expression" and you can add dynamic content using the data flow [expression language](control-flow-expression-language-functions.md), which can include an array of strings with column names for a composite key.
**Table action:** Determines whether to recreate or remove all rows from the destination table prior to writing.
You can parameterize the key column used here for updating your target Azure SQL
**Batch size**: Controls how many rows are written in each bucket. Larger batch sizes improve compression and memory optimization, but risk out-of-memory exceptions when caching data.
-**Use TempDB:** By default, Data Factory will use a global temporary table to store data as part of the loading process. You can alternatively uncheck the "Use TempDB" option and instead, ask Data Factory to store the temporary holding table in a user database that is located in the database that is being used for this Sink.
+**Use TempDB:** By default, the service uses a global temporary table to store data as part of the loading process. You can alternatively uncheck the "Use TempDB" option and instead ask the service to store the temporary holding table in the user database that is being used for this sink.
![Use Temp DB](media/data-flow/tempdb.png "Use Temp DB")
When writing to Azure SQL DB, certain rows of data may fail due to constraints s
* Cannot insert the value NULL into column
* The INSERT statement conflicted with the CHECK constraint
-By default, a data flow run will fail on the first error it gets. You can choose to **Continue on error** that allows your data flow to complete even if individual rows have errors. Azure Data Factory provides different options for you to handle these error rows.
+By default, a data flow run will fail on the first error it gets. You can choose **Continue on error**, which allows your data flow to complete even if individual rows have errors. The service provides different options for you to handle these error rows.
**Transaction Commit:** Choose whether your data gets written in a single transaction or in batches. A single transaction provides worse performance, but no written data is visible to others until the transaction completes.
By default, a data flow run will fail on the first error it gets. You can choose
## Data type mapping for Azure SQL Database
-When data is copied from or to Azure SQL Database, the following mappings are used from Azure SQL Database data types to Azure Data Factory interim data types. To learn how the copy activity maps the source schema and data type to the sink, see [Schema and data type mappings](copy-activity-schema-and-type-mapping.md).
+When data is copied from or to Azure SQL Database, the following mappings are used from Azure SQL Database data types to Azure Data Factory interim data types. The same mappings are used by the Synapse pipeline feature, which implements Azure Data Factory directly. To learn how the copy activity maps the source schema and data type to the sink, see [Schema and data type mappings](copy-activity-schema-and-type-mapping.md).
-| Azure SQL Database data type | Azure Data Factory interim data type |
+| Azure SQL Database data type | Data Factory interim data type |
|: |: |
| bigint |Int64 |
| binary |Byte[] |
When you copy data from/to SQL Server with [Always Encrypted](/sql/relational-da
## Next steps
-For a list of data stores supported as sources and sinks by the copy activity in Azure Data Factory, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-managed-instance.md
Title: Copy and transform data in Azure SQL Managed Instance
description: Learn how to copy and transform data in Azure SQL Managed Instance by using Azure Data Factory. +
data-factory Connector Azure Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-table-storage.md
description: Learn how to copy data from supported source stores to Azure Table
+ Last updated 03/17/2021
data-factory Connector Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-cassandra.md
description: Learn how to copy data from Cassandra to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 08/12/2019
data-factory Connector Concur https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-concur.md
description: Learn how to copy data from Concur to supported sink data stores by
+ Last updated 11/25/2020
data-factory Connector Couchbase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-couchbase.md
description: Learn how to copy data from Couchbase to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 08/12/2019
data-factory Connector Db2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-db2.md
description: Learn how to copy data from DB2 to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 05/26/2020
data-factory Connector Drill https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-drill.md
description: Learn how to copy data from Drill to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 10/25/2019
data-factory Connector Dynamics Ax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-dynamics-ax.md
description: Learn how to copy data from Dynamics AX to supported sink data stor
+ Last updated 06/12/2020
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-dynamics-crm-office-365.md
Title: Copy data in Dynamics (Microsoft Dataverse)
description: Learn how to copy data from Microsoft Dynamics CRM or Microsoft Dynamics 365 (Microsoft Dataverse) to supported sink data stores or from supported source data stores to Dynamics CRM or Dynamics 365 by using a copy activity in a data factory pipeline. +
data-factory Connector File System https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-file-system.md
Title: Copy data from/to a file system by using Azure Data Factory
+ Title: Copy data from/to a file system
-description: Learn how to copy data from file system to supported sink data stores (or) from supported source data stores to file system by using Azure Data Factory.
+description: Learn how to copy data from file system to supported sink data stores (or) from supported source data stores to file system using Azure Data Factory or Azure Synapse Analytics pipelines.
+ Last updated 03/29/2021
-# Copy data to or from a file system by using Azure Data Factory
+# Copy data to or from a file system by using Azure Data Factory or Azure Synapse Analytics
> [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"]
> * [Version 1](v1/data-factory-onprem-file-system-connector.md)
> * [Current version](connector-file-system.md)

[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to copy data to and from file system. To learn about Azure Data Factory, read the [introductory article](introduction.md).
+This article outlines how to copy data to and from a file system. To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
## Supported capabilities
Specifically, this file system connector supports:
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
-The following sections provide details about properties that are used to define Data Factory entities specific to file system.
+The following sections provide details about properties that are used to define Data Factory and Synapse pipeline entities specific to file system.
## Linked service properties
The following properties are supported for file system linked service:
| type | The type property must be set to: **FileServer**. | Yes |
| host | Specifies the root path of the folder that you want to copy. Use the escape character "\" for special characters in the string. See [Sample linked service and dataset definitions](#sample-linked-service-and-dataset-definitions) for examples. | Yes |
| userId | Specify the ID of the user who has access to the server. | Yes |
-| password | Specify the password for the user (userId). Mark this field as a SecureString to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| password | Specify the password for the user (userId). Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, it uses the default Azure Integration Runtime. |No |

### Sample linked service and dataset definitions
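As a rough illustration only (the linked service name, host, and credential values below are placeholders, not sample values defined in this article), a file system linked service built from the properties above might look like this:

```json
{
    "name": "FileSystemLinkedService",
    "properties": {
        "type": "FileServer",
        "typeProperties": {
            "host": "<root path, for example \\\\server\\share>",
            "userId": "<domain>\\<user>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        },
        "connectVia": {
            "referenceName": "<integration runtime name>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```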
The following properties are supported for file system under `storeSettings` set
| ***Locate the files to copy:*** | | |
| OPTION 1: static path<br> | Copy from the given folder/file path specified in the dataset. If you want to copy all files from a folder, additionally specify `wildcardFileName` as `*`. | |
| OPTION 2: server side filter<br>- fileFilter | File server side native filter, which provides better performance than OPTION 3 wildcard filter. Use `*` to match zero or more characters and `?` to match zero or single character. Learn more about the syntax and notes from the **Remarks** under [this section](/dotnet/api/system.io.directory.getfiles#System_IO_Directory_GetFiles_System_String_System_String_System_IO_SearchOption_). | No |
-| OPTION 3: client side filter<br>- wildcardFolderPath | The folder path with wildcard characters to filter source folders. Such filter happens on ADF side, ADF enumerate the folders/files under the given path then apply the wildcard filter.<br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character); use `^` to escape if your actual folder name has wildcard or this escape char inside. <br>See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | No |
-| OPTION 3: client side filter<br>- wildcardFileName | The file name with wildcard characters under the given folderPath/wildcardFolderPath to filter source files. Such filter happens on ADF side, ADF enumerate the files under the given path then apply the wildcard filter.<br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character); use `^` to escape if your actual file name has wildcard or this escape char inside.<br>See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | Yes |
+| OPTION 3: client side filter<br>- wildcardFolderPath | The folder path with wildcard characters to filter source folders. Such filter happens within the service, which enumerates the folders/files under the given path and then applies the wildcard filter.<br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character); use `^` to escape if your actual folder name has a wildcard or this escape char inside. <br>See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | No |
+| OPTION 3: client side filter<br>- wildcardFileName | The file name with wildcard characters under the given folderPath/wildcardFolderPath to filter source files. Such filter happens within the service, which enumerates the files under the given path and then applies the wildcard filter.<br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character); use `^` to escape if your actual file name has a wildcard or this escape char inside.<br>See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | Yes |
| OPTION 3: a list of files<br>- fileListPath | Indicates to copy a given file set. Point to a text file that includes a list of files you want to copy, one file per line, which is the relative path to the path configured in the dataset.<br/>When using this option, do not specify file name in dataset. See more examples in [File list examples](#file-list-examples). |No |
| ***Additional settings:*** | | |
| recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. Note that when recursive is set to true and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink. <br>Allowed values are **true** (default) and **false**.<br>This property doesn't apply when you configure `fileListPath`. |No |
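To show how these `storeSettings` options fit together, here is a minimal, hypothetical copy activity source sketch that uses the client-side wildcard filter; the format and wildcard values are assumptions for illustration only:

```json
"source": {
    "type": "DelimitedTextSource",
    "storeSettings": {
        "type": "FileServerReadSettings",
        "recursive": true,
        "wildcardFolderPath": "<folder path with wildcards>",
        "wildcardFileName": "*.csv"
    },
    "formatSettings": {
        "type": "DelimitedTextReadSettings"
    }
}
```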
This section describes the resulting behavior of using file list path in copy ac
Assuming you have the following source folder structure and want to copy the files in bold:
-| Sample source structure | Content in FileListToCopy.txt | ADF configuration |
+| Sample source structure | Content in FileListToCopy.txt | Pipeline configuration |
| --- | --- | --- |
| root<br/>&nbsp;&nbsp;&nbsp;&nbsp;FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File2.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File3.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File5.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;Metadata<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FileListToCopy.txt | File1.csv<br>Subfolder1/File3.csv<br>Subfolder1/File5.csv | **In dataset:**<br>- Folder path: `root/FolderA`<br><br>**In copy activity source:**<br>- File list path: `root/Metadata/FileListToCopy.txt` <br><br>The file list path points to a text file in the same data store that includes a list of files you want to copy, one file per line with the relative path to the path configured in the dataset. |
To learn details about the properties, check [Delete activity](delete-activity.m
## Legacy models

>[!NOTE]
->The following models are still supported as-is for backward compatibility. You are suggested to use the new model mentioned in above sections going forward, and the ADF authoring UI has switched to generating the new model.
+>The following models are still supported as-is for backward compatibility. We recommend that you use the new model mentioned in the sections above going forward, and the authoring UI has switched to generating the new model.
### Legacy dataset model
To learn details about the properties, check [Delete activity](delete-activity.m
```

## Next steps
-For a list of data stores supported as sources and sinks by the copy activity in Azure Data Factory, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
+For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Ftp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-ftp.md
description: Learn how to copy data from an FTP server to a supported sink data store by using a copy activity in an Azure Data Factory pipeline. + Last updated 03/17/2021
data-factory Connector Github https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-github.md
description: Use GitHub to specify your Common Data Model entity references + Last updated 06/03/2020
data-factory Connector Google Adwords https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-google-adwords.md
description: Learn how to copy data from Google AdWords to supported sink data s
+ Last updated 10/25/2019
data-factory Connector Google Bigquery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-google-bigquery.md
description: Learn how to copy data from Google BigQuery to supported sink data
+ Last updated 09/04/2019
data-factory Connector Google Cloud Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-google-cloud-storage.md
description: Learn about how to copy data from Google Cloud Storage to supported sink data stores by using Azure Data Factory. + Last updated 03/17/2021
data-factory Connector Greenplum https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-greenplum.md
description: Learn how to copy data from Greenplum to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 09/04/2019
data-factory Connector Hbase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-hbase.md
description: Learn how to copy data from HBase to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 08/12/2019
data-factory Connector Hdfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-hdfs.md
description: Learn how to copy data from a cloud or on-premises HDFS source to supported sink data stores by using Copy activity in an Azure Data Factory pipeline. + Last updated 03/17/2021
data-factory Connector Hive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-hive.md
description: Learn how to copy data from Hive to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 11/17/2020
data-factory Connector Http https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-http.md
description: Learn how to copy data from a cloud or on-premises HTTP source to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 03/17/2021
data-factory Connector Hubspot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-hubspot.md
description: Learn how to copy data from HubSpot to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 12/18/2020
data-factory Connector Impala https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-impala.md
description: Learn how to copy data from Impala to supported sink data stores by using a copy activity in a data factory pipeline. + Last updated 09/04/2019
data-factory Connector Informix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-informix.md
description: Learn how to copy data from and to IBM Informix by using a copy activity in an Azure Data Factory pipeline. + Last updated 03/17/2021
data-factory Connector Jira https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-jira.md
description: Learn how to copy data from Jira to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 10/25/2019
data-factory Connector Magento https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-magento.md
description: Learn how to copy data from Magento to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 08/01/2019
data-factory Connector Mariadb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mariadb.md
description: Learn how to copy data from MariaDB to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 08/12/2019
data-factory Connector Marketo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-marketo.md
Title: Copy data from Marketo using Azure Data Factory (Preview) description: Learn how to copy data from Marketo to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- + Last updated 06/04/2020-++ # Copy data from Marketo using Azure Data Factory (Preview) [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory Connector Microsoft Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-microsoft-access.md
Title: Copy data from and to Microsoft Access description: Learn how to copy data from and to Microsoft Access by using a copy activity in an Azure Data Factory pipeline.--++ + Last updated 03/17/2021
data-factory Connector Mongodb Atlas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mongodb-atlas.md
Title: Copy data from or to MongoDB Atlas description: Learn how to copy data from MongoDB Atlas to supported sink data stores, or from supported source data stores to MongoDB Atlas, by using a copy activity in an Azure Data Factory pipeline.--++ + Last updated 06/01/2021
data-factory Connector Mongodb Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mongodb-legacy.md
description: Learn how to copy data from Mongo DB to supported sink data stores
+ Last updated 08/12/2019
data-factory Connector Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mongodb.md
Title: Copy data from or to MongoDB description: Learn how to copy data from MongoDB to supported sink data stores, or from supported source data stores to MongoDB, by using a copy activity in an Azure Data Factory pipeline.--++ + Last updated 06/01/2021
data-factory Connector Mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mysql.md
description: Learn about MySQL connector in Azure Data Factory that lets you copy data from a MySQL database to a data store supported as a sink. + Last updated 09/09/2020
data-factory Connector Netezza https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-netezza.md
description: Learn how to copy data from Netezza to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 05/28/2020
data-factory Connector Odata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-odata.md
description: Learn how to copy data from OData sources to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 03/30/2021
data-factory Connector Odbc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-odbc.md
description: Learn how to copy data from and to ODBC data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 05/10/2021
data-factory Connector Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-office-365.md
description: Learn how to copy data from Office 365 to supported sink data stores by using copy activity in an Azure Data Factory pipeline. + Last updated 10/20/2019
data-factory Connector Oracle Eloqua https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-oracle-eloqua.md
description: Learn how to copy data from Oracle Eloqua to supported sink data st
+ Last updated 08/01/2019
data-factory Connector Oracle Responsys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-oracle-responsys.md
description: Learn how to copy data from Oracle Responsys to supported sink data
+ Last updated 08/01/2019
data-factory Connector Oracle Service Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-oracle-service-cloud.md
description: Learn how to copy data from Oracle Service Cloud to supported sink
+ Last updated 08/01/2019
data-factory Connector Oracle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-oracle.md
description: Learn how to copy data from supported source stores to an Oracle database, or from Oracle to supported sink stores, by using Data Factory. + Last updated 03/17/2021
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-overview.md
Title: Azure Data Factory connector overview
+ Title: Connector overview
-description: Learn the supported connectors in Data Factory.
+description: Learn the supported connectors in Azure Data Factory and Azure Synapse Analytics pipelines.
+ Last updated 05/26/2021
-# Azure Data Factory connector overview
+# Azure Data Factory and Azure Synapse Analytics connector overview
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-Azure Data Factory supports the following data stores and formats via Copy, Data Flow, Look up, Get Metadata, and Delete activities. Click each data store to learn the supported capabilities and the corresponding configurations in details.
+Azure Data Factory and Azure Synapse Analytics pipelines support the following data stores and formats via Copy, Data Flow, Lookup, Get Metadata, and Delete activities. Click each data store to learn the supported capabilities and the corresponding configurations in detail.
## Supported data stores
Azure Data Factory supports the following data stores and formats via Copy, Data
## Integrate with more data stores
-Azure Data Factory can reach broader set of data stores than the list mentioned above. If you need to move data to/from a data store that is not in the Azure Data Factory built-in connector list, here are some extensible options:
+Azure Data Factory and Synapse pipelines can reach a broader set of data stores than the list mentioned above. If you need to move data to/from a data store that is not in the service's built-in connector list, here are some extensible options:
- For database and data warehouse, usually you can find a corresponding ODBC driver, with which you can use [generic ODBC connector](connector-odbc.md).
- For SaaS applications:
    - If it provides RESTful APIs, you can use [generic REST connector](connector-rest.md).
    - If it has OData feed, you can use [generic OData connector](connector-odata.md).
    - If it provides SOAP APIs, you can use [generic HTTP connector](connector-http.md).
    - If it has ODBC driver, you can use [generic ODBC connector](connector-odbc.md).
-- For others, check if you can load data to or expose data as any ADF supported data stores, e.g. Azure Blob/File/FTP/SFTP/etc, then let ADF pick up from there. You can invoke custom data loading mechanism via [Azure Function](control-flow-azure-function-activity.md), [Custom activity](transform-data-using-dotnet-custom-activity.md), [Databricks](transform-data-databricks-notebook.md)/[HDInsight](transform-data-using-hadoop-hive.md), [Web activity](control-flow-web-activity.md), etc.
+- For others, check if you can load data to or expose data as any supported data store, for example Azure Blob/File/FTP/SFTP, and then let the service pick up from there. You can invoke a custom data loading mechanism via [Azure Function](control-flow-azure-function-activity.md), [Custom activity](transform-data-using-dotnet-custom-activity.md), [Databricks](transform-data-databricks-notebook.md)/[HDInsight](transform-data-using-hadoop-hive.md), [Web activity](control-flow-web-activity.md), and so on.
## Supported file formats
-Azure Data Factory supports the following file formats. Refer to each article for format-based settings.
+The following file formats are supported. Refer to each article for format-based settings.
- [Avro format](format-avro.md)
- [Binary format](format-binary.md)
data-factory Connector Paypal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-paypal.md
description: Learn how to copy data from PayPal to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 08/01/2019
data-factory Connector Phoenix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-phoenix.md
description: Learn how to copy data from Phoenix to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 09/04/2019
data-factory Connector Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-postgresql.md
description: Learn how to copy data from PostgreSQL to supported sink data store
+ Last updated 02/19/2020
data-factory Connector Presto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-presto.md
description: Learn how to copy data from Presto to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 12/18/2020
data-factory Connector Quickbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-quickbooks.md
description: Learn how to copy data from QuickBooks Online to supported sink dat
+ Last updated 01/15/2021
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-rest.md
Title: Copy data from and to a REST endpoint by using Azure Data Factory
+ Title: Copy data from and to a REST endpoint
-description: Learn how to copy data from a cloud or on-premises REST source to supported sink data stores, or from supported source data store to a REST sink by using a copy activity in an Azure Data Factory pipeline.
+description: Learn how to copy data from a cloud or on-premises REST source to supported sink data stores, or from supported source data store to a REST sink by using the copy activity in Azure Data Factory or Azure Synapse Analytics pipelines.
+ Last updated 07/27/2021
+# Copy data from and to a REST endpoint using Azure Data Factory or Azure Synapse Analytics
-# Copy data from and to a REST endpoint by using Azure Data Factory
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to use Copy Activity in Azure Data Factory to copy data from and to a REST endpoint. The article builds on [Copy Activity in Azure Data Factory](copy-activity-overview.md), which presents a general overview of Copy Activity.
+This article outlines how to use the Copy Activity in Azure Data Factory and Azure Synapse Analytics pipelines to copy data from and to a REST endpoint. The article builds on [Copy Activity in Azure Data Factory and Azure Synapse pipelines](copy-activity-overview.md), which presents a general overview of Copy Activity.
-The difference among this REST connector, [HTTP connector](connector-http.md), and the [Web table connector](connector-web-table.md) are:
+The differences between this REST connector, [HTTP connector](connector-http.md), and the [Web table connector](connector-web-table.md) are:
- **REST connector** specifically supports copying data from RESTful APIs.
- **HTTP connector** is generic for retrieving data from any HTTP endpoint, for example, to download a file. Before this REST connector was available, you might have used the HTTP connector to copy data from a RESTful API, which is supported but less functional compared to the REST connector.
Specifically, this generic REST connector supports:
- For REST as source, copying the REST JSON response [as-is](#export-json-response-as-is) or parsing it by using [schema mapping](copy-activity-schema-and-type-mapping.md#schema-mapping). Only response payload in **JSON** is supported.

> [!TIP]
-> To test a request for data retrieval before you configure the REST connector in Data Factory, learn about the API specification for header and body requirements. You can use tools like Postman or a web browser to validate.
+> To test a request for data retrieval before you configure the REST connector, learn about the API specification for header and body requirements. You can use tools like Postman or a web browser to validate.
## Prerequisites
Specifically, this generic REST connector supports:
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
-The following sections provide details about properties you can use to define Data Factory entities that are specific to the REST connector.
+The following sections provide details about properties you can use to define entities that are specific to the REST connector.
## Linked service properties
Set the **authenticationType** property to **Basic**. In addition to the generic
| Property | Description | Required |
|: |: |: |
| userName | The user name to use to access the REST endpoint. | Yes |
-| password | The password for the user (the **userName** value). Mark this field as a **SecureString** type to store it securely in Data Factory. You can also [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| password | The password for the user (the **userName** value). Mark this field as a **SecureString** type to store it securely. You can also [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
**Example**
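A minimal sketch of a REST linked service with basic authentication might look like the following; the linked service name, URL, and credential values are placeholder assumptions, not values from this article:

```json
{
    "name": "RESTLinkedService",
    "properties": {
        "type": "RestService",
        "typeProperties": {
            "url": "<REST endpoint URL>",
            "enableServerCertificateValidation": true,
            "authenticationType": "Basic",
            "userName": "<user name>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        }
    }
}
```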
Set the **authenticationType** property to **AadServicePrincipal**. In addition
| Property | Description | Required |
|: |: |: |
| servicePrincipalId | Specify the Azure Active Directory application's client ID. | Yes |
-| servicePrincipalKey | Specify the Azure Active Directory application's key. Mark this field as a **SecureString** to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| servicePrincipalKey | Specify the Azure Active Directory application's key. Mark this field as a **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering the mouse in the top-right corner of the Azure portal. | Yes |
| aadResourceId | Specify the AAD resource you are requesting for authorization, for example, `https://management.core.windows.net`.| Yes |
-| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your AAD application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory's cloud environment is used. | No |
+| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your AAD application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory or Synapse pipeline's cloud environment is used. | No |
**Example**
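For orientation only, a service principal configuration could be sketched roughly as follows; every value, and the linked service name itself, is a placeholder rather than anything defined in this article:

```json
{
    "name": "RESTLinkedService",
    "properties": {
        "type": "RestService",
        "typeProperties": {
            "url": "<REST endpoint URL>",
            "authenticationType": "AadServicePrincipal",
            "servicePrincipalId": "<service principal ID>",
            "servicePrincipalKey": {
                "type": "SecureString",
                "value": "<service principal key>"
            },
            "tenant": "<tenant name or ID>",
            "aadResourceId": "<AAD resource URL, for example https://management.core.windows.net>"
        }
    }
}
```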
To copy data from REST endpoint to tabular sink, refer to [schema mapping](copy-
## Next steps
-For a list of data stores that Copy Activity supports as sources and sinks in Azure Data Factory, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
+For a list of data stores that Copy Activity supports as sources and sinks in Azure Data Factory and Synapse pipelines, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Salesforce Marketing Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-salesforce-marketing-cloud.md
description: Learn how to copy data from Salesforce Marketing Cloud to supported
+ Last updated 07/17/2020
data-factory Connector Salesforce Service Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-salesforce-service-cloud.md
description: Learn how to copy data from Salesforce Service Cloud to supported s
+ Last updated 03/17/2021
data-factory Connector Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-salesforce.md
description: Learn how to copy data from Salesforce to supported sink data store
+ Last updated 03/17/2021
data-factory Connector Sap Business Warehouse Open Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-business-warehouse-open-hub.md
description: Learn how to copy data from SAP Business Warehouse (BW) via Open Hu
+ Last updated 07/30/2021
data-factory Connector Sap Business Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-business-warehouse.md
description: Learn how to copy data from SAP Business Warehouse to supported sin
+ Last updated 09/04/2019
data-factory Connector Sap Cloud For Customer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-cloud-for-customer.md
description: Learn how to copy data from SAP Cloud for Customer to supported sin
+ Last updated 03/17/2021
data-factory Connector Sap Ecc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-ecc.md
description: Learn how to copy data from SAP ECC to supported sink data stores b
+ Last updated 10/28/2020
data-factory Connector Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-hana.md
description: Learn how to copy data from SAP HANA to supported sink data stores
+ Last updated 04/22/2020
data-factory Connector Sap Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-table.md
description: Learn how to copy data from an SAP table to supported sink data sto
+ Last updated 07/30/2021
data-factory Connector Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-servicenow.md
description: Learn how to copy data from ServiceNow to supported sink data store
+ Last updated 08/01/2019
data-factory Connector Sftp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sftp.md
description: Learn how to copy data from and to SFTP server by using Azure Data
+ Last updated 03/17/2021
data-factory Connector Sharepoint Online List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sharepoint-online-list.md
description: Learn how to copy data from SharePoint Online List to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 05/19/2020
data-factory Connector Shopify https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-shopify.md
description: Learn how to copy data from Shopify to supported sink data stores b
+ Last updated 08/01/2019
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-snowflake.md
description: Learn how to copy and transform data in Snowflake by using Data Fac
+ Last updated 03/16/2021
data-factory Connector Spark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-spark.md
description: Learn how to copy data from Spark to supported sink data stores by
+ Last updated 09/04/2019
data-factory Connector Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sql-server.md
Title: Copy and transform data to and from SQL Server
-description: Learn about how to copy and transform data to and from SQL Server database that is on-premises or in an Azure VM by using Azure Data Factory.
+description: Learn about how to copy and transform data to and from SQL Server database that is on-premises or in an Azure VM by using Azure Data Factory or Azure Synapse Analytics pipelines.
+ Last updated 06/08/2021
-# Copy and transform data to and from SQL Server by using Azure Data Factory
+# Copy and transform data to and from SQL Server by using Azure Data Factory or Azure Synapse Analytics
> [!div class="op_single_selector" title1="Select the version of Azure Data Factory that you're using:"]
> * [Version 1](v1/data-factory-sqlserver-connector.md)
> * [Current version](connector-sql-server.md)

[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to use the copy activity in Azure Data Factory to copy data from and to SQL Server database and use Data Flow to transform data in SQL Server database. To learn about Azure Data Factory, read the [introductory article](introduction.md).
+This article outlines how to use the copy activity in Azure Data Factory and Azure Synapse pipelines to copy data from and to a SQL Server database, and how to use Data Flow to transform data in a SQL Server database. To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
## Supported capabilities
Specifically, this SQL Server connector supports:
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
-The following sections provide details about properties that are used to define Data Factory entities specific to the SQL Server database connector.
+The following sections provide details about properties that are used to define Data Factory and Synapse pipeline entities specific to the SQL Server database connector.
## Linked service properties
The following properties are supported for the SQL Server linked service:
| type | The type property must be set to **SqlServer**. | Yes |
| connectionString |Specify **connectionString** information that's needed to connect to the SQL Server database by using either SQL authentication or Windows authentication. Refer to the following samples.<br/>You also can put a password in Azure Key Vault. If it's SQL authentication, pull the `password` configuration out of the connection string. For more information, see the JSON example following the table and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
| userName |Specify a user name if you use Windows authentication. An example is **domainname\\username**. |No |
-| password |Specify a password for the user account you specified for the user name. Mark this field as **SecureString** to store it securely in Azure Data Factory. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |No |
+| password |Specify a password for the user account you specified for the user name. Mark this field as **SecureString** to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |No |
| alwaysEncryptedSettings | Specify **alwaysencryptedsettings** information that's needed to enable Always Encrypted to protect sensitive data stored in SQL server by using either managed identity or service principal. For more information, see the JSON example following the table and [Using Always Encrypted](#using-always-encrypted) section. If not specified, the default always encrypted setting is disabled. |No |
| connectVia | This [integration runtime](concepts-integration-runtime.md) is used to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, the default Azure integration runtime is used. |No |
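A rough sketch of a SQL Server linked service that uses SQL authentication follows; the linked service name, server, database, and integration runtime values are placeholder assumptions:

```json
{
    "name": "SqlServerLinkedService",
    "properties": {
        "type": "SqlServer",
        "typeProperties": {
            "connectionString": "Data Source=<server>;Initial Catalog=<database>;Integrated Security=False;User ID=<user>;Password=<password>;"
        },
        "connectVia": {
            "referenceName": "<self-hosted integration runtime name>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```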
To copy data to SQL Server, set the sink type in the copy activity to **SqlSink*
| storedProcedureTableTypeParameterName |The parameter name of the table type specified in the stored procedure. |No |
| sqlWriterTableType |The table type name to be used in the stored procedure. The copy activity makes the data being moved available in a temp table with this table type. Stored procedure code can then merge the data that's being copied with existing data. |No |
| storedProcedureParameters |Parameters for the stored procedure.<br/>Allowed values are name and value pairs. Names and casing of parameters must match the names and casing of the stored procedure parameters. | No |
-| writeBatchSize |Number of rows to insert into the SQL table *per batch*.<br/>Allowed values are integers for the number of rows. By default, Azure Data Factory dynamically determines the appropriate batch size based on the row size. |No |
+| writeBatchSize |Number of rows to insert into the SQL table *per batch*.<br/>Allowed values are integers for the number of rows. By default, the service dynamically determines the appropriate batch size based on the row size. |No |
| writeBatchTimeout |This property specifies the wait time for the batch insert operation to complete before it times out.<br/>Allowed values are for the timespan. An example is "00:30:00" for 30 minutes. If no value is specified, the timeout defaults to "02:00:00". |No |
| maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No |
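For illustration, a minimal copy activity sink combining a few of the properties above might be sketched like this; the batch values are assumed examples, not recommendations:

```json
"sink": {
    "type": "SqlSink",
    "writeBatchSize": 10000,
    "writeBatchTimeout": "00:30:00",
    "maxConcurrentConnections": 2
}
```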
The SQL Server connector in copy activity provides built-in data partitioning to
![Screenshot of partition options](./media/connector-sql-server/connector-sql-partition-options.png)
-When you enable partitioned copy, copy activity runs parallel queries against your SQL Server source to load data by partitions. The parallel degree is controlled by the [`parallelCopies`](copy-activity-performance-features.md#parallel-copy) setting on the copy activity. For example, if you set `parallelCopies` to four, Data Factory concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a portion of data from your SQL Server.
+When you enable partitioned copy, copy activity runs parallel queries against your SQL Server source to load data by partitions. The parallel degree is controlled by the [`parallelCopies`](copy-activity-performance-features.md#parallel-copy) setting on the copy activity. For example, if you set `parallelCopies` to four, the service concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a portion of data from your SQL Server.
We recommend that you enable parallel copy with data partitioning, especially when you load a large amount of data from your SQL Server. The following are suggested configurations for different scenarios. When copying data into a file-based data store, it's recommended to write to a folder as multiple files (only specify the folder name), in which case the performance is better than writing to a single file.

| Scenario | Suggested settings |
| --- | --- |
-| Full load from large table, with physical partitions. | **Partition option**: Physical partitions of table. <br><br/>During execution, Data Factory automatically detects the physical partitions, and copies data by partitions. <br><br/>To check if your table has physical partition or not, you can refer to [this query](#sample-query-to-check-physical-partition). |
-| Full load from large table, without physical partitions, while with an integer or datetime column for data partitioning. | **Partition options**: Dynamic range partition.<br>**Partition column** (optional): Specify the column used to partition data. If not specified, the primary key column is used.<br/>**Partition upper bound** and **partition lower bound** (optional): Specify if you want to determine the partition stride. This is not for filtering the rows in table, all rows in the table will be partitioned and copied. If not specified, copy activity auto detects the values and it can take long time depending on MIN and MAX values. It is recommended to provide upper bound and lower bound. <br><br>For example, if your partition column "ID" has values range from 1 to 100, and you set the lower bound as 20 and the upper bound as 80, with parallel copy as 4, Data Factory retrieves data by 4 partitions - IDs in range <=20, [21, 50], [51, 80], and >=81, respectively. |
-| Load a large amount of data by using a custom query, without physical partitions, while with an integer or date/datetime column for data partitioning. | **Partition options**: Dynamic range partition.<br>**Query**: `SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`.<br>**Partition column**: Specify the column used to partition data.<br>**Partition upper bound** and **partition lower bound** (optional): Specify if you want to determine the partition stride. This is not for filtering the rows in table, all rows in the query result will be partitioned and copied. If not specified, copy activity auto detect the value.<br><br>During execution, Data Factory replaces `?AdfRangePartitionColumnName` with the actual column name and value ranges for each partition, and sends to SQL Server. <br>For example, if your partition column "ID" has values range from 1 to 100, and you set the lower bound as 20 and the upper bound as 80, with parallel copy as 4, Data Factory retrieves data by 4 partitions- IDs in range <=20, [21, 50], [51, 80], and >=81, respectively. <br><br>Here are more sample queries for different scenarios:<br> 1. Query the whole table: <br>`SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition`<br> 2. Query from a table with column selection and additional where-clause filters: <br>`SELECT <column_list> FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`<br> 3. Query with subqueries: <br>`SELECT <column_list> FROM (<your_sub_query>) AS T WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`<br> 4. Query with partition in subquery: <br>`SELECT <column_list> FROM (SELECT <your_sub_query_column_list> FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition) AS T`
+| Full load from large table, with physical partitions. | **Partition option**: Physical partitions of table. <br><br/>During execution, the service automatically detects the physical partitions and copies data by partitions. <br><br/>To check whether your table has physical partitions, refer to [this query](#sample-query-to-check-physical-partition). |
+| Full load from large table, without physical partitions, but with an integer or datetime column for data partitioning. | **Partition options**: Dynamic range partition.<br>**Partition column** (optional): Specify the column used to partition data. If not specified, the primary key column is used.<br/>**Partition upper bound** and **partition lower bound** (optional): Specify if you want to determine the partition stride. This isn't for filtering the rows in the table; all rows in the table will be partitioned and copied. If not specified, copy activity automatically detects the values, which can take a long time depending on the MIN and MAX values. It's recommended to provide the upper bound and lower bound. <br><br>For example, if your partition column "ID" has values ranging from 1 to 100, and you set the lower bound as 20 and the upper bound as 80, with parallel copy as 4, the service retrieves data by 4 partitions - IDs in range <=20, [21, 50], [51, 80], and >=81, respectively. |
+| Load a large amount of data by using a custom query, without physical partitions, but with an integer or date/datetime column for data partitioning. | **Partition options**: Dynamic range partition.<br>**Query**: `SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`.<br>**Partition column**: Specify the column used to partition data.<br>**Partition upper bound** and **partition lower bound** (optional): Specify if you want to determine the partition stride. This isn't for filtering the rows in the table; all rows in the query result will be partitioned and copied. If not specified, copy activity automatically detects the value.<br><br>During execution, the service replaces `?AdfRangePartitionColumnName` with the actual column name and value ranges for each partition, and sends them to SQL Server. <br>For example, if your partition column "ID" has values ranging from 1 to 100, and you set the lower bound as 20 and the upper bound as 80, with parallel copy as 4, the service retrieves data by 4 partitions - IDs in range <=20, [21, 50], [51, 80], and >=81, respectively. <br><br>Here are more sample queries for different scenarios:<br> 1. Query the whole table: <br>`SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition`<br> 2. Query from a table with column selection and additional where-clause filters: <br>`SELECT <column_list> FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`<br> 3. Query with subqueries: <br>`SELECT <column_list> FROM (<your_sub_query>) AS T WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`<br> 4. Query with partition in subquery: <br>`SELECT <column_list> FROM (SELECT <your_sub_query_column_list> FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition) AS T`
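As a minimal sketch of how the dynamic range option can be expressed in a copy activity definition, assuming the same illustrative "ID" column and bounds used in the rows above:

```json
"typeProperties": {
    "source": {
        "type": "SqlSource",
        "partitionOption": "DynamicRange",
        "partitionSettings": {
            "partitionColumnName": "ID",
            "partitionLowerBound": "20",
            "partitionUpperBound": "80"
        }
    },
    "parallelCopies": 4
}
```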
| Best practices to load data with partition option:
When you copy data into SQL Server, you might require different write behavior:
- [Overwrite](#overwrite-the-entire-table): I want to reload the entire dimension table each time.
- [Write with custom logic](#write-data-with-custom-logic): I need extra processing before the final insertion into the destination table.
-See the respective sections for how to configure in Azure Data Factory and best practices.
+See the respective sections for how to configure these behaviors, along with best practices.
### Append data
-Appending data is the default behavior of this SQL Server sink connector. Azure Data Factory does a bulk insert to write to your table efficiently. You can configure the source and sink accordingly in the copy activity.
+Appending data is the default behavior of this SQL Server sink connector. The service does a bulk insert to write to your table efficiently. You can configure the source and sink accordingly in the copy activity.
### Upsert data
Appending data is the default behavior of this SQL Server sink connector. Azure
Copy activity currently doesn't natively support loading data into a database temporary table. There is an advanced way to set it up with a combination of multiple activities; refer to [Optimize SQL Database Bulk Upsert scenarios](https://github.com/scoriani/azuresqlbulkupsert). The following shows a sample that uses a permanent table as staging.
-As an example, in Azure Data Factory, you can create a pipeline with a **Copy activity** chained with a **Stored Procedure activity**. The former copies data from your source store into a SQL Server staging table, for example, **UpsertStagingTable**, as the table name in the dataset. Then the latter invokes a stored procedure to merge source data from the staging table into the target table and clean up the staging table.
+As an example, you can create a pipeline with a **Copy activity** chained with a **Stored Procedure activity**. The former copies data from your source store into a SQL Server staging table (for example, **UpsertStagingTable** as the table name in the dataset). Then the latter invokes a stored procedure to merge source data from the staging table into the target table and clean up the staging table.
![Upsert](./media/connector-azure-sql-database/azure-sql-database-upsert.png)
END
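Here is a rough sketch of the chaining described above. The pipeline, activity, dataset, linked service, and stored procedure names are hypothetical; the source type assumes a SQL source and would change with your actual source store, and the merge stored procedure is assumed to already exist in the target database.

```json
{
    "name": "CopyThenMergePipeline",
    "properties": {
        "activities": [
            {
                "name": "CopyToUpsertStagingTable",
                "type": "Copy",
                "inputs": [ { "referenceName": "SourceDataset", "type": "DatasetReference" } ],
                "outputs": [ { "referenceName": "UpsertStagingTableDataset", "type": "DatasetReference" } ],
                "typeProperties": {
                    "source": { "type": "SqlSource" },
                    "sink": { "type": "SqlSink" }
                }
            },
            {
                "name": "MergeStagingIntoTarget",
                "type": "SqlServerStoredProcedure",
                "dependsOn": [ { "activity": "CopyToUpsertStagingTable", "dependencyConditions": [ "Succeeded" ] } ],
                "linkedServiceName": { "referenceName": "SqlServerLinkedService", "type": "LinkedServiceReference" },
                "typeProperties": { "storedProcedureName": "spMergeStagingIntoTarget" }
            }
        ]
    }
}
```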
### Overwrite the entire table
-You can configure the **preCopyScript** property in a copy activity sink. In this case, for each copy activity that runs, Azure Data Factory runs the script first. Then it runs the copy to insert the data. For example, to overwrite the entire table with the latest data, specify a script to first delete all the records before you bulk load the new data from the source.
+You can configure the **preCopyScript** property in a copy activity sink. In this case, for each copy activity that runs, the service runs the script first. Then it runs the copy to insert the data. For example, to overwrite the entire table with the latest data, specify a script to first delete all the records before you bulk load the new data from the source.
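For example, a sink that clears a hypothetical dimension table before each bulk load might look like this minimal sketch:

```json
"sink": {
    "type": "SqlSink",
    "preCopyScript": "DELETE FROM dbo.DimCustomer"
}
```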
### Write data with custom logic
The following sample shows how to use a stored procedure to do an upsert into a
END ```
-3. In Azure Data Factory, define the **SQL sink** section in the copy activity as follows:
+3. Define the **SQL sink** section in the copy activity as follows:
```json
"sink": {
```
The following sample shows how to use a stored procedure to do an upsert into a
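Because the full sink definition is abbreviated above, here is a minimal sketch of what such a stored-procedure sink section might look like; the stored procedure, table type, and parameter names are hypothetical placeholders for the objects created in the earlier steps.

```json
"sink": {
    "type": "SqlSink",
    "sqlWriterStoredProcedureName": "spUpsertTargetTable",
    "sqlWriterTableType": "TargetTableType",
    "storedProcedureTableTypeParameterName": "sourceRows"
}
```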
When transforming data in mapping data flow, you can read and write to tables from SQL Server Database. For more information, see the [source transformation](data-flow-source.md) and [sink transformation](data-flow-sink.md) in mapping data flows. > [!NOTE]
-> To access on premise SQL Server, you need to use Azure Data Factory [Managed Virtual Network](managed-virtual-network-private-endpoint.md) using private endpoint. Refer to this [tutorial](tutorial-managed-virtual-network-on-premise-sql-server.md) for detailed steps.
+> To access an on-premises SQL Server, you need to use the Azure Data Factory or Synapse workspace [Managed Virtual Network](managed-virtual-network-private-endpoint.md) with a private endpoint. Refer to this [tutorial](tutorial-managed-virtual-network-on-premise-sql-server.md) for detailed steps.
### Source transformation
IncomingStream sink(allowSchemaDrift: true,
## Data type mapping for SQL Server
-When you copy data from and to SQL Server, the following mappings are used from SQL Server data types to Azure Data Factory interim data types. To learn how the copy activity maps the source schema and data type to the sink, see [Schema and data type mappings](copy-activity-schema-and-type-mapping.md).
+When you copy data from and to SQL Server, the following mappings are used from SQL Server data types to Azure Data Factory interim data types. The same mappings apply to Synapse pipelines. To learn how the copy activity maps the source schema and data type to the sink, see [Schema and data type mappings](copy-activity-schema-and-type-mapping.md).
-| SQL Server data type | Azure Data Factory interim data type |
+| SQL Server data type | Data Factory interim data type |
|: |: |
| bigint |Int64 |
| binary |Byte[] |
When you copy data from/to SQL Server with [Always Encrypted](/sql/relational-da
6. **Verify connection**: To connect to SQL Server by using a fully qualified name, use SQL Server Management Studio from a different machine. An example is `"<machine>.<domain>.corp.<company>.com,1433"`. ## Next steps
-For a list of data stores supported as sources and sinks by the copy activity in Azure Data Factory, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Square https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-square.md
description: Learn how to copy data from Square to supported sink data stores by
+ Last updated 08/03/2020
data-factory Connector Sybase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sybase.md
description: Learn how to copy data from Sybase to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 06/10/2020
data-factory Connector Teradata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-teradata.md
description: The Teradata Connector of the Data Factory service lets you copy data from a Teradata Vantage to data stores supported by Data Factory as sinks. + Last updated 01/22/2021
data-factory Connector Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-troubleshoot-guide.md
Title: Troubleshoot Azure Data Factory connectors
+ Title: Troubleshoot connectors
-description: Learn how to troubleshoot connector issues in Azure Data Factory.
+description: Learn how to troubleshoot connector issues in Azure Data Factory and Azure Synapse Analytics.
+ Last updated 07/30/2021
-# Troubleshoot Azure Data Factory connectors
+# Troubleshoot Azure Data Factory and Azure Synapse Analytics connectors
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article explores common ways to troubleshoot problems with Azure Data Factory connectors.
+This article explores common ways to troubleshoot problems with Azure Data Factory and Azure Synapse connectors.
## Azure Blob Storage
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Symptoms**: When you import a schema for Azure Cosmos DB for column mapping, some columns are missing. -- **Cause**: Data Factory infers the schema from the first 10 Azure Cosmos DB documents. If some document columns or properties don't contain values, the schema isn't detected by Data Factory and consequently isn't displayed.
+- **Cause**: Azure Data Factory and Synapse pipelines infer the schema from the first 10 Azure Cosmos DB documents. If some document columns or properties don't contain values, the schema isn't detected and consequently isn't displayed.
- **Resolution**: You can tune the query as shown in the following code to force the column values to be displayed in the result set with empty values. Assume that the *impossible* column is missing in the first 10 documents. Alternatively, you can manually add the column for mapping.
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Cause**: One possible cause is that the service principal or managed identity you use doesn't have permission to access certain folders or files. -- **Resolution**: Grant appropriate permissions to all the folders and subfolders you need to copy. For more information, see [Copy data to or from Azure Data Lake Storage Gen1 using Azure Data Factory](connector-azure-data-lake-store.md#linked-service-properties).
+- **Resolution**: Grant appropriate permissions to all the folders and subfolders you need to copy. For more information, see [Copy data to or from Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#linked-service-properties).
### Error message: Failed to get access token by using service principal. ADAL Error: service_unavailable
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
| Cause analysis | Recommendation |
| :-- | :-- |
| If Azure Data Lake Storage Gen2 throws an error indicating that some operation failed.| Check the detailed error message thrown by Azure Data Lake Storage Gen2. If the error is a transient failure, retry the operation. For further help, contact Azure Storage support, and provide the request ID in the error message. |
- | If the error message contains the string "Forbidden", the service principal or managed identity you use might not have sufficient permission to access Azure Data Lake Storage Gen2. | To troubleshoot this error, see [Copy and transform data in Azure Data Lake Storage Gen2 by using Azure Data Factory](./connector-azure-data-lake-storage.md#service-principal-authentication). |
+ | If the error message contains the string "Forbidden", the service principal or managed identity you use might not have sufficient permission to access Azure Data Lake Storage Gen2. | To troubleshoot this error, see [Copy and transform data in Azure Data Lake Storage Gen2](./connector-azure-data-lake-storage.md#service-principal-authentication). |
| If the error message contains the string "InternalServerError", the error is returned by Azure Data Lake Storage Gen2. | The error might be caused by a transient failure. If so, retry the operation. If the issue persists, contact Azure Storage support and provide the request ID from the error message. | ### Request to Azure Data Lake Storage Gen2 account caused a timeout error
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
| For Azure SQL, if the error message contains an SQL error code such as "SqlErrorNumber=[errorcode]", see the Azure SQL troubleshooting guide. | For a recommendation, see [Troubleshoot connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance](../azure-sql/database/troubleshoot-common-errors-issues.md). |
| Check to see whether port 1433 is in the firewall allowlist. | For more information, see [Ports used by SQL Server](/sql/sql-server/install/configure-the-windows-firewall-to-allow-sql-server-access#ports-used-by-). |
| If the error message contains the string "SqlException", the error from SQL Database indicates that some specific operation failed. | For more information, search by SQL error code in [Database engine errors](/sql/relational-databases/errors-events/database-engine-events-and-errors). For further help, contact Azure SQL support. |
- | If this is a transient issue (for example, an instable network connection), add retry in the activity policy to mitigate. | For more information, see [Pipelines and activities in Azure Data Factory](./concepts-pipelines-activities.md#activity-policy). |
+ | If this is a transient issue (for example, an unstable network connection), add retry in the activity policy to mitigate. | For more information, see [Pipelines and activities](./concepts-pipelines-activities.md#activity-policy). |
| If the error message contains the string "Client with IP address '...' is not allowed to access the server", and you're trying to connect to Azure SQL Database, the error is usually caused by an Azure SQL Database firewall issue. | In the Azure SQL Server firewall configuration, enable the **Allow Azure services and resources to access this server** option. For more information, see [Azure SQL Database and Azure Synapse IP firewall rules](../azure-sql/database/firewall-configure.md). |

### Error code: SqlOperationFailed
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
| :-- | :-- |
| If the error message contains the string "SqlException", SQL Database throws an error indicating some specific operation failed. | If the SQL error is not clear, try to alter the database to the latest compatibility level '150'. It can throw the latest version SQL errors. For more information, see the [documentation](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level#backwardCompat). <br/> For more information about troubleshooting SQL issues, search by SQL error code in [Database engine errors](/sql/relational-databases/errors-events/database-engine-events-and-errors). For further help, contact Azure SQL support. |
| If the error message contains the string "PdwManagedToNativeInteropException", it's usually caused by a mismatch between the source and sink column sizes. | Check the size of both the source and sink columns. For further help, contact Azure SQL support. |
- | If the error message contains the string "InvalidOperationException", it's usually caused by invalid input data. | To identify which row has encountered the problem, enable the fault tolerance feature on the copy activity, which can redirect problematic rows to the storage for further investigation. For more information, see [Fault tolerance of copy activity in Azure Data Factory](./copy-activity-fault-tolerance.md). |
+ | If the error message contains the string "InvalidOperationException", it's usually caused by invalid input data. | To identify which row has encountered the problem, enable the fault tolerance feature on the copy activity, which can redirect problematic rows to the storage for further investigation. For more information, see [Fault tolerance of copy activity](./copy-activity-fault-tolerance.md). |
### Error code: SqlUnauthorizedAccess
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Cause**: SQL Bulk Copy failed because it received an invalid column length from the bulk copy program utility (bcp) client. -- **Recommendation**: To identify which row has encountered the problem, enable the fault tolerance feature on the copy activity. This can redirect problematic rows to the storage for further investigation. For more information, see [Fault tolerance of copy activity in Azure Data Factory](./copy-activity-fault-tolerance.md).
+- **Recommendation**: To identify which row has encountered the problem, enable the fault tolerance feature on the copy activity. This can redirect problematic rows to the storage for further investigation. For more information, see [Fault tolerance of copy activity](./copy-activity-fault-tolerance.md).
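As a minimal sketch, enabling fault tolerance with row redirection can look like the following in a copy activity's type properties; the linked service name and path are hypothetical, and the source and sink types depend on your stores.

```json
"typeProperties": {
    "source": { "type": "SqlSource" },
    "sink": { "type": "SqlSink" },
    "enableSkipIncompatibleRow": true,
    "redirectIncompatibleRowSettings": {
        "linkedServiceName": { "referenceName": "AzureBlobStorageLinkedService", "type": "LinkedServiceReference" },
        "path": "errorlogs/copyactivity"
    }
}
```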
### Error code: SqlConnectionIsClosed
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Message**: `Error thrown from driver. Sql code: '%code;'` -- **Cause**: If the error message contains the string "SQLSTATE=51002 SQLCODE=-805", follow the "Tip" in [Copy data from DB2 by using Azure Data Factory](./connector-db2.md#linked-service-properties).
+- **Cause**: If the error message contains the string "SQLSTATE=51002 SQLCODE=-805", follow the "Tip" in [Copy data from DB2](./connector-db2.md#linked-service-properties).
- **Recommendation**: Try to set "NULLID" in the `packageCollection` property.
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
| Cause analysis | Recommendation |
| :-- | :-- |
| The problematic row's column count is larger than the first row's column count. It might be caused by a data issue or incorrect column delimiter or quote char settings. | Get the row count from the error message, check the row's column, and fix the data. |
- | If the expected column count is "1" in an error message, you might have specified wrong compression or format settings, which caused Data Factory to parse your files incorrectly. | Check the format settings to make sure they match your source files. |
+ | If the expected column count is "1" in an error message, you might have specified the wrong compression or format settings, which caused the files to be parsed incorrectly. | Check the format settings to make sure they match your source files. |
| If your source is a folder, the files under the specified folder might have a different schema. | Make sure that the files in the specified folder have an identical schema. |
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Symptoms**: Some columns are missing when you import a schema or preview data. Error message: `The valid structure information (column name and type) are required for Dynamics source.` -- **Cause**: This issue is by design, because Data Factory is unable to show columns that contain no values in the first 10 records. Make sure that the columns you've added are in the correct format.
+- **Cause**: This issue is by design, because Data Factory and Synapse pipelines are unable to show columns that contain no values in the first 10 records. Make sure that the columns you've added are in the correct format.
- **Recommendation**: Manually add the columns in the mapping tab.
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Message**: `Failed to read data from ftp: The remote server returned an error: 227 Entering Passive Mode (*,*,*,*,*,*).` -- **Cause**: Port range between 1024 to 65535 is not open for data transfer under passive mode that ADF supports.
+- **Cause**: The port range 1024 to 65535 is not open for data transfer in the passive mode supported by the data factory or Synapse pipeline.
- **Recommendation**: Check the firewall settings of the target server. Open ports 1024-65535, or the port range specified on the FTP server, to the self-hosted IR or Azure IR IP address.
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Message**: `Failed to read data from http server. Check the error from http server:%message;` -- **Cause**: This error occurs when Azure Data Factory talks to the HTTP server, but the HTTP request operation fails.
+- **Cause**: This error occurs when a data factory or a Synapse pipeline talks to the HTTP server, but the HTTP request operation fails.
- **Recommendation**: Check the HTTP status code in the error message, and fix the remote server issue.
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Message**: `Hour, Minute, and Second parameters describe an un-representable DateTime.` -- **Cause**: In Data Factory, DateTime values are supported in the range from 0001-01-01 00:00:00 to 9999-12-31 23:59:59. However, Oracle supports a wider range of DateTime values, such as the BC century or min/sec>59, which leads to failure in Data Factory.
+- **Cause**: In Azure Data Factory and Synapse pipelines, DateTime values are supported in the range from 0001-01-01 00:00:00 to 9999-12-31 23:59:59. However, Oracle supports a wider range of DateTime values, such as the BC century or min/sec>59, which leads to failure.
- **Recommendation**:
- To see whether the value in Oracle is in the range of Data Factory, run `select dump(<column name>)`.
+ To see whether the value in Oracle is in the supported range of dates, run `select dump(<column name>)`.
To learn the byte sequence in the result, see [How are dates stored in Oracle?](https://stackoverflow.com/questions/13568193/how-are-dates-stored-in-oracle).
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Message**: `Unsupported Parquet type. PrimitiveType: %primitiveType; OriginalType: %originalType;.` -- **Cause**: The Parquet format is not supported in Azure Data Factory.
+- **Cause**: This Parquet type is not supported in Azure Data Factory and Synapse pipelines.
-- **Recommendation**: Double-check the source data by going to [Supported file formats and compression codecs by copy activity in Azure Data Factory](./supported-file-formats-and-compression-codecs.md).
+- **Recommendation**: Double-check the source data by going to [Supported file formats and compression codecs by copy activity](./supported-file-formats-and-compression-codecs.md).
### Error code: ParquetMissedDecimalPrecisionScale
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Cause**: The data can't be converted into the type that's specified in mappings.source. -- **Recommendation**: Double-check the source data or specify the correct data type for this column in the copy activity column mapping. For more information, see [Supported file formats and compression codecs by copy activity in Azure Data Factory](./supported-file-formats-and-compression-codecs.md).
+- **Recommendation**: Double-check the source data or specify the correct data type for this column in the copy activity column mapping. For more information, see [Supported file formats and compression codecs by the copy activity](./supported-file-formats-and-compression-codecs.md).
### Error code: ParquetDataCountNotMatchColumnCount
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Message**: `Rest Endpoint responded with Failure from server. Check the error from server:%message;` -- **Cause**: This error occurs when Azure Data Factory talks to the REST endpoint over HTTP protocol, and the request operation fails.
+- **Cause**: This error occurs when a data factory or Synapse pipeline talks to the REST endpoint over HTTP protocol, and the request operation fails.
- **Recommendation**: Check the HTTP status code or the message in the error message and fix the remote server issue.
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
You can also use 'curl --help' for more advanced usage of the command.
- - If only the Data Factory REST connector returns an unexpected response, contact Microsoft support for further troubleshooting.
+ - If only the REST connector returns an unexpected response, contact Microsoft support for further troubleshooting.
- Note that 'curl' might not be suitable to reproduce an SSL certificate validation issue. In some scenarios, the 'curl' command was executed successfully without encountering any SSL certificate validation issues. But when the same URL is executed in a browser, no SSL certificate is actually returned for the client to establish trust with the server.
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
If the private key content is from your key vault, the original key file can work if you upload it directly to the SFTP linked service.
- For more information, see [Copy data from and to the SFTP server by using Azure Data Factory](./connector-sftp.md#use-ssh-public-key-authentication). The private key content is base64 encoded SSH private key content.
+ For more information, see [Copy data from and to the SFTP server by using data factory or Synapse pipelines](./connector-sftp.md#use-ssh-public-key-authentication). The private key content is base64 encoded SSH private key content.
Encode the *entire* original private key file with base64 encoding, and store the encoded string in your key vault. The original private key file is the one that can work on the SFTP linked service if you select **Upload** from the file.
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**:
- PKCS#8 format SSH private key (start with "--BEGIN ENCRYPTED PRIVATE KEY--") is currently not supported to access the SFTP server in Data Factory.
+ A PKCS#8 format SSH private key (which starts with "--BEGIN ENCRYPTED PRIVATE KEY--") is currently not supported for accessing the SFTP server.
To convert the key to traditional SSH key format, starting with "--BEGIN RSA PRIVATE KEY--", run the following commands:
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Check the port of the target server. By default, SFTP uses port 22. -- **Cause**: If the error message contains the string "Server response does not contain SSH protocol identification", one possible cause is that the SFTP server throttled the connection. Data Factory will create multiple connections to download from the SFTP server in parallel, and sometimes it will encounter SFTP server throttling. Ordinarily, different servers return different errors when they encounter throttling.
+- **Cause**: If the error message contains the string "Server response does not contain SSH protocol identification", one possible cause is that the SFTP server throttled the connection. Multiple connections are created to download from the SFTP server in parallel, and sometimes they encounter SFTP server throttling. Ordinarily, different servers return different errors when they encounter throttling.
- **Recommendation**:
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Resolution**: Learn [why we're not recommending "FIPS Mode" anymore](https://techcommunity.microsoft.com/t5/microsoft-security-baselines/why-we-8217-re-not-recommending-8220-fips-mode-8221-anymore/ba-p/701037), and evaluate whether you can disable FIPS on your self-hosted IR machine.
- Alternatively, if you only want to let Azure Data Factory bypass FIPS and make the activity runs succeed, do the following:
+ Alternatively, if you only want to bypass FIPS and make the activity runs succeed, do the following:
1. Open the folder where Self-hosted IR is installed. The path is usually *C:\Program Files\Microsoft Integration Runtime \<IR version>\Shared*.
data-factory Connector Vertica https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-vertica.md
description: Learn how to copy data from Vertica to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 09/04/2019
data-factory Connector Web Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-web-table.md
description: Learn about Web Table Connector of Azure Data Factory that lets you copy data from a web table to data stores supported by Data Factory as sinks. + Last updated 08/01/2019
data-factory Connector Xero https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-xero.md
description: Learn how to copy data from Xero to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 01/26/2021
data-factory Connector Zoho https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-zoho.md
description: Learn how to copy data from Zoho to supported sink data stores by using a copy activity in an Azure Data Factory pipeline. + Last updated 08/03/2020
data-factory Continuous Integration Deployment Improvements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-deployment-improvements.md
Title: Automated publishing for continuous integration and delivery description: Learn how to publish for continuous integration and delivery automatically. +
data-factory Continuous Integration Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-deployment.md
Title: Continuous integration and delivery in Azure Data Factory description: Learn how to use continuous integration and delivery to move Data Factory pipelines from one environment (development, test, production) to another. +
data-factory Control Flow Append Variable Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-append-variable-activity.md
Title: Append Variable Activity in Azure Data Factory
description: Learn how to set the Append Variable activity to add a value to an existing array variable defined in a Data Factory pipeline +
data-factory Control Flow Azure Function Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-azure-function-activity.md
+ Last updated 07/30/2021
data-factory Control Flow Execute Data Flow Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-execute-data-flow-activity.md
description: How to execute data flows from inside a data factory pipeline. +
data-factory Control Flow Execute Pipeline Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-execute-pipeline-activity.md
+ Last updated 01/10/2018
data-factory Control Flow Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-expression-language-functions.md
Title: Expression and functions in Azure Data Factory
+ Title: Expression and functions
-description: This article provides information about expressions and functions that you can use in creating data factory entities.
+description: This article provides information about expressions and functions that you can use in creating Azure Data Factory and Azure Synapse Analytics pipeline entities.
+ Last updated 07/16/2021
-# Expressions and functions in Azure Data Factory
+# Expressions and functions in Azure Data Factory and Azure Synapse Analytics
> [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"] > * [Version 1](v1/data-factory-functions-variables.md)
-> * [Current version](control-flow-expression-language-functions.md)
+> * [Current version/Synapse version](control-flow-expression-language-functions.md)
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article provides details about expressions and functions supported by Azure Data Factory.
+This article provides details about expressions and functions supported by Azure Data Factory and Azure Synapse Analytics.
## Expressions
Baba's book store
``` ### Tutorial
-This [tutorial](https://azure.microsoft.com/mediahandler/files/resourcefiles/azure-data-factory-passing-parameters/Azure%20data%20Factory-Whitepaper-PassingParameters.pdf) walks you through how to pass parameters between a pipeline and activity as well as between the activities.
-
+This [tutorial](https://azure.microsoft.com/mediahandler/files/resourcefiles/azure-data-factory-passing-parameters/Azure%20data%20Factory-Whitepaper-PassingParameters.pdf) walks you through how to pass parameters between a pipeline and activity, as well as between activities. The tutorial specifically demonstrates the steps for Azure Data Factory; the steps for a Synapse workspace are nearly equivalent, with a slightly different user interface.
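As a small illustration of the expression syntax covered in this article, here is a minimal sketch of a folder path built from a pipeline parameter and a system variable; the parameter name and folder layout are hypothetical.

```json
"folderPath": {
    "value": "@concat('output/', pipeline().parameters.customerName, '/', formatDateTime(pipeline().TriggerTime, 'yyyy-MM-dd'))",
    "type": "Expression"
}
```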
## Functions
data-factory Control Flow Filter Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-filter-activity.md
+ Last updated 05/04/2018
data-factory Control Flow For Each Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-for-each-activity.md
+ Last updated 01/23/2019
data-factory Control Flow Get Metadata Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-get-metadata-activity.md
Title: Get Metadata activity in Azure Data Factory
+ Title: Get Metadata activity
-description: Learn how to use the Get Metadata activity in a Data Factory pipeline.
+description: Learn how to use the Get Metadata activity in an Azure Data Factory or Azure Synapse Analytics pipeline.
+ Last updated 02/25/2021
-# Get Metadata activity in Azure Data Factory
+# Get Metadata activity in Azure Data Factory or Azure Synapse Analytics
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-You can use the Get Metadata activity to retrieve the metadata of any data in Azure Data Factory. You can use the output from the Get Metadata activity in conditional expressions to perform validation, or consume the metadata in subsequent activities.
+You can use the Get Metadata activity to retrieve the metadata of any data in Azure Data Factory or a Synapse pipeline. You can use the output from the Get Metadata activity in conditional expressions to perform validation, or consume the metadata in subsequent activities.
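For orientation, here is a minimal sketch of a Get Metadata activity definition; the activity and dataset names are hypothetical, and the `fieldList` entries are examples of metadata fields you might request. The output can then be referenced in later activities, for example with `@activity('GetFolderMetadata').output.childItems`.

```json
{
    "name": "GetFolderMetadata",
    "type": "GetMetadata",
    "typeProperties": {
        "dataset": { "referenceName": "InputFolderDataset", "type": "DatasetReference" },
        "fieldList": [ "childItems", "lastModified" ]
    }
}
```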
## Supported capabilities
The Get Metadata results are shown in the activity output. Following are two sam
``` ## Next steps
-Learn about other control flow activities supported by Data Factory:
+Learn about other supported control flow activities:
- [Execute Pipeline activity](control-flow-execute-pipeline-activity.md) - [ForEach activity](control-flow-for-each-activity.md)
data-factory Control Flow If Condition Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-if-condition-activity.md
+ Last updated 01/10/2018
data-factory Control Flow Lookup Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-lookup-activity.md
Title: Lookup activity in Azure Data Factory
+ Title: Lookup activity
-description: Learn how to use Lookup activity to look up a value from an external source. This output can be further referenced by succeeding activities.
+description: Learn how to use the Lookup Activity in Azure Data Factory and Azure Synapse Analytics to look up a value from an external source. This output can be further referenced by succeeding activities.
+ Last updated 02/25/2021
-# Lookup activity in Azure Data Factory
+# Lookup activity in Azure Data Factory and Azure Synapse Analytics
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-Lookup activity can retrieve a dataset from any of the Azure Data Factory-supported data sources. you can use it to dynamically determine which objects to operate on in a subsequent activity, instead of hard coding the object name. Some object examples are files and tables.
+Lookup activity can retrieve a dataset from any of the data sources supported by data factory and Synapse pipelines. You can use it to dynamically determine which objects to operate on in a subsequent activity, instead of hard coding the object name. Some object examples are files and tables.
Lookup activity reads and returns the content of a configuration file or table. It also returns the result of executing a query or stored procedure. The output can be a singleton value or an array of attributes, which can be consumed in a subsequent copy, transformation, or control flow activities like ForEach activity.
The lookup result is returned in the `output` section of the activity run result
In this example, the pipeline contains two activities: **Lookup** and **Copy**. The Copy Activity copies data from a SQL table in your Azure SQL Database instance to Azure Blob storage. The name of the SQL table is stored in a JSON file in Blob storage. The Lookup activity looks up the table name at runtime. JSON is modified dynamically by using this approach. You don't need to redeploy pipelines or datasets.
-This example demonstrates lookup for the first row only. For lookup for all rows and to chain the results with ForEach activity, see the samples in [Copy multiple tables in bulk by using Azure Data Factory](tutorial-bulk-copy.md).
+This example demonstrates lookup for the first row only. For lookup for all rows and to chain the results with ForEach activity, see the samples in [Copy multiple tables in bulk](tutorial-bulk-copy.md).
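Before walking through the full pipeline definition, here is a hedged sketch of how a later Copy activity source might reference the Lookup output; the activity name and the `tableName` property are assumptions that mirror the JSON configuration file described above.

```json
"source": {
    "type": "AzureSqlSource",
    "sqlReaderQuery": {
        "value": "SELECT * FROM [@{activity('LookupTableName').output.firstRow.tableName}]",
        "type": "Expression"
    }
}
```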
### Pipeline
Here are some limitations of the Lookup activity and suggested workarounds.
| | | ## Next steps
-See other control flow activities supported by Data Factory:
+See other control flow activities supported by Azure Data Factory and Synapse pipelines:
- [Execute Pipeline activity](control-flow-execute-pipeline-activity.md) - [ForEach activity](control-flow-for-each-activity.md)
data-factory Control Flow Power Query Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-power-query-activity.md
description: Learn how to use the Power Query activity for data wrangling featur
+ Last updated 01/18/2021
data-factory Control Flow Set Variable Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-set-variable-activity.md
Title: Set Variable Activity in Azure Data Factory
description: Learn how to use the Set Variable activity to set the value of an existing variable defined in a Data Factory pipeline + Last updated 04/07/2020
data-factory Control Flow Switch Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-switch-activity.md
+ Last updated 06/23/2021
data-factory Control Flow System Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-system-variables.md
+ Last updated 06/12/2018
data-factory Control Flow Until Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-until-activity.md
+ Last updated 01/10/2018
data-factory Control Flow Validation Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-validation-activity.md
+ Last updated 03/25/2019
data-factory Control Flow Wait Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-wait-activity.md
description: The Wait activity pauses the execution of the pipeline for the spec
+ Last updated 01/12/2018
data-factory Control Flow Web Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-web-activity.md
description: Learn how you can use Web Activity, one of the control flow activit
+ Last updated 12/19/2018
data-factory Control Flow Webhook Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-webhook-activity.md
+ Last updated 03/25/2019
data-factory Copy Activity Data Consistency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-data-consistency.md
description: 'Learn about how to enable data consistency verification in copy activity in Azure Data Factory.' + Last updated 3/27/2020
data-factory Copy Activity Fault Tolerance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-fault-tolerance.md
description: 'Learn about how to add fault tolerance to copy activity in Azure Data Factory by skipping the incompatible data.' + Last updated 06/22/2020
data-factory Copy Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-log.md
Title: Session log in copy activity
description: 'Learn about how to enable session log in copy activity in Azure Data Factory.' + Last updated 11/11/2020
data-factory Copy Activity Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-monitoring.md
description: Learn about how to monitor the copy activity execution in Azure Data Factory. + Last updated 03/22/2021
data-factory Copy Activity Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-overview.md
Title: Copy activity in Azure Data Factory
+ Title: Copy activity
-description: Learn about the Copy activity in Azure Data Factory. You can use it to copy data from a supported source data store to a supported sink data store.
+description: Learn about the Copy activity in Azure Data Factory and Azure Synapse Analytics. You can use it to copy data from a supported source data store to a supported sink data store.
+ Last updated 6/1/2021
-# Copy activity in Azure Data Factory
+# Copy activity in Azure Data Factory and Azure Synapse Analytics
> [!div class="op_single_selector" title1="Select the version of Data Factory that you're using:"] > * [Version 1](v1/data-factory-data-movement-activities.md)
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-In Azure Data Factory, you can use the Copy activity to copy data among data stores located on-premises and in the cloud. After you copy the data, you can use other activities to further transform and analyze it. You can also use the Copy activity to publish transformation and analysis results for business intelligence (BI) and application consumption.
+In Azure Data Factory and Synapse pipelines, you can use the Copy activity to copy data among data stores located on-premises and in the cloud. After you copy the data, you can use other activities to further transform and analyze it. You can also use the Copy activity to publish transformation and analysis results for business intelligence (BI) and application consumption.
![The role of the Copy activity](media/copy-activity-overview/copy-activity.png)
You can use the Copy activity to copy files as-is between two file-based data st
## Supported regions
-The service that enables the Copy activity is available globally in the regions and geographies listed in [Azure integration runtime locations](concepts-integration-runtime.md#integration-runtime-location). The globally available topology ensures efficient data movement that usually avoids cross-region hops. See [Products by region](https://azure.microsoft.com/regions/#services) to check the availability of Data Factory and data movement in a specific region.
+The service that enables the Copy activity is available globally in the regions and geographies listed in [Azure integration runtime locations](concepts-integration-runtime.md#integration-runtime-location). The globally available topology ensures efficient data movement that usually avoids cross-region hops. See [Products by region](https://azure.microsoft.com/regions/#services) to check the availability of Data Factory, Synapse Workspaces, and data movement in a specific region.
## Configuration [!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
-In general, to use the Copy activity in Azure Data Factory, you need to:
+In general, to use the Copy activity in Azure Data Factory or Synapse pipelines, you need to:
1. **Create linked services for the source data store and the sink data store.** You can find the list of supported connectors in the [Supported data stores and formats](#supported-data-stores-and-formats) section of this article. Refer to the connector article's "Linked service properties" section for configuration information and supported properties.
2. **Create datasets for the source and sink.** Refer to the "Dataset properties" sections of the source and sink connector articles for configuration information and supported properties.
The following template of a Copy activity contains a complete list of supported
## Monitoring
-You can monitor the Copy activity run in the Azure Data Factory both visually and programmatically. For details, see [Monitor copy activity](copy-activity-monitoring.md).
+You can monitor the Copy activity run in Azure Data Factory and Synapse pipelines both visually and programmatically. For details, see [Monitor copy activity](copy-activity-monitoring.md).
## Incremental copy
-Data Factory enables you to incrementally copy delta data from a source data store to a sink data store. For details, see [Tutorial: Incrementally copy data](tutorial-incremental-copy-overview.md).
+Data Factory and Synapse pipelines enable you to incrementally copy delta data from a source data store to a sink data store. For details, see [Tutorial: Incrementally copy data](tutorial-incremental-copy-overview.md).
## Performance and tuning
-The [copy activity monitoring](copy-activity-monitoring.md) experience shows you the copy performance statistics for each of your activity run. The [Copy activity performance and scalability guide](copy-activity-performance.md) describes key factors that affect the performance of data movement via the Copy activity in Azure Data Factory. It also lists the performance values observed during testing and discusses how to optimize the performance of the Copy activity.
+The [copy activity monitoring](copy-activity-monitoring.md) experience shows you the copy performance statistics for each of your activity runs. The [Copy activity performance and scalability guide](copy-activity-performance.md) describes key factors that affect the performance of data movement via the Copy activity. It also lists the performance values observed during testing and discusses how to optimize the performance of the Copy activity.
## Resume from last failed run
By default, the Copy activity stops copying data and returns a failure when sour
## Data consistency verification
-When you move data from source to destination store, Azure Data Factory copy activity provides an option for you to do additional data consistency verification to ensure the data is not only successfully copied from source to destination store, but also verified to be consistent between source and destination store. Once inconsistent files have been found during the data movement, you can either abort the copy activity or continue to copy the rest by enabling fault tolerance setting to skip inconsistent files. You can get the skipped file names by enabling session log setting in copy activity. See [Data consistency verification in copy activity](copy-activity-data-consistency.md) for details.
+When you move data from the source to the destination store, the copy activity provides an option to do additional data consistency verification, ensuring the data is not only successfully copied but also verified to be consistent between the source and destination stores. If inconsistent files are found during the data movement, you can either abort the copy activity or continue to copy the rest by enabling the fault tolerance setting to skip inconsistent files. You can get the skipped file names by enabling the session log setting in the copy activity. See [Data consistency verification in copy activity](copy-activity-data-consistency.md) for details.
## Session log
You can log your copied file names, which can help you to further ensure the data is not only successfully copied from the source to the destination store, but also consistent between them, by reviewing the copy activity session logs. See [Session log in copy activity](copy-activity-log.md) for details.
data-factory Copy Activity Performance Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-performance-features.md
description: Learn about the key features that help you optimize the copy activi
+ Last updated 09/24/2020
data-factory Copy Activity Performance Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-performance-troubleshooting.md
description: Learn about how to troubleshoot copy activity performance in Azure
+ Last updated 01/07/2021
data-factory Copy Activity Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-performance.md
Title: Copy activity performance and scalability guide
-description: Learn about key factors that affect the performance of data movement in Azure Data Factory when you use the copy activity.
+description: Learn about key factors that affect the performance of data movement in Azure Data Factory and Azure Synapse Analytics pipelines when you use the copy activity.
documentationcenter: '' - +
Last updated 09/15/2020
Sometimes you want to perform a large-scale data migration from a data lake or an enterprise data warehouse (EDW) to Azure. Other times you want to ingest large amounts of data from different sources into Azure for big data analytics. In each case, it is critical to achieve optimal performance and scalability.
-Azure Data Factory (ADF) provides a mechanism to ingest data. ADF has the following advantages:
+Azure Data Factory and Azure Synapse Analytics pipelines provide a mechanism to ingest data, with the following advantages:
* Handles large amounts of data * Is highly performant * Is cost-effective
-These advantages make ADF an excellent fit for data engineers who want to build scalable data ingestion pipelines that are highly performant.
+These advantages are an excellent fit for data engineers who want to build scalable data ingestion pipelines that are highly performant.
After reading this article, you will be able to answer the following questions:
-* What level of performance and scalability can I achieve using ADF copy activity for data migration and data ingestion scenarios?
-* What steps should I take to tune the performance of ADF copy activity?
-* What ADF perf optimization knobs can I utilize to optimize performance for a single copy activity run?
-* What other factors outside ADF to consider when optimizing copy performance?
+* What level of performance and scalability can I achieve using copy activity for data migration and data ingestion scenarios?
+* What steps should I take to tune the performance of the copy activity?
+* What performance optimizations can I utilize for a single copy activity run?
+* What other external factors should I consider when optimizing copy performance?
> [!NOTE] > If you aren't familiar with the copy activity in general, see the [copy activity overview](copy-activity-overview.md) before you read this article.
-## Copy performance and scalability achievable using ADF
+## Copy performance and scalability achievable using Azure Data Factory and Synapse pipelines
-ADF offers a serverless architecture that allows parallelism at different levels.
+Azure Data Factory and Synapse pipelines offer a serverless architecture that allows parallelism at different levels.
This architecture allows you to develop pipelines that maximize data movement throughput for your environment. These pipelines fully utilize the following resources:
This full utilization means you can estimate the overall throughput by measuring
The table below shows the calculation of data movement duration. The duration in each cell is calculated based on a given network and data store bandwidth and a given data payload size. > [!NOTE]
-> The duration provided below are meant to represent achievable performance in an end-to-end data integration solution implemented using ADF, by using one or more performance optimization techniques described in [Copy performance optimization features](#copy-performance-optimization-features), including using ForEach to partition and spawn off multiple concurrent copy activities. We recommend you to follow steps laid out in [Performance tuning steps](#performance-tuning-steps) to optimize copy performance for your specific dataset and system configuration. You should use the numbers obtained in your performance tuning tests for production deployment planning, capacity planning, and billing projection.
+> The durations provided below are meant to represent achievable performance in an end-to-end data integration solution by using one or more of the performance optimization techniques described in [Copy performance optimization features](#copy-performance-optimization-features), including using ForEach to partition and spawn off multiple concurrent copy activities. We recommend that you follow the steps laid out in [Performance tuning steps](#performance-tuning-steps) to optimize copy performance for your specific dataset and system configuration. You should use the numbers obtained in your performance tuning tests for production deployment planning, capacity planning, and billing projection.
&nbsp;
| **10 PB** | 647.3 mo | 323.6 mo | 64.7 mo | 31.6 mo | 6.5 mo | 3.2 mo | 0.6 mo | | | | | | | | | |
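As an illustrative back-of-the-envelope check (not taken from the table itself), the duration in each cell follows from dividing the payload size by the effective end-to-end throughput, which is bounded by the slowest of the network, the source store, and the sink store:

$$
\text{duration} \approx \frac{\text{data size}}{\min(\text{network bandwidth},\ \text{source throughput},\ \text{sink throughput})}
$$

For example, moving 1 TB over an effective 100 Mbps of bandwidth takes roughly 8 × 10^12 bits ÷ 10^8 bits per second = 80,000 seconds, or about 22 hours, before any parallelization.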
-ADF copy is scalable at different levels:
+Copy is scalable at different levels:
-![how ADF copy scales](media/copy-activity-performance/adf-copy-scalability.png)
+![How copy scales](media/copy-activity-performance/adf-copy-scalability.png)
-* ADF control flow can start multiple copy activities in parallel, for example using [For Each loop](control-flow-for-each-activity.md).
+* Control flow can start multiple copy activities in parallel, for example, by using a [For Each loop](control-flow-for-each-activity.md).
* A single copy activity can take advantage of scalable compute resources.
* When using Azure integration runtime (IR), you can specify [up to 256 data integration units (DIUs)](#data-integration-units) for each copy activity, in a serverless manner.
## Performance tuning steps
-Take the following steps to tune the performance of your Azure Data Factory service with the copy activity:
+Take the following steps to tune the performance of your service with the copy activity:
1. **Pick up a test dataset and establish a baseline.**
3. **How to maximize aggregate throughput by running multiple copies concurrently:**
- By now you have maximized the performance of a single copy activity. If you have not yet achieved the throughput upper limits of your environment, you can run multiple copy activities in parallel. You can run in parallel by using ADF control flow constructs. One such construct is the [For Each loop](control-flow-for-each-activity.md). For more information, see the following articles about solution templates:
+ By now you have maximized the performance of a single copy activity. If you have not yet achieved the throughput upper limits of your environment, you can run multiple copy activities in parallel. You can run in parallel by using control flow constructs. One such construct is the [For Each loop](control-flow-for-each-activity.md). For more information, see the following articles about solution templates:
* [Copy files from multiple containers](solution-template-copy-files-multiple-containers.md)
* [Migrate data from Amazon S3 to ADLS Gen2](solution-template-migration-s3-azure.md)
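As a minimal sketch of this pattern (the activity names, pipeline parameter, dataset references, and source/sink types below are illustrative, not taken from the templates above), a ForEach activity with `isSequential` set to `false` can fan one parameterized copy activity out over a list of partitions:

```json
{
    "name": "CopyPartitionsInParallel",
    "type": "ForEach",
    "typeProperties": {
        "items": {
            "value": "@pipeline().parameters.partitionList",
            "type": "Expression"
        },
        "isSequential": false,
        "batchCount": 8,
        "activities": [
            {
                "name": "CopyOnePartition",
                "type": "Copy",
                "inputs": [
                    { "referenceName": "SourcePartitionDataset", "type": "DatasetReference", "parameters": { "partition": "@item()" } }
                ],
                "outputs": [
                    { "referenceName": "SinkPartitionDataset", "type": "DatasetReference", "parameters": { "partition": "@item()" } }
                ],
                "typeProperties": {
                    "source": { "type": "DelimitedTextSource" },
                    "sink": { "type": "ParquetSink" }
                }
            }
        ]
    }
}
```

`batchCount` caps how many copy activities run at the same time, so you can raise it gradually until you reach the throughput limits of the source, the sink, or the network.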
## Troubleshoot copy activity performance
-Follow the [Performance tuning steps](#performance-tuning-steps) to plan and conduct performance test for your scenario. And learn how to troubleshoot each copy activity run's performance issue in Azure Data Factory from [Troubleshoot copy activity performance](copy-activity-performance-troubleshooting.md).
+Follow the [Performance tuning steps](#performance-tuning-steps) to plan and conduct performance tests for your scenario. To learn how to troubleshoot performance issues for each copy activity run, see [Troubleshoot copy activity performance](copy-activity-performance-troubleshooting.md).
## Copy performance optimization features
-Azure Data Factory provides the following performance optimization features:
+The service provides the following performance optimization features:
* [Data Integration Units](#data-integration-units)
* [Self-hosted integration runtime scalability](#self-hosted-integration-runtime-scalability)
### Data Integration Units
-A Data Integration Unit (DIU) is a measure that represents the power of a single unit in Azure Data Factory. Power is a combination of CPU, memory, and network resource allocation. DIU only applies to [Azure integration runtime](concepts-integration-runtime.md#azure-integration-runtime). DIU does not apply to [self-hosted integration runtime](concepts-integration-runtime.md#self-hosted-integration-runtime). [Learn more here](copy-activity-performance-features.md#data-integration-units).
+A Data Integration Unit (DIU) is a measure that represents the power of a single unit in Azure Data Factory and Synapse pipelines. Power is a combination of CPU, memory, and network resource allocation. DIU applies only to the [Azure integration runtime](concepts-integration-runtime.md#azure-integration-runtime), not to the [self-hosted integration runtime](concepts-integration-runtime.md#self-hosted-integration-runtime). [Learn more here](copy-activity-performance-features.md#data-integration-units).
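As a hedged example (the activity and dataset names are placeholders, and the source/sink types depend on your datasets), you can set the DIU count and intra-activity parallelism for a single copy activity run in its `typeProperties`:

```json
{
    "name": "CopyBlobToLake",
    "type": "Copy",
    "inputs": [ { "referenceName": "SourceBlobDataset", "type": "DatasetReference" } ],
    "outputs": [ { "referenceName": "SinkLakeDataset", "type": "DatasetReference" } ],
    "typeProperties": {
        "source": { "type": "DelimitedTextSource" },
        "sink": { "type": "DelimitedTextSink" },
        "dataIntegrationUnits": 32,
        "parallelCopies": 8
    }
}
```

When you omit `dataIntegrationUnits` and `parallelCopies`, the service chooses values automatically based on your source-sink pair and data pattern.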
### Self-hosted integration runtime scalability
data-factory Copy Activity Preserve Metadata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-preserve-metadata.md
description: 'Learn about how to preserve metadata and ACLs during copy using copy activity in Azure Data Factory.' + Last updated 09/23/2020
data-factory Copy Activity Schema And Type Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-schema-and-type-mapping.md
Title: Schema and data type mapping in copy activity
-description: Learn about how copy activity in Azure Data Factory maps schemas and data types from source data to sink data.
+description: Learn how the copy activity in Azure Data Factory and Azure Synapse Analytics pipelines maps schemas and data types from source data to sink data.
+ Last updated 06/22/2020
Learn more about:
- [Hierarchical source to tabular sink](#hierarchical-source-to-tabular-sink) - [Tabular/Hierarchical source to hierarchical sink](#tabularhierarchical-source-to-hierarchical-sink)
-You can configure the mapping on Data Factory authoring UI -> copy activity -> mapping tab, or programmatically specify the mapping in copy activity -> `translator` property. The following properties are supported in `translator` -> `mappings` array -> objects -> `source` and `sink`, which points to the specific column/field to map data.
+You can configure the mapping in the authoring UI -> copy activity -> mapping tab, or programmatically specify the mapping in the copy activity -> `translator` property. The following properties are supported in the `translator` -> `mappings` array -> objects -> `source` and `sink`, which point to the specific column/field to map data.
| Property | Description | Required |
| -------- | ----------- | -------- |
| name | Name of the source or sink column/field. Apply for tabular source and sink. | Yes |
| ordinal | Column index. Start from 1. <br>Apply and required when using delimited text without header line. | No |
| path | JSON path expression for each field to extract or map. Apply for hierarchical source and sink, for example, Cosmos DB, MongoDB, or REST connectors.<br>For fields under the root object, the JSON path starts with root `$`; for fields inside the array chosen by `collectionReference` property, JSON path starts from the array element without `$`. | No |
-| type | Data Factory interim data type of the source or sink column. In general, you don't need to specify or change this property. Learn more about [data type mapping](#data-type-mapping). | No |
+| type | Interim data type of the source or sink column. In general, you don't need to specify or change this property. Learn more about [data type mapping](#data-type-mapping). | No |
| culture | Culture of the source or sink column. Apply when type is `Datetime` or `Datetimeoffset`. The default is `en-us`.<br>In general, you don't need to specify or change this property. Learn more about [data type mapping](#data-type-mapping). | No |
| format | Format string to be used when type is `Datetime` or `Datetimeoffset`. Refer to [Custom Date and Time Format Strings](/dotnet/standard/base-types/custom-date-and-time-format-strings) on how to format datetime. In general, you don't need to specify or change this property. Learn more about [data type mapping](#data-type-mapping). | No |
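Putting those properties together, a minimal `translator` sketch (the column names are hypothetical) that maps a tabular source to a tabular sink by name and by ordinal looks roughly like this; the fragment sits under the copy activity's `typeProperties`:

```json
"translator": {
    "type": "TabularTranslator",
    "mappings": [
        {
            "source": { "name": "CustomerName" },
            "sink": { "name": "ClientName" }
        },
        {
            "source": { "ordinal": 2 },
            "sink": { "name": "ClientEmail" }
        }
    ]
}
```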
And you want to copy it into a text file in the following format with header lin
You can define such mapping on Data Factory authoring UI:
-1. On copy activity -> mapping tab, click **Import schemas** button to import both source and sink schemas. As Data Factory samples the top few objects when importing schema, if any field doesn't show up, you can add it to the correct layer in the hierarchy - hover on an existing field name and choose to add a node, an object, or an array.
+1. On the copy activity -> mapping tab, click the **Import schemas** button to import both source and sink schemas. Because the service samples the top few objects when importing the schema, if any field doesn't show up, you can add it to the correct layer in the hierarchy: hover over an existing field name and choose to add a node, an object, or an array.
2. Select the array from which you want to iterate and extract data. It will be automatically populated as **Collection reference**. Note that only a single array is supported for this operation.
-3. Map the needed fields to sink. Data Factory automatically determines the corresponding JSON paths for the hierarchical side.
+3. Map the needed fields to the sink. The service automatically determines the corresponding JSON paths for the hierarchical side.
> [!NOTE]
> For records where the array marked as collection reference is empty and the check box is selected, the entire record is skipped.
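For the hierarchical-to-tabular case described above, the generated `translator` follows the same shape; a hedged sketch (the field names are hypothetical) with a collection reference might look like this:

```json
"translator": {
    "type": "TabularTranslator",
    "mappings": [
        {
            "source": { "path": "$['userId']" },
            "sink": { "name": "UserId" }
        },
        {
            "source": { "path": "['orderId']" },
            "sink": { "name": "OrderId" }
        }
    ],
    "collectionReference": "$['orders']"
}
```

Paths for fields under the root object start with `$`, while paths for fields inside the `collectionReference` array start from the array element without `$`, matching the rules in the table above.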
If explicit mapping is needed, you can:
Copy activity performs source types to sink types mapping with the following flow:
-1. Convert from source native data types to Azure Data Factory interim data types.
+1. Convert from source native data types to interim data types used by Azure Data Factory and Synapse pipelines.
2. Automatically convert interim data type as needed to match corresponding sink types, applicable for both [default mapping](#default-mapping) and [explicit mapping](#explicit-mapping).
-3. Convert from Azure Data Factory interim data types to sink native data types.
+3. Convert from interim data types to sink native data types.
Copy activity currently supports the following interim data types: Boolean, Byte, Byte array, Datetime, DatetimeOffset, Decimal, Double, GUID, Int16, Int32, Int64, SByte, Single, String, Timespan, UInt16, UInt32, and UInt64.
The following properties are supported in copy activity for data type conversion
| Property | Description | Required |
| -------- | ----------- | -------- |
-| typeConversion | Enable the new data type conversion experience. <br>Default value is false due to backward compatibility.<br><br>For new copy activities created via Data Factory authoring UI since late June 2020, this data type conversion is enabled by default for the best experience, and you can see the following type conversion settings on copy activity -> mapping tab for applicable scenarios. <br>To create pipeline programmatically, you need to explicitly set `typeConversion` property to true to enable it.<br>For existing copy activities created before this feature is released, you won't see type conversion options on Data Factory authoring UI for backward compatibility. | No |
+| typeConversion | Enable the new data type conversion experience. <br>Default value is false due to backward compatibility.<br><br>For new copy activities created via the Data Factory authoring UI since late June 2020, this data type conversion is enabled by default for the best experience, and you can see the following type conversion settings on the copy activity -> mapping tab for applicable scenarios. <br>To create a pipeline programmatically, you need to explicitly set the `typeConversion` property to true to enable it.<br>For existing copy activities created before this feature was released, you won't see type conversion options on the authoring UI for backward compatibility. | No |
| typeConversionSettings | A group of type conversion settings. Apply when `typeConversion` is set to `true`. The following properties are all under this group. | No |
| *Under `typeConversionSettings`* | | |
| allowDataTruncation | Allow data truncation when converting source data to sink with different type during copy, for example, from decimal to integer, from DatetimeOffset to Datetime. <br>Default value is true. | No |
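To enable this behavior programmatically, a hedged `translator` sketch might set `typeConversion` together with the `allowDataTruncation` setting from the table above (other settings under `typeConversionSettings` are omitted here):

```json
"translator": {
    "type": "TabularTranslator",
    "typeConversion": true,
    "typeConversionSettings": {
        "allowDataTruncation": true
    }
}
```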
## Legacy models

> [!NOTE]
-> The following models to map source columns/fields to sink are still supported as is for backward compatibility. We suggest that you use the new model mentioned in [schema mapping](#schema-mapping). Data Factory authoring UI has switched to generating the new model.
+> The following models to map source columns/fields to sink are still supported as is for backward compatibility. We suggest that you use the new model mentioned in [schema mapping](#schema-mapping). The authoring UI has switched to generating the new model.
### Alternative column-mapping (legacy model)
data-factory Copy Clone Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-clone-data-factory.md
Title: Copy or clone a data factory in Azure Data Factory description: Learn how to copy or clone a data factory in Azure Data Factory +
data-factory Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-data-tool.md
description: 'Provides information about the Copy Data tool in Azure Data Factor
+ Last updated 06/04/2021
data-factory Create Azure Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-azure-integration-runtime.md
Title: Create Azure integration runtime in Azure Data Factory
description: Learn how to create Azure integration runtime in Azure Data Factory, which is used to copy data and dispatch transform activities. + Last updated 06/04/2021
data-factory Create Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-azure-ssis-integration-runtime.md
Title: Create an Azure-SSIS integration runtime in Azure Data Factory description: Learn how to create an Azure-SSIS integration runtime in Azure Data Factory so you can deploy and run SSIS packages in Azure. + Last updated 07/19/2021
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-self-hosted-integration-runtime.md
Title: Create a self-hosted integration runtime
-description: Learn how to create a self-hosted integration runtime in Azure Data Factory, which lets data factories access data stores in a private network.
+description: Learn how to create a self-hosted integration runtime in Azure Data Factory and Azure Synapse Analytics, which lets pipelines access data stores in a private network.
+
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-The integration runtime (IR) is the compute infrastructure that Azure Data Factory uses to provide data-integration capabilities across different network environments. For details about IR, see [Integration runtime overview](concepts-integration-runtime.md).
+The integration runtime (IR) is the compute infrastructure that Azure Data Factory and Synapse pipelines use to provide data-integration capabilities across different network environments. For details about IR, see [Integration runtime overview](concepts-integration-runtime.md).
A self-hosted integration runtime can run copy activities between a cloud data store and a data store in a private network. It also can dispatch transform activities against compute resources in an on-premises network or an Azure virtual network. The installation of a self-hosted integration runtime needs an on-premises machine or a virtual machine inside a private network.
This article describes how you can create and configure a self-hosted IR.
## Considerations for using a self-hosted IR

-- You can use a single self-hosted integration runtime for multiple on-premises data sources. You can also share it with another data factory within the same Azure Active Directory (Azure AD) tenant. For more information, see [Sharing a self-hosted integration runtime](./create-shared-self-hosted-integration-runtime-powershell.md).
-- You can install only one instance of a self-hosted integration runtime on any single machine. If you have two data factories that need to access on-premises data sources, either use the [self-hosted IR sharing feature](./create-shared-self-hosted-integration-runtime-powershell.md) to share the self-hosted IR, or install the self-hosted IR on two on-premises computers, one for each data factory.
+- You can use a single self-hosted integration runtime for multiple on-premises data sources. You can also share it with another data factory or Synapse workspace within the same Azure Active Directory (Azure AD) tenant. For more information, see [Sharing a self-hosted integration runtime](./create-shared-self-hosted-integration-runtime-powershell.md).
+- You can install only one instance of a self-hosted integration runtime on any single machine. If you have two data factories or Synapse workspaces that need to access on-premises data sources, either use the [self-hosted IR sharing feature](./create-shared-self-hosted-integration-runtime-powershell.md) to share the self-hosted IR, or install the self-hosted IR on two on-premises computers, one for each data factory or Synapse workspace.
- The self-hosted integration runtime doesn't need to be on the same machine as the data source. However, having the self-hosted integration runtime close to the data source reduces the time for the self-hosted integration runtime to connect to the data source. We recommend that you install the self-hosted integration runtime on a machine that differs from the one that hosts the on-premises data source. When the self-hosted integration runtime and data source are on different machines, the self-hosted integration runtime doesn't compete with the data source for resources.
- You can have multiple self-hosted integration runtimes on different machines that connect to the same on-premises data source. For example, if you have two self-hosted integration runtimes that serve two data factories, the same on-premises data source can be registered with both data factories.
- Use a self-hosted integration runtime to support data integration within an Azure virtual network.
Here is a high-level summary of the data-flow steps for copying with a self-host
![The high-level overview of data flow](media/create-self-hosted-integration-runtime/high-level-overview.png)
-1. A data developer first creates a self-hosted integration runtime within an Azure data factory by using the Azure portal or the PowerShell cmdlet. Then the data developer creates a linked service for an on-premises data store, specifying the self-hosted integration runtime instance that the service should use to connect to data stores.
+1. A data developer first creates a self-hosted integration runtime within an Azure data factory or Synapse workspace by using the Azure portal or the PowerShell cmdlet. Then the data developer creates a linked service for an on-premises data store, specifying the self-hosted integration runtime instance that the service should use to connect to data stores.
2. The self-hosted integration runtime node encrypts the credentials by using Windows Data Protection Application Programming Interface (DPAPI) and saves the credentials locally. If multiple nodes are set for high availability, the credentials are further synchronized across other nodes. Each node encrypts the credentials by using DPAPI and stores them locally. Credential synchronization is transparent to the data developer and is handled by the self-hosted IR.
-3. Azure Data Factory communicates with the self-hosted integration runtime to schedule and manage jobs. Communication is via a control channel that uses a shared [Azure Relay](../azure-relay/relay-what-is-it.md#wcf-relay) connection. When an activity job needs to be run, Data Factory queues the request along with any credential information. It does so in case credentials aren't already stored on the self-hosted integration runtime. The self-hosted integration runtime starts the job after it polls the queue.
+3. Azure Data Factory and Synapse pipelines communicate with the self-hosted integration runtime to schedule and manage jobs. Communication is via a control channel that uses a shared [Azure Relay](../azure-relay/relay-what-is-it.md#wcf-relay) connection. When an activity job needs to be run, the service queues the request along with any credential information. It does so in case credentials aren't already stored on the self-hosted integration runtime. The self-hosted integration runtime starts the job after it polls the queue.
4. The self-hosted integration runtime copies data between an on-premises store and cloud storage. The direction of the copy depends on how the copy activity is configured in the data pipeline. For this step, the self-hosted integration runtime directly communicates with cloud-based storage services like Azure Blob storage over a secure HTTPS channel.
To create and set up a self-hosted integration runtime, use the following proced
```

> [!NOTE]
> To run PowerShell commands in Azure Government, see [Connect to Azure Government with PowerShell](../azure-government/documentation-government-get-started-connect-with-ps.md).
-### Create a self-hosted IR via Azure Data Factory UI
-Use the following steps to create a self-hosted IR using Azure Data Factory UI.
+### Create a self-hosted IR via UI
-1. On the home page of Azure Data Factory UI, select the [Manage tab](./author-management-hub.md) from the leftmost pane.
+Use the following steps to create a self-hosted IR using the Azure Data Factory or Azure Synapse UI.
+
+# [Azure Data Factory](#tab/data-factory)
+
+1. On the home page of the Azure Data Factory UI, select the [Manage tab](./author-management-hub.md) from the leftmost pane.
:::image type="content" source="media/doc-common-process/get-started-page-manage-button.png" alt-text="The home page Manage button":::
1. On the **Integration runtime setup** page, select **Azure, Self-Hosted**, and then select **Continue**.
1. On the following page, select **Self-Hosted** to create a Self-Hosted IR, and then select **Continue**.
- :::image type="content" source="media/create-self-hosted-integration-runtime/new-selfhosted-integration-runtime.png" alt-text="Create a selfhosted IR":::
+ :::image type="content" source="media/create-self-hosted-integration-runtime/new-self-hosted-integration-runtime.png" alt-text="Create a selfhosted IR":::
+
+# [Azure Synapse](#tab/synapse-analytics)
+
+1. On the home page of the Azure Synapse UI, select the Manage tab from the leftmost pane.
+
+ :::image type="content" source="media/doc-common-process/get-started-page-manage-button-synapse.png" alt-text="The home page Manage button":::
+
+1. Select **Integration runtimes** on the left pane, and then select **+New**.
+
+ :::image type="content" source="media/doc-common-process/manage-new-integration-runtime-synapse.png" alt-text="Create an integration runtime":::
+
+1. On the following page, select **Self-Hosted** to create a Self-Hosted IR, and then select **Continue**.
+ :::image type="content" source="media/create-self-hosted-integration-runtime/new-self-hosted-integration-runtime-synapse.png" alt-text="Create a selfhosted IR":::
+++
+### Configure a self-hosted IR via UI
1. Enter a name for your IR, and select **Create**.
Here are details of the application's actions and arguments:
|ACTION|args|Description|
|------|----|-----------|
|`-rn`,<br/>`-RegisterNewNode`|"`<AuthenticationKey>`" ["`<NodeName>`"]|Register a self-hosted integration runtime node with the specified authentication key and node name.|
-|`-era`,<br/>`-EnableRemoteAccess`|"`<port>`" ["`<thumbprint>`"]|Enable remote access on the current node to set up a high-availability cluster. Or enable setting credentials directly against the self-hosted IR without going through Azure Data Factory. You do the latter by using the **New-AzDataFactoryV2LinkedServiceEncryptedCredential** cmdlet from a remote machine in the same network.|
+|`-era`,<br/>`-EnableRemoteAccess`|"`<port>`" ["`<thumbprint>`"]|Enable remote access on the current node to set up a high-availability cluster. Or enable setting credentials directly against the self-hosted IR without going through an Azure Data Factory or Azure Synapse workspace. You do the latter by using the **New-AzDataFactoryV2LinkedServiceEncryptedCredential** cmdlet from a remote machine in the same network.|
|`-erac`,<br/>`-EnableRemoteAccessInContainer`|"`<port>`" ["`<thumbprint>`"]|Enable remote access to the current node when the node runs in a container.|
|`-dra`,<br/>`-DisableRemoteAccess`||Disable remote access to the current node. Remote access is needed for multinode setup. The **New-AzDataFactoryV2LinkedServiceEncryptedCredential** PowerShell cmdlet still works even when remote access is disabled. This behavior is true as long as the cmdlet is executed on the same machine as the self-hosted IR node.|
|`-k`,<br/>`-Key`|"`<AuthenticationKey>`"|Overwrite or update the previous authentication key. Be careful with this action. Your previous self-hosted IR node can go offline if the key is of a new integration runtime.|
If you move your cursor over the icon or message in the notification area, you c
You can associate a self-hosted integration runtime with multiple on-premises machines or virtual machines in Azure. These machines are called nodes. You can have up to four nodes associated with a self-hosted integration runtime. The benefits of having multiple nodes on on-premises machines that have a gateway installed for a logical gateway are:

-- Higher availability of the self-hosted integration runtime so that it's no longer the single point of failure in your big data solution or cloud data integration with Data Factory. This availability helps ensure continuity when you use up to four nodes.
+- Higher availability of the self-hosted integration runtime so that it's no longer the single point of failure in your big data solution or cloud data integration. This availability helps ensure continuity when you use up to four nodes.
- Improved performance and throughput during data movement between on-premises and cloud data stores. Get more information on [performance comparisons](copy-activity-performance.md).

You can associate multiple nodes by installing the self-hosted integration runtime software from [Download Center](https://www.microsoft.com/download/details.aspx?id=39717). Then, register it by using either of the authentication keys that were obtained from the **New-AzDataFactoryV2IntegrationRuntimeKey** cmdlet, as described in the [tutorial](tutorial-hybrid-copy-powershell.md).
You also need to make sure that Microsoft Azure is in your company's allowlist.
### Possible symptoms for issues related to the firewall and proxy server
-If you see error messages like the following ones, the likely reason is improper configuration of the firewall or proxy server. Such configuration prevents the self-hosted integration runtime from connecting to Data Factory to authenticate itself. To ensure that your firewall and proxy server are properly configured, refer to the previous section.
+If you see error messages like the following ones, the likely reason is improper configuration of the firewall or proxy server. Such configuration prevents the self-hosted integration runtime from connecting to Data Factory or Synapse pipelines to authenticate itself. To ensure that your firewall and proxy server are properly configured, refer to the previous section.
- When you try to register the self-hosted integration runtime, you receive the following error message: "Failed to register this Integration Runtime node! Confirm that the Authentication key is valid and the integration service host service is running on this machine."
- When you open Integration Runtime Configuration Manager, you see a status of **Disconnected** or **Connecting**. When you view Windows event logs, under **Event Viewer** > **Application and Services Logs** > **Microsoft Integration Runtime**, you see error messages like this one:
At the Windows firewall level or machine level, these outbound ports are normall
> [!NOTE]
> Because Azure Relay doesn't currently support service tags, you have to use the **AzureCloud** or **Internet** service tag in NSG rules for the communication to Azure Relay.
-> For the communication to Azure Data Factory, you can use service tag **DataFactoryManagement** in the NSG rule setup.
+> For the communication to Azure Data Factory and Synapse workspaces, you can use service tag **DataFactoryManagement** in the NSG rule setup.
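As a hedged illustration only (the rule name, priority, and port are placeholders that you should adapt to your environment), an outbound security rule in an ARM template that uses the **DataFactoryManagement** service tag could look like this:

```json
{
    "name": "Allow-DataFactoryManagement-Outbound",
    "properties": {
        "priority": 200,
        "direction": "Outbound",
        "access": "Allow",
        "protocol": "Tcp",
        "sourceAddressPrefix": "*",
        "sourcePortRange": "*",
        "destinationAddressPrefix": "DataFactoryManagement",
        "destinationPortRange": "443"
    }
}
```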
Based on your source and sinks, you might need to allow additional domains and outbound ports in your corporate firewall or Windows firewall.
For some cloud databases, such as Azure SQL Database and Azure Data Lake, you mi
### Get URL of Azure Relay
-One required domain and port that need to be put in the allowlist of your firewall is for the communication to Azure Relay. The self-hosted integration runtime uses it for interactive authoring such as test connection, browse folder list and table list, get schema, and preview data. If you don't want to allow **.servicebus.windows.net** and would like to have more specific URLs, then you can see all the FQDNs that are required by your self-hosted integration runtime from the ADF portal. Follow these steps:
+One required domain and port that you need to put in the allowlist of your firewall is for the communication to Azure Relay. The self-hosted integration runtime uses it for interactive authoring such as test connection, browse folder list and table list, get schema, and preview data. If you don't want to allow **.servicebus.windows.net** and would like to have more specific URLs, you can see all the FQDNs that are required by your self-hosted integration runtime from the service portal. Follow these steps:
-1. Go to ADF portal and select your self-hosted integration runtime.
+1. Go to the service portal and select your self-hosted integration runtime.
2. On the **Edit** page, select **Nodes**.
3. Select **View Service URLs** to get all FQDNs.
data-factory Create Shared Self Hosted Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-shared-self-hosted-integration-runtime-powershell.md
Title: Create a shared self-hosted integration runtime with PowerShell description: Learn how to create a shared self-hosted integration runtime in Azure Data Factory, so multiple data factories can access the integration runtime. +
data-factory Data Access Strategies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-access-strategies.md
description: Azure Data Factory now supports Static IP address ranges.
+ Last updated 05/28/2020
data-factory Data Factory Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-private-link.md
description: Learn about how Azure Private Link works in Azure Data Factory.
+ Last updated 06/16/2021
data-factory Data Factory Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-service-identity.md
Title: Managed identity for Data Factory
description: Learn about managed identity for Azure Data Factory. + Last updated 07/19/2021
data-factory Data Factory Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-troubleshoot-guide.md
Title: Troubleshoot Azure Data Factory | Microsoft Docs
+ Title: General Troubleshooting
-description: Learn how to troubleshoot external control activities in Azure Data Factory.
+description: Learn how to troubleshoot external control activities in Azure Data Factory and Azure Synapse Analytics pipelines.
+ Last updated 06/18/2021
-# Troubleshoot Azure Data Factory
+# Troubleshoot Azure Data Factory and Synapse pipelines
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article explores common troubleshooting methods for external control activities in Azure Data Factory.
+This article explores common troubleshooting methods for external control activities in Azure Data Factory and Synapse pipelines.
## Connector and copy activity
-For connector issues such as an encounter error using the copy activity, refer to [Troubleshoot Azure Data Factory Connectors](connector-troubleshoot-guide.md).
+For connector issues, such as errors encountered while using the copy activity, refer to the [Troubleshoot Connectors](connector-troubleshoot-guide.md) article.
## Azure Databricks
The following table applies to U-SQL.
- **Cause**: This error is caused by throttling on Data Lake Analytics.
-- **Recommendation**: Reduce the number of submitted jobs to Data Lake Analytics. Either change Data Factory triggers and concurrency settings on activities, or increase the limits on Data Lake Analytics.
+- **Recommendation**: Reduce the number of submitted jobs to Data Lake Analytics. Either change triggers and concurrency settings on activities, or increase the limits on Data Lake Analytics.
<br/>
- **Cause**: This error is caused by throttling on Data Lake Analytics.
-- **Recommendation**: Reduce the number of submitted jobs to Data Lake Analytics. Either change Data Factory triggers and concurrency settings on activities, or increase the limits on Data Lake Analytics.
+- **Recommendation**: Reduce the number of submitted jobs to Data Lake Analytics. Either change triggers and concurrency settings on activities, or increase the limits on Data Lake Analytics.
### Error code: 2705
- **Message**: `Response Content is not a valid JObject.`
-- **Cause**: The Azure function that was called didn't return a JSON Payload in the response. Azure Data Factory (ADF) Azure function activity only supports JSON response content.
+- **Cause**: The Azure function that was called didn't return a JSON payload in the response. The Azure Function activity in Azure Data Factory and Synapse pipelines supports only JSON response content.
- **Recommendation**: Update the Azure function to return a valid JSON payload. For example, a C# function can return `(ActionResult)new OkObjectResult("{\"Id\":\"123\"}");`
The following table applies to Azure Batch.
- **Recommendation**: The problem could be either general HDInsight connectivity or network connectivity. First confirm that the HDInsight Ambari UI is available from any browser. Then check that your credentials are still valid.
- If you're using a self-hosted integrated runtime (IR), perform this step from the VM or machine where the self-hosted IR is installed. Then try submitting the job from Data Factory again.
+ If you're using a self-hosted integration runtime (IR), perform this step from the VM or machine where the self-hosted IR is installed. Then try submitting the job again.
For more information, read [Ambari Web UI](../hdinsight/hdinsight-hadoop-manage-ambari.md#ambari-web-ui).
- **Cause**: When the error message contains a message similar to `Unable to service the submit job request as templeton service is busy with too many submit job requests` or `Queue root.joblauncher already has 500 applications, cannot accept submission of application`, too many jobs are being submitted to HDInsight at the same time.
-- **Recommendation**: Limit the number of concurrent jobs submitted to HDInsight. Refer to Data Factory activity concurrency if the jobs are being submitted by the same activity. Change the triggers so the concurrent pipeline runs are spread out over time.
+- **Recommendation**: Limit the number of concurrent jobs submitted to HDInsight. Refer to activity concurrency if the jobs are being submitted by the same activity. Change the triggers so the concurrent pipeline runs are spread out over time.
Refer to [HDInsight documentation](../hdinsight/hdinsight-hadoop-templeton-webhcat-debug-errors.md) to adjust `templeton.parallellism.job.submit` as the error suggests.
- **Cause**: HDInsight cluster or service has issues.
-- **Recommendation**: This error occurs when ADF doesn't receive a response from HDInsight cluster when attempting to request the status of the running job. This issue might be on the cluster itself, or HDInsight service might have an outage.
+- **Recommendation**: This error occurs when the service doesn't receive a response from HDInsight cluster when attempting to request the status of the running job. This issue might be on the cluster itself, or HDInsight service might have an outage.
Refer to [HDInsight troubleshooting documentation](../hdinsight/hdinsight-troubleshoot-guide.md), or contact Microsoft support for further assistance.
- **Message**: `Failed to initialize the HDInsight client for the cluster '%cluster;'. Error: '%message;'`
-- **Cause**: The connection information for the HDI cluster is incorrect, the provided user doesn't have permissions to perform the required action, or the HDInsight service has issues responding to requests from ADF.
+- **Cause**: The connection information for the HDI cluster is incorrect, the provided user doesn't have permissions to perform the required action, or the HDInsight service has issues responding to requests from the service.
- **Recommendation**: Verify that the user information is correct, and that the Ambari UI for the HDI cluster can be opened in a browser from the VM where the IR is installed (for a self-hosted IR), or can be opened from any machine (for Azure IR).
- **Message**: `Failed to submit Spark job. Error: '%message;'`
-- **Cause**: ADF tried to create a batch on a Spark cluster using Livy API (livy/batch), but received an error.
+- **Cause**: The service tried to create a batch on a Spark cluster using Livy API (livy/batch), but received an error.
-- **Recommendation**: Follow the error message to fix the issue. If there isn't enough information to get it resolved, contact the HDI team and provide them the batch ID and job ID, which can be found in the activity run Output in ADF Monitoring page. To troubleshoot further, collect the full log of the batch job.
+- **Recommendation**: Follow the error message to fix the issue. If there isn't enough information to get it resolved, contact the HDI team and provide them the batch ID and job ID, which can be found in the activity run Output in the service Monitoring page. To troubleshoot further, collect the full log of the batch job.
For more information on how to collect the full log, see [Get the full log of a batch job](/rest/api/hdinsightspark/hdinsight-spark-batch-job#get-the-full-log-of-a-batch-job).

### Error code: 2312

-- **Message**: `Spark job failed, batch id:%batchId;. Please follow the links in the activity run Output from ADF Monitoring page to troubleshoot the run on HDInsight Spark cluster. Please contact HDInsight support team for further assistance.`
+- **Message**: `Spark job failed, batch id:%batchId;. Please follow the links in the activity run Output from the service Monitoring page to troubleshoot the run on HDInsight Spark cluster. Please contact HDInsight support team for further assistance.`
- **Cause**: The job failed on the HDInsight Spark cluster.
-- **Recommendation**: Follow the links in the activity run Output in ADF Monitoring page to troubleshoot the run on HDInsight Spark cluster. Contact HDInsight support team for further assistance.
+- **Recommendation**: Follow the links in the activity run Output in the service Monitoring page to troubleshoot the run on HDInsight Spark cluster. Contact HDInsight support team for further assistance.
For more information on how to collect the full log, see [Get the full log of a batch job](/rest/api/hdinsightspark/hdinsight-spark-batch-job#get-the-full-log-of-a-batch-job).
Connect to the VM where the IR is installed and open the Ambari UI in a browser. Use the private URL for the cluster. This connection should work from the browser. If it doesn't, contact HDInsight support team for further assistance.

1. If self-hosted IR isn't being used, then the HDI cluster should be accessible publicly. Open the Ambari UI in a browser and check that it opens up. If there are any issues with the cluster or the services on it, contact HDInsight support team for assistance.
- The HDI cluster URL used in ADF linked service must be accessible for ADF IR (self-hosted or Azure) in order for the test connection to pass, and for runs to work. This state can be verified by opening the URL from a browser either from VM, or from any public machine.
+ The HDI cluster URL used in the linked service must be accessible for the IR (self-hosted or Azure) in order for the test connection to pass, and for runs to work. This state can be verified by opening the URL from a browser either from VM, or from any public machine.
### Error code: 2343
- **Message**: `Failed to read the content of the hive script. Error: '%message;'`
-- **Cause**: The script file doesn't exist or ADF couldn't connect to the location of the script.
+- **Cause**: The script file doesn't exist or the service couldn't connect to the location of the script.
- **Recommendation**: Verify that the script exists, and that the associated linked service has the proper credentials for a connection.
- **Message**: `Failed to create ODBC connection to the HDI cluster with error message '%message;'.`
-- **Cause**: ADF tried to establish an Open Database Connectivity (ODBC) connection to the HDI cluster, and it failed with an error.
+- **Cause**: The service tried to establish an Open Database Connectivity (ODBC) connection to the HDI cluster, and it failed with an error.
- **Recommendation**:
- **Message**: `Hive execution through ODBC failed with error message '%message;'.`
-- **Cause**: ADF submitted the hive script for execution to the HDI cluster via ODBC connection, and the script has failed on HDI.
+- **Cause**: The service submitted the hive script for execution to the HDI cluster via ODBC connection, and the script has failed on HDI.
- **Recommendation**:
- **Cause**: The credentials provided to connect to the storage where the files should be located are incorrect, or the files do not exist there.
-- **Recommendation**: This error occurs when ADF prepares for HDI activities, and tries to copy files to the main storage before submitting the job to HDI. Check that files exist in the provided location, and that the storage connection is correct. As ADF HDI activities do not support MSI authentication on storage accounts related to HDI activities, verify that those linked services have full keys or are using Azure Key Vault.
+- **Recommendation**: This error occurs when the service prepares for HDI activities, and tries to copy files to the main storage before submitting the job to HDI. Check that files exist in the provided location, and that the storage connection is correct. As HDI activities do not support MSI authentication on storage accounts related to HDI activities, verify that those linked services have full keys or are using Azure Key Vault.
### Error code: 2351
- **Message**: `Failed to create on demand HDI cluster. Cluster name is '%clusterName;'.`
-- **Cause**: The cluster creation failed, and ADF did not get an error back from HDInsight service.
+- **Cause**: The cluster creation failed, and the service did not get an error back from HDInsight service.
- **Recommendation**: Open the Azure portal and try to find the HDI resource with provided name, then check the provisioning status. Contact HDInsight support team for further assistance.
- **Recommendation**: Provide an Azure Blob storage account as an additional storage for HDInsight on-demand linked service.
-### SSL error when ADF linked service using HDInsight ESP cluster
+### SSL error when linked service using HDInsight ESP cluster
- **Message**: `Failed to connect to HDInsight cluster: 'ERROR [HY000] [Microsoft][DriverSupport] (1100) SSL certificate verification failed because the certificate is missing or incorrect.`
When you observe that the activity is running much longer than your normal runs
**Error message:** `The payload including configurations on activity/dataSet/linked service is too large. Please check if you have settings with very large value and try to reduce its size.`
-**Cause:** The payload for each activity run includes the activity configuration, the associated dataset(s), and linked service(s) configurations if any, and a small portion of system properties generated per activity type. The limit of such payload size is 896 KB as mentioned in [Data Factory limits](../azure-resource-manager/management/azure-subscription-service-limits.md#data-factory-limits) section.
+**Cause:** The payload for each activity run includes the activity configuration, the associated dataset(s), and linked service(s) configurations if any, and a small portion of system properties generated per activity type. The limit of such payload size is 896 KB as mentioned in the Azure limits documentation for [Data Factory](../azure-resource-manager/management/azure-subscription-service-limits.md#data-factory-limits) and [Azure Synapse Analytics](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-synapse-analytics-limits).
**Recommendation:** You likely hit this limit because you pass one or more large parameter values from either an upstream activity output or an external source, especially if you pass actual data across activities in the control flow. Check whether you can reduce the size of the large parameter values, or tune your pipeline logic to avoid passing such values across activities and instead handle the data inside the activity.
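One hedged way to apply that recommendation (the pipeline, parameter, and activity names below are hypothetical) is to pass a pointer to the data, such as a staging folder path, to a child pipeline rather than the data itself:

```json
{
    "name": "ProcessStagedData",
    "type": "ExecutePipeline",
    "typeProperties": {
        "pipeline": { "referenceName": "ChildPipeline", "type": "PipelineReference" },
        "waitOnCompletion": true,
        "parameters": {
            "inputFolderPath": "@pipeline().parameters.stagingFolderPath"
        }
    }
}
```

The child pipeline then reads the staged files from that folder itself, so only a short string crosses the activity boundary instead of the full dataset.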
data-factory Data Factory Tutorials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-tutorials.md
description: A list of tutorials demonstrating Azure Data Factory concepts
+ Last updated 03/16/2021
data-factory Data Factory Ux Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-ux-troubleshoot-guide.md
Title: Troubleshoot Azure Data Factory UX
description: Learn how to troubleshoot Azure Data Factory UX issues. + Last updated 06/01/2021
data-factory Data Flow Aggregate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-aggregate.md
+ Last updated 09/14/2020
data-factory Data Flow Alter Row https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-alter-row.md
+ Last updated 05/06/2020
data-factory Data Flow Conditional Split https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-conditional-split.md
+ Last updated 05/21/2020
data-factory Data Flow Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-create.md
description: How to create an Azure Data Factory mapping data flow
+ Last updated 07/05/2021
data-factory Data Flow Derived Column https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-derived-column.md
description: Learn how to transform data at scale in Azure Data Factory with the
+ Last updated 09/14/2020
data-factory Data Flow Exists https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-exists.md
+ Last updated 05/07/2020
data-factory Data Flow Expression Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-expression-functions.md
description: Learn about expression functions in mapping data flow.
+ Last updated 07/04/2021
Last updated 07/04/2021
## Expression functions
-In Data Factory, use the expression language of the mapping data flow feature to configure data transformations.
+In Data Factory and Synapse pipelines, use the expression language of the mapping data flow feature to configure data transformations.
___
### <code>abs</code>
<code><b>abs(<i>&lt;value1&gt;</i> : number) => number</b></code><br/><br/>
data-factory Data Flow Filter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-filter.md
+ Last updated 05/26/2020
data-factory Data Flow Flatten https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-flatten.md
ms.review: daperlov + Last updated 03/09/2020
data-factory Data Flow Join https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-join.md
+ Last updated 05/15/2020
data-factory Data Flow Lookup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-lookup.md
+ Last updated 02/19/2021
data-factory Data Flow New Branch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-new-branch.md
description: Replicating data streams in mapping data flow with multiple branche
+ Last updated 04/16/2021
data-factory