Updates from: 05/14/2021 03:05:06
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Id Token Hint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/id-token-hint.md
The following technical profile validates the token and extracts the claims. Cha
<Metadata> <!-- Replace with your endpoint location --> <Item Key="METADATA">https://your-app.azurewebsites.net/.well-known/openid-configuration</Item>
- <Item Key="IdTokenAudience">your_optional_audience</Item> -->
+ <Item Key="IdTokenAudience">your_optional_audience</Item>
<!-- <Item Key="issuer">your_optional_token_issuer_override</Item> --> </Metadata> <OutputClaims>
active-directory-b2c Partner Dynamics 365 Fraud Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-dynamics-365-fraud-protection.md
Previously updated : 02/10/2021 Last updated : 5/12/2021 # Tutorial: Configure Microsoft Dynamics 365 Fraud Protection with Azure Active Directory B2C
-In this sample tutorial, we provide guidance on how to integrate [Microsoft Dynamics 365 Fraud Protection](/dynamics365/fraud-protection/overview) (DFP) with the Azure Active Directory (AD) B2C.
+In this sample tutorial, learn how to integrate [Microsoft Dynamics 365 Fraud Protection](https://docs.microsoft.com/dynamics365/fraud-protection/overview) (DFP) with Azure Active Directory (AD) B2C.
-Microsoft DFP provides clients with the capability to assess if the risk of attempts to create new accounts and attempts to login to client's ecosystem are fraudulent. Microsoft DFP assessment can be used by the customer to block or challenge suspicious attempts to create new fake accounts or to compromise existing accounts. Account protection includes artificial intelligence empowered device fingerprinting, APIs for real-time risk assessment, rule and list experience to optimize risk strategy as per client's business needs, and a scorecard to monitor fraud protection effectiveness and trends in client's ecosystem.
+Microsoft DFP provides organizations with the capability to assess the risk of attempts to create fraudulent accounts and log-ins. Microsoft DFP assessment can be used by the customer to block or challenge suspicious attempts to create new fake accounts or to compromise existing accounts.
-In this sample, we'll be integrating the account protection features of Microsoft DFP with an Azure AD B2C user flow. The service will externally fingerprint every sign-in or sign up attempt and watch for any past or present suspicious behavior. Azure AD B2C invokes a decision endpoint from Microsoft DFP, which returns a result based on all past and present behavior from the identified user, and also the custom rules specified within the Microsoft DFP service. Azure AD B2C makes an approval decision based on this result and passes the same back to Microsoft DFP.
+This sample demonstrates how to incorporate the Microsoft DFP device fingerprinting and account creation and sign-in assessment API endpoints into an Azure AD B2C custom policy.
## Prerequisites
To get started, you'll need:
Microsoft DFP integration includes the following components: -- **Azure AD B2C tenant**: Authenticates the user and acts as a client of Microsoft DFP. Hosts a fingerprinting script collecting identification and diagnostic data of every user that executes a target policy. Later blocks or challenges sign-in or sign-up attempts if Microsoft DFP finds them suspicious.
+- **Azure AD B2C tenant**: Authenticates the user and acts as a client of Microsoft DFP. Hosts a fingerprinting script collecting identification and diagnostic data of every user that executes a target policy. Later blocks or challenges sign-in or sign-up attempts based on the rule evaluation result returned by Microsoft DFP.
-- **Custom app service**: A web application that serves two purposes.-
- - Serves HTML pages to be used as Identity Experience Framework's UI. Responsible for embedding the Microsoft Dynamics 365 fingerprinting script.
-
- - An API controller with RESTful endpoints that connects Microsoft DFP to Azure AD B2C. Handle's data processing, structure, and adheres to the security requirements of both.
+- **Custom UI templates**: Used to customize the HTML content of the pages rendered by Azure AD B2C. These pages include the JavaScript snippet required for Microsoft DFP fingerprinting.
- **Microsoft DFP fingerprinting service**: Dynamically embedded script, which logs device telemetry and self-asserted user details to create a uniquely identifiable fingerprint for the user to be used later in the decision-making process. -- **Microsoft DFP API endpoints**: Provides the decision result and accepts a final status reflecting the operation undertaken by the client application. Azure AD B2C doesn't communicate with the endpoints directly because of varying security and API payload requirements, instead uses the app service as an intermediate.
+- **Microsoft DFP API endpoints**: Provides the decision result and accepts a final status reflecting the operation undertaken by the client application. Azure AD B2C communicates directly with the Microsoft DFP endpoints using REST API connectors. API authentication occurs via a client_credentials grant to the Azure AD tenant in which Microsoft DFP is licensed and installed to obtain a bearer token.
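
To make the authentication model concrete, here's a minimal sketch of the client_credentials token request that the REST API connector performs on your behalf. The tenant ID, client ID, and client secret are placeholders (they correspond to the app registration and policy keys described later in this article), and the scope matches the `{Settings:DfpApiAuthScope}` value; the custom policy handles this exchange declaratively, so the code is only illustrative.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class DfpTokenRequestSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: the Azure AD tenant where Microsoft DFP is licensed, plus the client app
        // registration created for the DFP API (see the policy key steps later in this article).
        String tenantId = "<DFP-TENANT-ID>";
        String clientId = "<DFP-CLIENT-ID>";
        String clientSecret = "<DFP-CLIENT-SECRET>";
        // Matches {Settings:DfpApiAuthScope}; use api.dfp.dynamics-int.com for the sandbox environment.
        String scope = "https://api.dfp.dynamics.com/.default";

        String form = "grant_type=client_credentials"
                + "&client_id=" + URLEncoder.encode(clientId, StandardCharsets.UTF_8)
                + "&client_secret=" + URLEncoder.encode(clientSecret, StandardCharsets.UTF_8)
                + "&scope=" + URLEncoder.encode(scope, StandardCharsets.UTF_8);

        HttpRequest tokenRequest = HttpRequest.newBuilder()
                .uri(URI.create("https://login.microsoftonline.com/" + tenantId + "/oauth2/v2.0/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        // The JSON response carries the bearer token in its "access_token" field.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(tokenRequest, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```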
The following architecture diagram shows the implementation.
|Step | Description |
|:--| :--|
| 1. | The user arrives at a login page. Users select sign-up to create a new account and enter information into the page. Azure AD B2C collects user attributes.
-| 2. | Azure AD B2C calls the middle layer API and passes on the user attributes.
-| 3. | Middle layer API collects user attributes and transforms it into a format that Microsoft DFP API could consume. Then after sends it to Microsoft DFP API.
-| 4. | After Microsoft DFP API consumes the information and processes it, it returns the result to the middle layer API.
-| 5. | The middle layer API processes the information and sends back relevant information to Azure AD B2C.
-| 6. | Azure AD B2C receives information back from the middle layer API. If it shows a Failure response, an error message is displayed to the user. If it shows a Success response, the user is authenticated and written into the directory.
+| 2. | Azure AD B2C calls the Microsoft DFP API and passes on the user attributes.
+| 3. | After Microsoft DFP API consumes the information and processes it, it returns the result to Azure AD B2C.
+| 4. | Azure AD B2C receives information back from the Microsoft DFP API. If it shows a Failure response, an error message is displayed to the user. If it shows a Success response, the user is authenticated and written into the directory.
## Set up the solution
The following architecture diagram shows the implementation.
[Set up your Azure AD tenant](/dynamics365/fraud-protection/integrate-real-time-api) to use Microsoft DFP.
-## Deploy to the web application
+## Set up your custom domain
-### Implement Microsoft DFP service fingerprinting
+In a production environment, you must use a [custom domain for Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-domain?pivots=b2c-custom-policy) and for the [Microsoft DFP fingerprinting service](https://docs.microsoft.com/dynamics365/fraud-protection/device-fingerprinting#set-up-dns). The domain for both services should be in the same root DNS zone to prevent browser privacy settings from blocking cross-domain cookies; this isn't necessary in a non-production environment.
-[Microsoft DFP device fingerprinting](/dynamics365/fraud-protection/device-fingerprinting) is a requirement for Microsoft DFP account protection.
+Following is an example:
->[!NOTE]
->In addition to Azure AD B2C UI pages, customer may also implement the fingerprinting service inside app code for more comprehensive device profiling. Fingerprinting service in app code is not included in this sample.
+| Environment | Service | Domain |
+|:--|:--|:--|
+| Development | Azure AD B2C | contoso-dev.b2clogin.com |
+| Development | Microsoft DFP Fingerprinting | fpt.dfp.microsoft-int.com |
+| UAT | Azure AD B2C | contoso-uat.b2clogin.com |
+| UAT | Microsoft DFP Fingerprinting | fpt.dfp.microsoft.com |
+| Production | Azure AD B2C | login.contoso.com |
+| Production | Microsoft DFP Fingerprinting | fpt.login.contoso.com |
-### Deploy the Azure AD B2C API code
+## Deploy the UI templates
-Deploy the [provided API code](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Dynamics-Fraud-Protection/API) to an Azure service. The code can be [published from Visual Studio](/visualstudio/deployment/quickstart-deploy-to-azure).
+1. Deploy the provided [Azure AD B2C UI templates](https://github.com/azure-ad-b2c/partner-integrations/blob/adstoffe/remove-middle-layer-api/samples/Dynamics-Fraud-Protection/ui-templates) to a public-facing internet hosting service such as Azure Blob Storage.
-Set-up CORS, add **Allowed Origin** `https://{your_tenant_name}.b2clogin.com`
+2. Replace the value `https://<YOUR-UI-BASE-URL>/` with the root URL for your deployment location.
->[!NOTE]
->You'll later need the URL of the deployed service to configure Azure AD with the required settings.
+ >[!NOTE]
+ >You'll later need the base URL to configure the Azure AD B2C policies.
-See [App service documentation](../app-service/app-service-web-tutorial-rest-api.md) to learn more.
+3. In the `ui-templates/js/dfp.js` file, replace `<YOUR-DFP-INSTANCE-ID>` with your Microsoft DFP instance ID.
-### Add context-dependent configuration settings
+4. Ensure CORS is enabled for your Azure AD B2C domain name `https://{your_tenant_name}.b2clogin.com` or `your custom domain`.
-Configure the application settings in the [App service in Azure](../app-service/configure-common.md#configure-app-settings). This allows settings to be securely configured without checking them into a repository. The Rest API needs the following settings provided:
+See [UI customization documentation](https://docs.microsoft.com/azure/active-directory-b2c/customize-ui-with-html?pivots=b2c-custom-policy) to learn more.
-| Application settings | Source | Notes |
-| :-- | :| :--|
-| FraudProtectionSettings:InstanceId | Microsoft DFP Configuration | |
-| FraudProtectionSettings:DeviceFingerprintingCustomerId | Your Microsoft device fingerprinting customer ID | |
-| FraudProtectionSettings:ApiBaseUrl | Your Base URL from Microsoft DFP Portal | Remove '-int' to call the production API instead|
-| FraudProtectionSettings:TokenProviderConfig:Resource | Your Base URL - `https://api.dfp.dynamics-int.com` | Remove '-int' to call the production API instead|
-| FraudProtectionSettings:TokenProviderConfig:ClientId |Your Fraud Protection merchant Azure AD client app ID | |
-| FraudProtectionSettings:TokenProviderConfig:Authority | https://login.microsoftonline.com/<directory_ID> | Your Fraud Protection merchant Azure AD tenant authority |
-| FraudProtectionSettings:TokenProviderConfig:CertificateThumbprint* | The thumbprint of the certificate to use to authenticate against your merchant Azure AD client app |
-| FraudProtectionSettings:TokenProviderConfig:ClientSecret* | The secret for your merchant Azure AD client app | Recommended to use a secrets manager |
+## Azure AD B2C configuration
-*Only set 1 of the 2 marked parameters depending on if you authenticate with a certificate or a secret such as a password.
+### Add policy keys for your Microsoft DFP client app ID and secret
-## Azure AD B2C configuration
+1. In the Azure AD tenant where Microsoft DFP is set up, create an [Azure AD application and grant admin consent](https://docs.microsoft.com/dynamics365/fraud-protection/integrate-real-time-api#create-azure-active-directory-applications).
+2. Create a secret value for this application registration and note the application's client ID and client secret value.
+3. Save the client ID and client secret values as [policy keys in your Azure AD B2C tenant](https://docs.microsoft.com/azure/active-directory-b2c/policy-keys-overview).
+
+ >[!NOTE]
+ >You'll later need the policy keys to configure your Azure AD B2C policies.
### Replace the configuration values
In the provided [custom policies](https://github.com/azure-ad-b2c/partner-integr
| Placeholder | Replace with | Notes |
| :-- | :-- | :--|
-|{your_tenant_name} | Your tenant short name | "yourtenant" from yourtenant.onmicrosoft.com |
-|{your_tenantId} | Tenant ID of your Azure AD B2C tenant | 01234567-89ab-cdef-0123-456789abcdef |
-| {your_tenant_IdentityExperienceFramework_appid} | App ID of the IdentityExperienceFramework app configured in your Azure AD B2C tenant | 01234567-89ab-cdef-0123-456789abcdef |
-| {your_tenant_ ProxyIdentityExperienceFramework _appid} | App ID of the ProxyIdentityExperienceFramework app configured in your Azure AD B2C tenant | 01234567-89ab-cdef-0123-456789abcdef |
-| {your_tenant_extensions_appid} | App ID of your tenant's storage application | 01234567-89ab-cdef-0123-456789abcdef |
-| {your_tenant_extensions_app_objectid} | Object ID of your tenant's storage application | 01234567-89ab-cdef-0123-456789abcdef |
-| {your_app_insights_instrumentation_key} | Instrumentation key of your app insights instance* | 01234567-89ab-cdef-0123-456789abcdef |
-| {your_ui_base_url} | Endpoint in your app service from where your UI files are served | `https://yourapp.azurewebsites.net/B2CUI/GetUIPage` |
-| {your_app_service_url} | URL of your app service | `https://yourapp.azurewebsites.net` |
-| {your-facebook-app-id} | App ID of the facebook app you configured for federation with Azure AD B2C | 000000000000000 |
-| {your-facebook-app-secret} | Name of the policy key you've saved facebook's app secret as | B2C_1A_FacebookAppSecret |
-
-*App insights can be in a different tenant. This step is optional. Remove the corresponding TechnicalProfiles and OrechestrationSteps if not needed.
-
-### Call Microsoft DFP label API
-
-Customers need to [implement label API](/dynamics365/fraud-protection/integrate-ap-api). See [Microsoft DFP API](https://apidocs.microsoft.com/services/dynamics365fraudprotection#/AccountProtection/v1.0) to learn more.
-
-`URI: < API Endpoint >/v1.0/label/account/create/<userId>`
-
-The value of the userID needs to be the same as the one in the corresponding Azure AD B2C configuration value (ObjectID).
+|{Settings:Production} | Whether to deploy the policies in production mode | `true` or `false` |
+|{Settings:Tenant} | Your tenant short name | `your-tenant` - from your-tenant.onmicrosoft.com |
+| {Settings:DeploymentMode} | Application Insights deployment mode to use | `Production` or `Development` |
+| {Settings:DeveloperMode} | Whether to deploy the policies in Application Insights developer mode | `true` or `false` |
+| {Settings:AppInsightsInstrumentationKey} | Instrumentation key of your Application Insights instance* | `01234567-89ab-cdef-0123-456789abcdef` |
+| {Settings:IdentityExperienceFrameworkAppId} | App ID of the IdentityExperienceFramework app configured in your Azure AD B2C tenant | `01234567-89ab-cdef-0123-456789abcdef`|
+| {Settings:ProxyIdentityExperienceFrameworkAppId} | App ID of the ProxyIdentityExperienceFramework app configured in your Azure AD B2C tenant | `01234567-89ab-cdef-0123-456789abcdef`|
+| {Settings:FacebookClientId} | App ID of the Facebook app you configured for federation with B2C | `000000000000000` |
+| {Settings:FacebookClientSecretKeyContainer} | Name of the policy key in which you saved Facebook's app secret | `B2C_1A_FacebookAppSecret` |
+| {Settings:ContentDefinitionBaseUri} | Endpoint in where you deployed the UI files | `https://<my-storage-account>.blob.core.windows.net/<my-storage-container>` |
+| {Settings:DfpApiBaseUrl} | The base path for your DFP API instance - found in the DFP portal | `https://tenantname-01234567-89ab-cdef-0123-456789abcdef.api.dfp.dynamics.com/v1.0/` |
+| {Settings:DfpApiAuthScope} | The client_credentials scope for the DFP API service | `https://api.dfp.dynamics-int.com/.default` or `https://api.dfp.dynamics.com/.default` |
+| {Settings:DfpTenantId} | The ID of the Azure AD tenant (not B2C) where DFP is licensed and installed | `01234567-89ab-cdef-0123-456789abcdef` or `contoso.onmicrosoft.com` |
+| {Settings:DfpAppClientIdKeyContainer} | Name of the policy key in which you saved the DFP client ID | `B2C_1A_DFPClientId` |
+| {Settings:DfpAppClientSecretKeyContainer} | Name of the policy key in which you saved the DFP client secret | `B2C_1A_DFPClientSecret` |
+
+*Application Insights can be set up in any Azure AD tenant/subscription. This value is optional but [recommended to assist with debugging](https://docs.microsoft.com/azure/active-directory-b2c/troubleshoot-with-application-insights).
>[!NOTE]
>Add a consent notification to the attribute collection page. Notify users that their telemetry and identity information will be recorded for account protection purposes.

## Configure the Azure AD B2C policy
-1. Go to the [Azure AD B2C policy](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Dynamics-Fraud-Protection/Policies) in the Policies folder.
+1. Go to the [Azure AD B2C policy](https://github.com/azure-ad-b2c/partner-integrations/tree/adstoffe/remove-middle-layer-api/samples/Dynamics-Fraud-Protection/policies) in the Policies folder.
2. Follow this [document](./tutorial-create-user-flows.md?pivots=b2c-custom-policy?tabs=applications#custom-policy-starter-pack) to download [LocalAccounts starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts)
For additional information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md) -- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory Howto Password Ban Bad On Premises Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-password-ban-bad-on-premises-deploy.md
The following requirements apply to the Azure AD Password Protection proxy servi
* If .NET 4.7.2 is not already installed, download and run the installer found at [The .NET Framework 4.7.2 offline installer for Windows](https://support.microsoft.com/topic/microsoft-net-framework-4-7-2-offline-installer-for-windows-05a72734-2127-a15d-50cf-daf56d5faec2).
* All machines that host the Azure AD Password Protection proxy service must be configured to grant domain controllers the ability to log on to the proxy service. This ability is controlled via the "Access this computer from the network" privilege assignment.
* All machines that host the Azure AD Password Protection proxy service must be configured to allow outbound TLS 1.2 HTTP traffic.
-* A *Global Administrator* or *Security Administrator* account to register the Azure AD Password Protection proxy service and forest with Azure AD.
+* A *Global Administrator* account is required to register the Azure AD Password Protection proxy service for the first time in a given tenant. Subsequent proxy and forest registrations with Azure AD may use an account with either *Global Administrator* or *Security Administrator* credentials.
* Network access must be enabled for the set of ports and URLs specified in the [Application Proxy environment setup procedures](../app-proxy/application-proxy-add-on-premises-application.md#prepare-your-on-premises-environment).

### Microsoft Azure AD Connect Agent Updater prerequisites
To install the Azure AD Password Protection proxy service, complete the followin
1. The proxy service is running on the machine, but doesn't have credentials to communicate with Azure AD. Register the Azure AD Password Protection proxy server with Azure AD using the `Register-AzureADPasswordProtectionProxy` cmdlet.
- This cmdlet requires either *Global Administrator* or *Security Administrator* credentials for your Azure tenant. This cmdlet must also be run using an account with local administrator privileges.
+ This cmdlet requires *Global Administrator* credentials the first time any proxy is registered for a given tenant. Subsequent proxy registrations in that tenant, whether for the same or different proxies, may use either *Global Administrator* or *Security Administrator* credentials.
After this command succeeds once, additional invocations will also succeed but are unnecessary.
active-directory Claims Challenge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/claims-challenge.md
Title: "Claims challenges and claims requests"
+ Title: Claims challenges, claims requests, and client capabilities
-description: Explanation of claims challenges and requests on the Microsoft identity platform.
+description: Explanation of claims challenges, claims requests, and client capabilities in the Microsoft identity platform.
Last updated 05/11/2021
-# Customer intent: As an application developer, I want to learn how to claims challenges returned from APIs protected by the Microsoft identity platform.
+# Customer intent: As an application developer, I want to learn how to handle claims challenges returned from APIs protected by the Microsoft identity platform.
-# Claims challenges and claims requests
+# Claims challenges, claims requests, and client capabilities
-A **claims challenge** is a response sent from an API indicating that an access token sent by a client application has insufficient claims. This can be because the token does not satisfy the conditional access policies set for that API, or the access token has been revoked.
+A *claims challenge* is a response sent from an API indicating that an access token sent by a client application has insufficient claims. This can be because the token does not satisfy the conditional access policies set for that API, or the access token has been revoked.
-A **claims request** is made by the client application to redirect the user back to the identity provider to retrieve a new token with claims that will satisfy the additional requirements that were not met.
+A *claims request* is made by the client application to redirect the user back to the identity provider to retrieve a new token with claims that will satisfy the additional requirements that were not met.
-Applications that use enhanced security features such as [Continuous Access Evaluation (CAE)](../conditional-access/concept-continuous-access-evaluation.md) and [Conditional Access authentication context](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/granular-conditional-access-for-sensitive-data-and-actions/ba-p/1751775) must be prepared to handle claims challenges.
+Applications that use enhanced security features like [Continuous Access Evaluation (CAE)](../conditional-access/concept-continuous-access-evaluation.md) and [Conditional Access authentication context](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/granular-conditional-access-for-sensitive-data-and-actions/ba-p/1751775) must be prepared to handle claims challenges.
-Your application will only receive claims challenges if it declares that it can handle them using **client capabilities**.
-
-To receive information about whether client applications can handle claims challenges, an API implementer must request **xms_cc** as an optional claim in its application manifest.
+Your application will receive claims challenges from popular services like [Microsoft Graph](/graph/overview) only if it declares its [client capabilities](#client-capabilities) in its calls to the service.
## Claims challenge header format
-The claims challenge is a directive in the www-authenticate header returned by an API when an access token is not authorized, and a new access token is required. The claims challenge comprises multiple parts: the HTTP status code of the response and the www-authenticate header, which itself has multiple parts and must contain a claims directive.
+The claims challenge is a directive as a `www-authenticate` header returned by an API when an [access token](access-tokens.md) presented to it isn't authorized, and a new access token with the right capabilities is required instead. The claims challenge comprises multiple parts: the HTTP status code of the response and the `www-authenticate` header, which itself has multiple parts and must contain a claims directive.
+
+Here's an example:
-``` https
+```https
HTTP 401; Unauthorized
www-authenticate =Bearer realm="", authorization_uri="https://login.microsoftonline.com/common/oauth2/authorize", error="insufficient_claims", claims="eyJhY2Nlc3NfdG9rZW4iOnsiYWNycyI6eyJlc3NlbnRpYWwiOnRydWUsInZhbHVlIjoiYzEifX19"
www-authenticate =Bearer realm="", authorization_uri="https://login.microsoftonl
| `error` | Required | Must be "insufficient_claims" when a claims challenge should be generated. |
| `claims` | Required when error is "insufficient_claims". | A quoted string containing a base 64 encoded [claims request](https://openid.net/specs/openid-connect-core-1_0.html#ClaimsParameter). The claims request should request claims for the "access_token" at the top level of the JSON object. The value (claims requested) will be context-dependent and specified later in this document. For size reasons, relying party applications SHOULD minify the JSON before base 64 encoding. The raw JSON of the example above is {"access_token":{"acrs":{"essential":true,"value":"c1"}}}. |
-The 401 response may contain more than one www-authenticate header. All above fields must be contained within the same www-authenticate header. The www-authenticate header with the claims challenge MAY contain other fields. Fields in the header are unordered. According to RFC 7235, each parameter name must occur only once per authentication scheme challenge.
+The **401** response may contain more than one `www-authenticate` header. All fields in the preceding table must be contained within the same `www-authenticate` header. The `www-authenticate` header that contains the claims challenge *can* contain other fields. Fields in the header are unordered. According to RFC 7235, each parameter name must occur only once per authentication scheme challenge.
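
As a rough illustration of consuming such a response, the following sketch extracts the `claims` directive from the example header above and base64-decodes it. The string handling is deliberately naive and for demonstration only; a production client would typically rely on its HTTP or authentication library to parse the challenge.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ClaimsChallengeSketch {
    public static void main(String[] args) {
        // Example www-authenticate header value from a 401 response (taken from the sample above).
        String header = "Bearer realm=\"\", authorization_uri=\"https://login.microsoftonline.com/common/oauth2/authorize\","
                + " error=\"insufficient_claims\","
                + " claims=\"eyJhY2Nlc3NfdG9rZW4iOnsiYWNycyI6eyJlc3NlbnRpYWwiOnRydWUsInZhbHVlIjoiYzEifX19\"";

        // Naive extraction of the quoted claims directive -- demonstration only.
        int start = header.indexOf("claims=\"") + "claims=\"".length();
        int end = header.indexOf('"', start);
        String encodedClaims = header.substring(start, end);

        // The directive is base64-encoded, minified JSON describing the claims request.
        String claimsJson = new String(Base64.getDecoder().decode(encodedClaims), StandardCharsets.UTF_8);
        System.out.println(claimsJson); // {"access_token":{"acrs":{"essential":true,"value":"c1"}}}
    }
}
```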
## Claims request
-When an application receives a claims challenge indicating that the prior access token is no longer considered valid, the application should clear the token from any local cache or user session. Then, it should redirect the signed-in user back to Azure AD to retrieve a new token using the [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md) with a **claims** parameter that will satisfy the additional requirements that were not met.
+When an application receives a claims challenge, it indicates that the prior access token is no longer considered valid. In this scenario, the application should clear the token from any local cache or user session. Then, it should redirect the signed-in user back to Azure Active Directory (Azure AD) to retrieve a new token by using the [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md) with a *claims* parameter that will satisfy the additional requirements that were not met.
-An example is provided below:
+Here's an example:
-``` https
+```https
GET https://login.microsoftonline.com/14c2f153-90a7-4689-9db7-9543bf084dad/oauth2/v2.0/authorize
?client_id=2810aca2-a927-4d26-8bca-5b32c1ef5ea9
&redirect_uri=https%3A%2F%2Fcontoso.com%3A44321%2Fsignin-oidc
To populate the claims parameter, the developer has to:
Upon completion of this flow, the application will receive an Access Token that has the additional claims that prove that the user satisfied the conditions required.
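
As a rough sketch of the redirect described above, the following builds an authorization request that carries the decoded claims JSON in the `claims` query parameter. The tenant ID, client ID, redirect URI, and scopes are placeholder values, and in practice an authentication library such as MSAL would normally append this parameter for you.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ClaimsRequestSketch {
    public static void main(String[] args) {
        // The minified claims-request JSON recovered from the claims challenge.
        String claimsJson = "{\"access_token\":{\"acrs\":{\"essential\":true,\"value\":\"c1\"}}}";

        // Placeholder values -- substitute your own tenant, app registration, and redirect URI.
        String tenantId = "<TENANT-ID>";
        String clientId = "<CLIENT-ID>";
        String redirectUri = "https://contoso.com:44321/signin-oidc";

        // The claims challenge is passed back to Azure AD as the "claims" parameter
        // on the OAuth 2.0 authorization code request.
        String authorizeUrl = "https://login.microsoftonline.com/" + tenantId + "/oauth2/v2.0/authorize"
                + "?client_id=" + clientId
                + "&response_type=code"
                + "&redirect_uri=" + URLEncoder.encode(redirectUri, StandardCharsets.UTF_8)
                + "&scope=" + URLEncoder.encode("openid profile offline_access", StandardCharsets.UTF_8)
                + "&claims=" + URLEncoder.encode(claimsJson, StandardCharsets.UTF_8);

        System.out.println(authorizeUrl);
    }
}
```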
-## Client Capabilities
+## Client capabilities
+
+Client capabilities help a resource provider like a web API detect whether the calling client application understands the claims challenge, so it can customize its response accordingly. This capability might be useful when not all API clients are capable of handling claims challenges, and some earlier versions still expect a different response.
-Your application will only receive claims challenges if it declares that it can handle them using **client capabilities**.
+Some popular applications like [Microsoft Graph](/graph/overview) send claims challenges only if the calling client app declares that it's capable of handling them by using *client capabilities*.
To avoid extra traffic or impacts to user experience, Azure AD does not assume that your app can handle claims challenges unless you explicitly opt in. An application will not receive claims challenges (and will not be able to use the related features such as CAE tokens) unless it declares it is ready to handle them with the "cp1" capability.
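
As a minimal sketch of opting in, the following assumes a recent MSAL for Java (MSAL4J) version in which the application builder exposes a `clientCapabilities` setting; if you use a different library or version, check its documentation for the equivalent option.

```java
import com.microsoft.aad.msal4j.PublicClientApplication;
import java.util.Collections;

public class ClientCapabilitySketch {
    public static void main(String[] args) throws Exception {
        // Declaring the "cp1" capability tells Azure AD that this client can handle
        // claims challenges, for example those produced by CAE-enabled APIs.
        PublicClientApplication app = PublicClientApplication.builder("<CLIENT-ID>")
                .authority("https://login.microsoftonline.com/<TENANT-ID>")
                .clientCapabilities(Collections.singleton("cp1"))
                .build();

        // Tokens requested through this client will carry the declared capability, so services
        // such as Microsoft Graph will return claims challenges when they need to.
        System.out.println("MSAL client configured with the cp1 capability");
    }
}
```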
active-directory Services Support Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md
Refer to the following list to configure managed identity for Azure Functions (i
| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
| | :-: | :-: | :-: | :-: |
| System assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
-| User assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
+| User assigned | ![Available][check] | Not available | Not available | Not available |
Refer to the following list to configure managed identity for Azure IoT Hub (in regions where available):
active-directory Insite Lms Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/insite-lms-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Insite LMS for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Insite LMS.
+
+writer: Zhchia
+ms.assetid: c4dbe83d-b5b4-4089-be89-b357e8d6f359
+ Last updated : 04/30/2021
+
+# Tutorial: Configure Insite LMS for automatic user provisioning
+
+This tutorial describes the steps you need to do in both Insite LMS and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Insite LMS](https://www.insite-it.net/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Insite LMS
+> * Remove users in Insite LMS when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Insite LMS
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* An [Insite LMS tenant](https://www.insite-it.net/).
+* A user account in Insite LMS with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Insite LMS](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Insite LMS to support provisioning with Azure AD
+
+1. Navigate to `https://portal.insitelms.net/<OrganizationName>`.
+1. Download and install the Desktop Client.
+1. Log in with your admin account and navigate to the **Users** module.
+1. Select the user `scim@insitelms.net` and select **Generate Access Token**. If you can't find the SCIM user, contact the support team.
+ 1. Choose **AzureAdScimProvisioning** and select **generate**.
+ 1. Copy the **AccessToken**.
+1. The **Tenant Url** is `https://web.insitelms.net/<OrganizationName>/api/scim`.
+
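
As an optional sanity check before configuring provisioning in Azure AD (Step 5 below), you can call the SCIM endpoint directly with the access token from the previous step. This sketch assumes the service exposes a standard SCIM `/Users` resource and is for illustration only; replace the placeholders before running it.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class InsiteLmsScimCheck {
    public static void main(String[] args) throws Exception {
        // Placeholders: your organization name and the access token generated in the previous step.
        String tenantUrl = "https://web.insitelms.net/<OrganizationName>/api/scim";
        String accessToken = "<ACCESS-TOKEN>";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(tenantUrl + "/Users"))
                .header("Authorization", "Bearer " + accessToken)
                .header("Accept", "application/scim+json")
                .GET()
                .build();

        // A 200 response with a SCIM ListResponse indicates the Tenant URL and token are valid.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```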
+## Step 3. Add Insite LMS from the Azure AD application gallery
+
+Add Insite LMS from the Azure AD application gallery to start managing provisioning to Insite LMS. If you have previously set up Insite LMS for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to Insite LMS, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++
+## Step 5. Configure automatic user provisioning to Insite LMS
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Insite LMS app based on user and group assignments in Azure AD.
+
+### To configure automatic user provisioning for Insite LMS in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **Insite LMS**.
+
+ ![The Insite LMS link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. In the **Admin Credentials** section, enter your Insite LMS **Tenant URL** and **Secret token** information. Select **Test Connection** to ensure that Azure AD can connect to Insite LMS. If the connection fails, ensure that your Insite LMS account has admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications. Select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Insite LMS**.
+
+1. Review the user attributes that are synchronized from Azure AD to Insite LMS in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Insite LMS for update operations. If you change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Insite LMS API supports filtering users based on that attribute. Select **Save** to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ ||||
+ |userName|String|&check;|
+ |emails[type eq "work"].value|String|&check;|
+ |active|Boolean|
+ |name.givenName|String|
+ |name.familyName|String|
+ |phoneNumbers[type eq "work"].value|String|
+
+1. To configure scoping filters, see the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Insite LMS, change **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users or groups that you want to provision to Insite LMS by selecting the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you're ready to provision, select **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to complete than subsequent cycles, which occur about every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+
+After you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users were provisioned successfully or unsuccessfully.
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. To learn more about quarantine states, see [Application provisioning status of quarantine](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for enterprise apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
automation Quickstart Create Automation Account Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/quickstart-create-automation-account-template.md
Azure Automation delivers a cloud-based automation and configuration service tha
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-automation%2Fazuredeploy.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.automation%2F101-automation%2Fazuredeploy.json)
## Prerequisites
After you complete these steps, you need to [configure diagnostic settings](auto
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-automation/).

### API versions
If you're new to Azure Automation and Azure Monitor, it's important that you und
1. Select the following image to sign in to Azure and open a template. The template creates an Azure Automation account, a Log Analytics workspace, and links the Automation account to the workspace.
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-automation%2Fazuredeploy.json)
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.automation%2F101-automation%2Fazuredeploy.json)
2. Enter the values.
If you're new to Azure Automation and Azure Monitor, it's important that you und
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the Azure portal, open the Automation account you just created.
+2. In the Azure portal, open the Automation account you just created.
3. From the left pane, select **Runbooks**. On the **Runbooks** page, you'll see three tutorial runbooks that were created with the Automation account.
azure-arc Tutorial Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-gitops-ci-cd.md
To avoid having to set an imagePullSecret for every Pod, consider adding the ima
| ENVIRONMENT_NAME | Dev |
| MANIFESTS_BRANCH | `master` |
| MANIFESTS_REPO | The Git connection string for your GitOps repo |
-| PAT | A [created PAT token](/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate#create-a-pat) with Read/Write source permissions. Save it to use later when creating the `stage` variable group. |
+| ORGANIZATION_NAME | Name of Azure DevOps organization |
+| PROJECT_NAME | Name of GitOps project in Azure DevOps |
+| REPO_URL | Full URL for GitOps repo |
| SRC_FOLDER | `azure-vote` |
| TARGET_CLUSTER | `arc-cicd-cluster` |
| TARGET_NAMESPACE | `dev` |
-> [!IMPORTANT]
-> Mark your PAT as a secret type. In your applications, consider linking secrets from an [Azure KeyVault](/azure/devops/pipelines/library/variable-groups#link-secrets-from-an-azure-key-vault).
->
### Stage environment variable group

1. Clone the **az-vote-app-dev** variable group.
To avoid having to set an imagePullSecret for every Pod, consider adding the ima
You're now ready to deploy to the `dev` and `stage` environments.
+## Give More Permissions to the Build Service
+The CD pipeline uses the security token of the running build to authenticate to the GitOps repository. More permissions are needed for the pipeline to create a new branch, push changes, and create pull requests.
+
+1. Go to `Project settings` from the Azure DevOps project main page.
+1. Select `Repositories`.
+1. Select `<GitOps Repo Name>`.
+1. Select `Security`.
+1. For the `<Project Name> Build Service (<Organization Name>)`, allow `Contribute`, `Contribute to pull requests`, and `Create branch`.
+
+For more information, see:
+- [Grant VC Permissions to the Build Service](https://docs.microsoft.com/azure/devops/pipelines/scripts/git-commands?view=azure-devops&tabs=yaml&preserve-view=true#version-control )
+- [Manage Build Service Account Permissions](https://docs.microsoft.com/azure/devops/pipelines/process/access-tokens?view=azure-devops&tabs=yaml&preserve-view=true#manage-build-service-account-permissions)
++
## Deploy the dev environment for the first time

With the CI and CD pipelines created, run the CI pipeline to deploy the app for the first time.
The CI pipeline:
* Verifies the Docker image has changed and the new image is pushed.

### CD pipeline
+During the initial CD pipeline run, you'll be asked to give the pipeline access to the GitOps repository. Select **View** when prompted that the pipeline needs permission to access a resource. Then, select **Permit** to grant permission to use the GitOps repository for the current and future runs of the pipeline.
+ The successful CI pipeline run triggers the CD pipeline to complete the deployment process. You'll deploy to each environment incrementally. > [!TIP]
azure-functions Functions Create Function Linux Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-function-linux-custom-image.md
In *host.json*, modify the `customHandler` section to configure the custom handl
To test the function locally, start the local Azure Functions runtime host in the root of the project folder:

::: zone pivot="programming-language-csharp"

```console
-func start --build
+func start
```
::: zone-end

::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python"
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-run-local.md
To run a Functions project, run the Functions host. The host enables triggers fo
# [C\#](#tab/csharp)

```
-func start --build
+func start
```

# [Java](#tab/java)
npm start
>[!NOTE]
-> Version 1.x of the Functions runtime requires the `host` command, as in the following example:
->
-> ```
-> func host start
-> ```
+> Version 1.x of the Functions runtime instead requires `func host start`.
`func start` supports the following options:
azure-monitor Alerts Unified Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-unified-log.md
For example, if your rule [**Aggregation granularity**](#aggregation-granularity
## State and resolving alerts
-Log alerts can either be stateless or stateful (currently in preview when using the API).
+Log alerts can either be stateless or stateful (currently in preview).
Stateless alerts fire each time the condition is met, even if fired previously. You can [mark the alert as closed](../alerts/alerts-managing-alert-states.md) once the alert instance is resolved. You can also mute actions to prevent them from triggering for a period after an alert rule fired. In Log Analytics Workspaces and Application Insights, it's called **Suppress Alerts**. In all other resource types, it's called **Mute Actions**.
-See this alert evaluation example:
+See this stateless alert evaluation example:
| Time | Log condition evaluation | Result |
| - | - | - |
See this alert evaluation example:
| 00:15 | TRUE | Alert fires and action groups called. New alert state ACTIVE. |
| 00:20 | FALSE | Alert doesn't fire. No actions called. Previous alert state remains ACTIVE. |
-Stateful alerts fire once per incident and resolve. When creating new or updating existing log alert rules, add the `autoMitigate` flag with value `true` of type `Boolean`, under the `properties` section. You can use this feature in these API versions: `2018-04-16` and `2020-05-01-preview`.
+Stateful alerts fire once per incident and resolve. This feature is currently in preview in the Azure public cloud. You can set this using **Automatically resolve alerts** in the alert details section.
## Location selection in log alerts
azure-monitor Itsmc Troubleshoot Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-troubleshoot-overview.md
The following sections identify common symptoms, possible causes, and resolution
**Cause**: There can be several reasons for this symptom:
-* Templates are not shown as a part of the action definition.
-* Incedents/Events are not created in ServiceNow.
+* Templates are not shown as a part of the action definition dropdown and an error message is shown: "Can't retrieve the template configuration, see the connector logs for more information."
+* Values are not shown in the dropdowns of the default fields as a part of the action definition and an error message is shown: "No values found for the following fields: <field names>."
+* Incidents/Events are not created in ServiceNow.
-**Resolution**: [Sync the connector](itsmc-resync-servicenow.md).
+**Resolution**:
+* [Sync the connector](itsmc-resync-servicenow.md).
+* Check the [dashboard](itsmc-dashboard.md) and review the errors in the section for connector status. Then review the [common errors and their resolutions](itsmc-dashboard-errors.md).
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/convert-classic-resource.md
The migration process is **permanent, and cannot be reversed**. Once you migrate
If you don't need to migrate an existing resource, and instead want to create a new workspace-based Application Insights resource, use the [workspace-based resource creation guide](create-workspace-resource.md).
-## Pre-requisites
+## Pre-requisites
- A Log Analytics workspace with the access control mode set to the **`use resource or workspace permissions`** setting.
Once your resource is migrated, you will see the corresponding workspace info in
Clicking the blue link text will take you to the associated Log Analytics workspace where you can take advantage of the new unified workspace query environment.
+> [!NOTE]
+> After migrating to a workspace-based Application Insights resource we recommend using the [workspace's daily cap](../logs/manage-cost-storage.md#manage-your-maximum-daily-data-volume) to limit ingestion and costs instead of the cap in Application Insights.
+
## Understanding log queries

We still provide full backwards compatibility for your Application Insights classic resource queries, workbooks, and log-based alerts within the Application Insights experience.
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-in-process-agent.md
The 3.0 agent supports Java 8 and above.
> Please review all the [configuration options](./java-standalone-config.md) carefully, > as the json structure has completely changed, in addition to the file name itself which went all lowercase.
-Download [applicationinsights-agent-3.0.3.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.0.3/applicationinsights-agent-3.0.3.jar)
+> [!WARNING]
+> **If you are upgrading from 3.0.x**
+>
+> The operation names and request telemetry names are now prefixed by the http method (`GET`, `POST`, etc.).
+> This can affect custom dashboards or alerts if they relied on the previous unprefixed values.
+> See the [3.1.0 release notes](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.1.0)
+> for more details.
+
+Download [applicationinsights-agent-3.1.0.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.1.0/applicationinsights-agent-3.1.0.jar)
**2. Point the JVM to the agent**
-Add `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` to your application's JVM args
+Add `-javaagent:path/to/applicationinsights-agent-3.1.0.jar` to your application's JVM args
Typical JVM args include `-Xmx512m` and `-XX:+UseG1GC`. So if you know where to add these, then you already know where to add this.
Point the agent to your Application Insights resource, either by setting an envi
APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=... ```
-Or by creating a configuration file named `applicationinsights.json`, and placing it in the same directory as `applicationinsights-agent-3.0.3.jar`, with the following content:
+Or by creating a configuration file named `applicationinsights.json`, and placing it in the same directory as `applicationinsights-agent-3.1.0.jar`, with the following content:
```json {
to enable this preview feature and auto-collect the telemetry emitted by these A
* [Communication Chat](/java/api/overview/azure/communication-chat-readme) 1.0.0+
* [Communication Common](/java/api/overview/azure/communication-common-readme) 1.0.0+
* [Communication Identity](/java/api/overview/azure/communication-identity-readme) 1.0.0+
-* [Communication Sms](/java/api/overview/azure/communication-sms-readme) 1.0.0+
+* [Communication SMS](/java/api/overview/azure/communication-sms-readme) 1.0.0+
* [Cosmos DB](/java/api/overview/azure/cosmos-readme) 4.13.0+
* [Event Grid](/java/api/overview/azure/messaging-eventgrid-readme) 4.0.0+
* [Event Hubs](/java/api/overview/azure/messaging-eventhubs-readme) 5.6.0+
RequestTelemetry requestTelemetry = ThreadContext.getRequestTelemetryContext().g
requestTelemetry.setName("myname"); ```
-### Get the request telemetry id and the operation id using the 2.x SDK
+### Get the request telemetry Id and the operation Id using the 2.x SDK
> [!NOTE] > This feature is only in 3.0.3 and later
Add `applicationinsights-web-2.6.2.jar` to your application (all 2.x versions ar
</dependency> ```
-and get the request telemetry id and the operation id in your code:
+and get the request telemetry Id and the operation Id in your code:
```java import com.microsoft.applicationinsights.web.internal.ThreadContext;
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-arguments.md
Configure [App Services](../../app-service/configure-language-java.md#set-java-r
## Spring Boot
-Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.1.0.jar` somewhere before `-jar`, for example:
```
-java -javaagent:path/to/applicationinsights-agent-3.0.3.jar -jar <myapp.jar>
+java -javaagent:path/to/applicationinsights-agent-3.1.0.jar -jar <myapp.jar>
``` ## Spring Boot via Docker entry point
-If you are using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.0.3.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you are using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.1.0.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.0.3.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.1.0.jar", "-jar", "<myapp.jar>"]
```
-If you are using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` somewhere before `-jar`, for example:
+If you are using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.1.0.jar` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.0.3.jar -jar <myapp.jar>
+ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.1.0.jar -jar <myapp.jar>
``` ## Tomcat 8 (Linux)
ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.0.3.jar -jar <mya
If you installed Tomcat via `apt-get` or `yum`, then you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.0.3.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.1.0.jar"
``` ### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.0.3.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), then you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.0.3.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.1.0.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.1.0.jar` to `CATALINA_OPTS`.
## Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and a
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.0.3.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.1.0.jar
``` Quotes are not necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.0.3.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.1.0.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.1.0.jar` to `CATALINA_OPTS`.
### Running Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.1.0.jar` to the `Java Options` under the `Java` tab.
## JBoss EAP 7 ### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.1.0.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="<b>-javaagent:path/to/applicationinsights-agent-3.0.3.jar</b> -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="<b>-javaagent:path/to/applicationinsights-agent-3.1.0.jar</b> -Xms1303m -Xmx1303m ..."
... ``` ### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.1.0.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` to the existing `jv
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.0.3.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.1.0.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`
``` --exec
--javaagent:path/to/applicationinsights-agent-3.0.3.jar
+-javaagent:path/to/applicationinsights-agent-3.1.0.jar
``` ## Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.1.0.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.0.3.jar>
+ -javaagent:path/to/applicationinsights-agent-3.1.0.jar>
</jvm-options> ... </java-config>
Java and Process Management > Process definition > Java Virtual Machine
``` In "Generic JVM arguments" add the following: ```--javaagent:path/to/applicationinsights-agent-3.0.3.jar
+-javaagent:path/to/applicationinsights-agent-3.1.0.jar
``` After that, save and restart the application server.
After that, save and restart the application server.
Create a new file `jvm.options` in the server directory (for example `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.0.3.jar
+-javaagent:path/to/applicationinsights-agent-3.1.0.jar
```
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
You will find more details and additional configuration options below.
## Configuration file path
-By default, Application Insights Java 3.0 expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.0.3.jar`.
+By default, Application Insights Java 3.0 expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.1.0.jar`.
You can specify your own configuration file path using either * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable, or * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.0.3.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.1.0.jar` is located.
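For example, a minimal sketch of pointing the agent at a custom configuration file through the Java system property named above (the file path shown is only an illustration):
```
java -Dapplicationinsights.configuration.file=/path/to/applicationinsights.json -javaagent:path/to/applicationinsights-agent-3.1.0.jar -jar <myapp.jar>
```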
## Connection string
and the console, corresponding to this configuration:
`level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.0.3.jar` is located.
+`applicationinsights-agent-3.1.0.jar` is located.
`maxSizeMb` is the max size of the log file before it rolls over.
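As a rough sketch, a self-diagnostics block using these settings could look like the following; the nesting under `selfDiagnostics` and `file`, and the `destination` key, are assumptions based on the 3.x configuration layout, and the values are illustrative:
```json
{
  "selfDiagnostics": {
    "destination": "file+console",
    "level": "DEBUG",
    "file": {
      "path": "applicationinsights.log",
      "maxSizeMb": 5
    }
  }
}
```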
azure-monitor Java Standalone Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-troubleshoot.md
In this article, we cover some of the common issues that you might face while in
## Check the self-diagnostic log file
-By default, the Java 3.0 agent for Application Insights produces a log file named `applicationinsights.log` in the same directory that holds the `applicationinsights-agent-3.0.3.jar` file.
+By default, the Java 3.0 agent for Application Insights produces a log file named `applicationinsights.log` in the same directory that holds the `applicationinsights-agent-3.1.0.jar` file.
This log file is the first place to check for hints to any issues you might be experiencing.
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
using different connection strings or different role names yet.
## Operation names
+In the 2.x SDK, in some cases, the operation names contained the full path, e.g.
++ Operation names in 3.0 have changed to generally provide a better aggregated view
-in the Application Insights Portal U/X.
+in the Application Insights Portal U/X, e.g.
However, for some applications, you may still prefer the aggregated view in the U/X that was provided by the previous operation names, in which case you can use the [telemetry processors](./java-standalone-telemetry-processors.md) (preview) feature in 3.0 to replicate the previous behavior.
-### Prefix the operation name with the http method (`GET`, `POST`, etc.)
-
-In the 2.x SDK, the operation names were prefixed by the http method (`GET`, `POST`, etc.), e.g
--
-Starting in 3.0.3, you can bring back this 2.x behavior using
-
-```json
-{
- "preview": {
- "httpMethodInOperationName": true
- }
-}
-```
-
-### Set the operation name to the full path
-
-Also, in the 2.x SDK, in some cases, the operation names contained the full path, e.g.
--
-The snippet below configures 4 telemetry processors that combine to replicate the previous behavior.
+The snippet below configures 3 telemetry processors that combine to replicate the previous behavior.
The telemetry processors perform the following actions (in order):
-1. The first telemetry processor is a span processor (has type `span`),
- which means it applies to `requests` and `dependencies`.
-
- It will match any span that has an attribute named `http.url`.
-
- Then it will update the span name with the `http.url` attribute value.
-
- This would be the end of it, except that `http.url` looks something like `http://host:port/path`,
- and it's likely that you only want the `/path` part.
-
-2. The second telemetry processor is also a span processor.
+1. The first telemetry processor is an attribute processor (has type `attribute`),
+ which means it applies to all telemetry which has attributes
+ (currently `requests` and `dependencies`, but soon also `traces`).
- It will match any span that has an attribute named `http.url`
- (in other words, any span that the first processor matched).
+ It will match any telemetry that has attributes named `http.method` and `http.url`.
- Then it will extract the path portion of the span name into an attribute named `tempName`.
+ Then it will extract the path portion of the `http.url` attribute into a new attribute named `tempName`.
-3. The third telemetry processor is also a span processor.
+2. The second telemetry processor is a span processor (has type `span`),
+ which means it applies to `requests` and `dependencies`.
It will match any span that has an attribute named `tempPath`. Then it will update the span name from the attribute `tempPath`.
-4. The last telemetry processor is an attribute processor (has type `attribute`),
- which means it applies to all telemetry which has attributes
- (currently `requests`, `dependencies` and `traces`).
+3. The last telemetry processor is an attribute processor, same type as the first telemetry processor.
It will match any telemetry that has an attribute named `tempPath`.
The telemetry processors perform the following actions (in order):
"preview": { "processors": [ {
- "type": "span",
- "include": {
- "matchType": "strict",
- "attributes": [
- { "key": "http.url" }
- ]
- },
- "name": {
- "fromAttributes": [ "http.url" ]
- }
- },
- {
- "type": "span",
+ "type": "attribute",
"include": { "matchType": "strict", "attributes": [
+ { "key": "http.method" },
{ "key": "http.url" } ] },
- "name": {
- "toAttributes": {
- "rules": [ "https?://[^/]+(?<tempPath>/[^?]*)" ]
+ "actions": [
+ {
+ "key": "http.url",
+ "pattern": "https?://[^/]+(?<tempPath>/[^?]*)",
+ "action": "extract"
}
- }
+ ]
}, { "type": "span",
The telemetry processors perform the following actions (in order):
] }, "name": {
- "fromAttributes": [ "tempPath" ]
+ "fromAttributes": [ "http.method", "tempPath" ],
+ "separator": " "
} }, {
azure-percept Azure Percept Audio Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/azure-percept-audio-datasheet.md
Last updated 02/16/2021
|Product Specification |Value | |--|--|
-|Performance |180 Degrees Far-field at 4m, 63dB |
+|Performance |180 Degrees Far-field at 4 m, 63 dB |
|Target Industries |Hospitality <br> Healthcare <br> Smart Buildings <br> Automotive <br> Retail <br> Manufacturing | |Hero Scenarios |In-room Virtual Concierge <br> Vehicle Voice Assistant and Command/Control <br> Point of Sale Services and Quality Control <br> Warehouse Task Tracking| |Included in Box |1x Azure Percept Audio SoM <br> 1x Developer (Interposer) Board <br> 1x FPC Cable <br> 1x USB 2.0 Type A to Micro USB Cable <br> 1x Mechanical Plate|
-|External Dimensions |90mm x170mm x 25mm |
+|External Dimensions |90 mm x 170 mm x 25 mm |
|Product Weight |0.42 Kg | |Management Control Plane |Azure Device Update (ADU) | |Supported Software and Services |Customizable Keywords and Commands <br> Azure Speech SDK <br> [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) <br> [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) | |Audio Codec |XMOS XUF208 Codec | |Sensors, Visual Indicators, and Components |4x MEM Sensing Microsystems Microphones (MSM261D3526Z1CM) <br> 2x Buttons <br> USB Hub <br> DAC <br> 3x LEDs <br> LED Driver | |Security Crypto-Controller |ST-Microelectronics STM32L462CE |
-|Ports |1x USB 2.0 Type Micro B <br> 3.5mm Audio Out |
-|Certification |FCC <br> IC <br> RoHS <br> REACH <br> UL |
-|Operating Temperature |0 to 35 degrees C |
-|Non-Operating Temperature |-40 to 85 degrees C |
+|Ports |1x USB 2.0 Type Micro B <br> 3.5 mm Audio Out |
+|Certification |FCC <br> IC <br> RoHS <br> REACH <br> UL <br> CE <br> ACMA <br> FCC <br> IC <br> VCCI <br> NRTL <br> CB |
+|Operating Temperature |0 degrees to 35 degrees C |
+|Non-Operating Temperature |-40 degrees to 85 degrees C |
|Relative Humidity |10% to 95% |
azure-percept Azure Percept Dk Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/azure-percept-dk-datasheet.md
Last updated 02/16/2021
|Operating Temperature |0 degrees to 35 degrees C | |Non-Operating Temperature |-40 degrees to 85 degrees C | |Relative Humidity |10% to 95% |
-|Certification  |FCC <br> IC <br> RoHS <br> REACH <br> UL |
+|Certification  |FCC <br> IC <br> RoHS <br> REACH <br> UL <br> CE <br> ACMA <br> FCC <br> IC <br> NCC <br> VCCI + MIC <br> NRTL <br> CB |
|Power Supply |19 VDC at 3.42A (65 W) |
azure-percept Azure Percept Vision Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/azure-percept-vision-datasheet.md
Specifications listed below are for the Azure Percept Vision device, included in
|Power   |3.5 W | |Ports |1x USB 3.0 Type C <br> 2x MIPI 4 Lane (up to 1.5 Gbps per lane) | |Control Interfaces |2x I2C <br> 2x SPI <br> 6x PWM (GPIOs: 2x clock, 2x frame sync, 2x unused) <br> 2x spare GPIO |
-|Certification |FCC <br> IC <br> RoHS <br> REACH <br> UL |
+|Certification |FCC <br> IC <br> RoHS <br> REACH <br> UL <br> CE <br> ACMA <br> FCC <br> IC <br> NCC <br> VCCI + MIC <br> NRTL <br> CB |
|Operating Temperature    |0 degrees to 27 degrees C (Azure Percept Vision SoM assembly with housing) <br> -10 degrees to 70 degrees C (Vision SoM chip) | |Touch Temperature |<= 48 degrees C | |Relative Humidity   |8% to 90% |
azure-resource-manager Convert To Template Spec https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/convert-to-template-spec.md
To see if you have any templates to convert, view the [template gallery in the p
To simplify converting templates in the template gallery, use a PowerShell script from the Azure Quickstart Templates repo. When you run the script, you can either create a new template spec for each template or download a template that creates the template spec. The script doesn't delete the template from the template gallery. 1. Copy the [migration script](https://github.com/Azure/azure-quickstart-templates/blob/master/201-templatespec-migrate-create/Migrate-GalleryItems.ps1). Save a local copy with the name *Migrate-GalleryItems.ps1*.
-1. To create new template specs, provide values for the `-ResourceGroupName` and `-Location` parameters.
+1. To create new template specs, provide values for the `-ResourceGroupName` and `-Location` parameters.
Set `ItemsToExport` to `MyGalleryItems` to export your templates. Set it to `AllGalleryItems` to export all templates you have access to.
To simplify converting templates in the template gallery, use a PowerShell scrip
.\Migrate-GalleryItems.ps1 -ItemsToExport MyGalleryItems -ExportToFile ```
- To learn how to deploy the template that creates the template spec, see [Quickstart: Create and deploy template spec (Preview)](quickstart-create-template-specs.md).
+ To learn how to deploy the template that creates the template spec, see [Quickstart: Create and deploy template spec](quickstart-create-template-specs.md).
For more information about the script and its parameters, see [Create TemplateSpecs from Template Gallery Templates](https://github.com/Azure/azure-quickstart-templates/tree/master/201-templatespec-migrate-create).
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-cli.md
To avoid conflicts with concurrent deployments and to ensure unique entries in t
> [!NOTE] > Currently, Azure CLI doesn't support creating template specs by providing Bicep files. However you can create a Bicep file with the [Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs) resource to deploy a template spec. Here is an [example](https://github.com/Azure/azure-docs-json-samples/blob/master/create-template-spec-using-template/azuredeploy.bicep).
-Instead of deploying a local or remote template, you can create a [template spec](template-specs.md). The template spec is a resource in your Azure subscription that contains an ARM template. It makes it easy to securely share the template with users in your organization. You use Azure role-based access control (Azure RBAC) to grant access to the template spec. This feature is currently in preview.
+Instead of deploying a local or remote template, you can create a [template spec](template-specs.md). The template spec is a resource in your Azure subscription that contains an ARM template. It makes it easy to securely share the template with users in your organization. You use Azure role-based access control (Azure RBAC) to grant access to the template spec.
The following examples show how to create and deploy a template spec.
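For instance, a template spec could be created with Azure CLI along these lines before deploying it as shown below (the resource names, version, and file path are placeholders):
```azurecli
az ts create \
  --name storageSpec \
  --version "1.0" \
  --resource-group templateSpecRG \
  --location westus2 \
  --template-file ./mainTemplate.json
```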
az deployment group create \
--template-spec $id ```
-For more information, see [Azure Resource Manager template specs (Preview)](template-specs.md).
+For more information, see [Azure Resource Manager template specs](template-specs.md).
## Preview changes
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-powershell.md
Title: Deploy resources with PowerShell and template description: Use Azure Resource Manager and Azure PowerShell to deploy resources to Azure. The resources are defined in a Resource Manager template or a Bicep file. Previously updated : 03/25/2021 Last updated : 03/25/2021
For more information, see [Use relative path for linked templates](./linked-temp
> [!NOTE] > Currently, Azure PowerShell doesn't support creating template specs by providing Bicep files. However you can create a Bicep file with the [Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs) resource to deploy a template spec. Here is an [example](https://github.com/Azure/azure-docs-json-samples/blob/master/create-template-spec-using-template/azuredeploy.bicep).
-Instead of deploying a local or remote template, you can create a [template spec](template-specs.md). The template spec is a resource in your Azure subscription that contains an ARM template. It makes it easy to securely share the template with users in your organization. You use Azure role-based access control (Azure RBAC) to grant access to the template spec. This feature is currently in preview.
+Instead of deploying a local or remote template, you can create a [template spec](template-specs.md). The template spec is a resource in your Azure subscription that contains an ARM template. It makes it easy to securely share the template with users in your organization. You use Azure role-based access control (Azure RBAC) to grant access to the template spec.
The following examples show how to create and deploy a template spec.
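For instance, a template spec could be created with Azure PowerShell along these lines before deploying it as shown below (the resource names, version, and file path are placeholders):
```azurepowershell
New-AzTemplateSpec `
  -Name storageSpec `
  -Version "1.0" `
  -ResourceGroupName templateSpecRG `
  -Location westus2 `
  -TemplateFile ./mainTemplate.json
```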
New-AzResourceGroupDeployment `
-TemplateSpecId $id ```
-For more information, see [Azure Resource Manager template specs (Preview)](template-specs.md).
+For more information, see [Azure Resource Manager template specs](template-specs.md).
## Preview changes
azure-resource-manager Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/frequently-asked-questions.md
This article answers frequently asked questions about Azure Resource Manager tem
To learn about the new template language, [sign up for notifications](https://aka.ms/armLangUpdates).
- To learn about template specs, see [Azure Resource Manager template specs (Preview)](template-specs.md).
+ To learn about template specs, see [Azure Resource Manager template specs](template-specs.md).
## Creating and testing templates
This article answers frequently asked questions about Azure Resource Manager tem
## Template Specs
-* **How can I get started with the preview release of Template Specs?**
-
- Install the latest version of PowerShell or Azure CLI. For Azure PowerShell, use [version 5.0.0 or later](/powershell/azure/install-az-ps). For Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
- * **How are template specs and Azure Blueprints related?** Azure Blueprints will use template specs in its implementation by replacing the `blueprint definition` resource with a `template spec` resource. We'll provide a migration path to convert the blueprint definition into a template spec, but the blueprint definition APIs will still be supported. There are no changes to the `blueprint assignment` resource. Blueprints will remain a user-experience to compose a governed environment in Azure.
This article answers frequently asked questions about Azure Resource Manager tem
* **Can I include a script in my template to do tasks that aren't possible in a template?**
- Yes, use [deployment scripts](deployment-script-template.md). You can include Azure PowerShell or Azure CLI scripts in your templates. The feature is in preview.
+ Yes, use [deployment scripts](deployment-script-template.md). You can include Azure PowerShell or Azure CLI scripts in your templates.
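As a hedged illustration of what a deployment script resource looks like in a template, the sketch below declares an inline Azure PowerShell script; the API version shown is an assumption, and depending on what the script does a managed identity may also be required:
```json
{
  "type": "Microsoft.Resources/deploymentScripts",
  "apiVersion": "2020-10-01",
  "name": "sayHello",
  "location": "[resourceGroup().location]",
  "kind": "AzurePowerShell",
  "properties": {
    "azPowerShellVersion": "5.0",
    "scriptContent": "Write-Output 'Hello from a deployment script'",
    "retentionInterval": "PT1H"
  }
}
```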
* **Can I still use custom script extensions and desired state configuration (DSC)?**
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-resource.md
Title: Template functions - resources description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about resources. Previously updated : 04/01/2021 Last updated : 05/13/2021 # Resource functions for ARM templates
You can use the response from pickZones to determine whether to provide null for
Returns an object representing a resource's runtime state.
+When referencing a resource that's deployed in the same Bicep file, use the resource's symbolic name directly to get its properties. For example:
+
+```bicep
+output storageEndpoint object = myStorageAccount.properties.primaryEndpoints
+```
+
+In the preceding example, *myStorageAccount* is the symbolic name of the storage account resource.
+
+For more information, see [Reference resources](./compare-template-syntax.md#reference-resources).
+ ### Parameters | Parameter | Required | Type | Description |
Typically, you use the **reference** function to return a particular value from
# [Bicep](#tab/bicep) ```bicep
-output BlobUri string = reference(resourceId('Microsoft.Storage/storageAccounts', storageAccountName)).primaryEndpoints.blob
-output FQDN string = reference(resourceId('Microsoft.Network/publicIPAddresses', ipAddressName)).dnsSettings.fqdn
+output BlobUri string = myStorageAccount.properties.primaryEndpoints.blob
+output FQDN string = myPublicIp.properties.dnsSettings.fqdn
```
+In the preceding example, *myStorageAccount* is the symbolic name of the storage account resource. *myPublicIp* is the symbolic name of the public IP address resource.
+ Use `'Full'` when you need resource values that aren't part of the properties schema. For example, to set key vault access policies, get the identity properties for a virtual machine.
When referencing a resource that is deployed in the same template, provide the n
"value": "[reference(parameters('storageAccountName'))]" ```
-# [Bicep](#tab/bicep)
-
-```bicep
-value: reference(storageAccountName)
-```
--- When referencing a resource that isn't deployed in the same template, provide the resource ID and `apiVersion`.
-# [JSON](#tab/json)
- ```json "value": "[reference(resourceId(parameters('storageResourceGroup'), 'Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2018-07-01')]" ```
When referencing a resource that isn't deployed in the same template, provide th
# [Bicep](#tab/bicep) ```bicep
-value: reference(resourceId(storageResourceGroup, 'Microsoft.Storage/storageAccounts', storageAccountName), '2018-07-01')]"
+value: myStorageAccount
+```
+
+In the preceding example, *myStorageAccount* is the symbolic name of the storage account resource.
+
+When referencing a resource that isn't deployed in the same template using Bicep, use the `existing` keyword. For example:
+
+```bicep
+resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' existing = {
+ name: storageAccountName
+}
+
+stg.id
```
To avoid ambiguity about which resource you're referencing, you can provide a fu
# [Bicep](#tab/bicep) ```bicep
-value: reference(resourceId('Microsoft.Network/publicIPAddresses', ipAddressName))
+value: myPublicIp
```
+In the preceding example, *myPublicIp* is the symbolic name of the public IP address resource.
+ When constructing a fully qualified reference to a resource, the order to combine segments from the type and name isn't simply a concatenation of the two. Instead, after the namespace, use a sequence of *type/name* pairs from least specific to most specific:
The following [example template](https://github.com/Azure/azure-docs-json-sample
} ```
-# [Bicep](#tab/bicep)
-
-```bicep
-param storageAccountName string
-
-resource myStorage 'Microsoft.Storage/storageAccounts@2016-12-01' = {
- name: storageAccountName
- location: resourceGroup().location
- sku: {
- name: 'Standard_LRS'
- }
- kind: 'Storage'
- tags: {}
- properties: {}
-}
-
-output referenceOutput object = reference(storageAccountName)
-output fullReferenceOutput object = reference(storageAccountName, '2016-12-01', 'Full')
-```
--- The preceding example returns the two objects. The properties object is in the following format: ```json
The full object is in the following format:
} ```
+# [Bicep](#tab/bicep)
+
+```bicep
+param storageAccountName string
+
+resource myStorageAccount 'Microsoft.Storage/storageAccounts@2016-12-01' = {
+ name: storageAccountName
+ location: resourceGroup().location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ tags: {}
+ properties: {}
+}
+
+output referenceOutput object = myStorageAccount
+```
+
+The preceding example returns an object that is the same as using `'Full'` for JSON:
+
+```json
+{
+ "apiVersion":"2016-12-01",
+ "location":"southcentralus",
+ "sku": {
+ "name":"Standard_LRS",
+ "tier":"Standard"
+ },
+ "tags":{},
+ "kind":"Storage",
+ "properties": {
+ "creationTime":"2017-10-09T18:55:40.5863736Z",
+ "primaryEndpoints": {
+ "blob":"https://examplestorage.blob.core.windows.net/",
+ "file":"https://examplestorage.file.core.windows.net/",
+ "queue":"https://examplestorage.queue.core.windows.net/",
+ "table":"https://examplestorage.table.core.windows.net/"
+ },
+ "primaryLocation":"southcentralus",
+ "provisioningState":"Succeeded",
+ "statusOfPrimary":"available",
+ "supportsHttpsTrafficOnly":false
+ },
+ "subscriptionId":"<subscription-id>",
+ "resourceGroupName":"functionexamplegroup",
+ "resourceId":"Microsoft.Storage/storageAccounts/examplestorage",
+ "referenceApiVersion":"2016-12-01",
+ "condition":true,
+ "isConditionTrue":true,
+ "isTemplateResource":false,
+ "isAction":false,
+ "provisioningOperation":"Read"
+}
+```
+++ The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/reference.json) references a storage account that isn't deployed in this template. The storage account already exists within the same subscription. # [JSON](#tab/json)
The preceding example returns an object in the following format:
Returns the unique identifier of a resource. You use this function when the resource name is ambiguous or not provisioned within the same template. The format of the returned identifier varies based on whether the deployment happens at the scope of a resource group, subscription, management group, or tenant.
+In Bicep, you can often use the `id` property instead of the `resourceId()` function. To get the `id` property, use the symbolic name of a new or existing resource. For example:
+
+```bicep
+myStorageAccount.id
+```
+
+In the preceding example, *myStorageAccount* is the symbolic name of the storage account resource.
+
+To get the resource ID for a resource that isn't deployed in the Bicep file, use the `existing` keyword.
+
+```bicep
+resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' existing = {
+ name: storageAccountName
+}
+
+stg.id
+```
+ ### Parameters | Parameter | Required | Type | Description |
Often, you need to use this function when using a storage account or virtual net
] } ```+ # [Bicep](#tab/bicep) ```bicep
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-identity.md
Title: Concepts - Identity and access description: Learn about the identity and access concepts of Azure VMware Solution Previously updated : 05/11/2021 Last updated : 05/13/2021 # Azure VMware Solution identity concepts
Use the *admin* account to access NSX-T Manager. It has full privileges and lets
Now that you've covered Azure VMware Solution access and identity concepts, you may want to learn about: -- [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md)
+- [How to enable Azure VMware Solution resource](deploy-azure-vmware-solution.md#step-1-register-the-microsoftavs-resource-provider)
- [Details of each privilege](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html) - [How Azure VMware Solution monitors and repairs private clouds](/azure/azure-vmware/concepts-private-clouds-clusters#host-monitoring-and-remediation)
azure-vmware Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-networking.md
Title: Concepts - Network interconnectivity description: Learn about key aspects and use cases of networking and interconnectivity in Azure VMware Solution. Previously updated : 03/11/2021 Last updated : 05/13/2021 # Azure VMware Solution networking and interconnectivity concepts
Now that you've covered Azure VMware Solution network and interconnectivity conc
- [Azure VMware Solution storage concepts](concepts-storage.md) - [Azure VMware Solution identity concepts](concepts-identity.md)-- [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md)
+- [How to enable Azure VMware Solution resource](deploy-azure-vmware-solution.md#step-1-register-the-microsoftavs-resource-provider)
<!-- LINKS - external --> [enable Global Reach]: ../expressroute/expressroute-howto-set-global-reach.md
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-private-clouds-clusters.md
Title: Concepts - Private clouds and clusters description: Learn about the key capabilities of Azure VMware Solution software-defined data centers and vSphere clusters. Previously updated : 04/27/2021 Last updated : 05/13/2021 # Azure VMware Solution private cloud and cluster concepts
Now that you've covered Azure VMware Solution private cloud concepts, you may wa
- [Azure VMware Solution networking and interconnectivity concepts](concepts-networking.md) - [Azure VMware Solution storage concepts](concepts-storage.md)-- [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md)
+- [How to enable Azure VMware Solution resource](deploy-azure-vmware-solution.md#step-1-register-the-microsoftavs-resource-provider)
<!-- LINKS - internal --> [concepts-networking]: ./concepts-networking.md
azure-vmware Deploy Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-azure-vmware-solution.md
Title: Deploy and configure Azure VMware Solution
description: Learn how to use the information gathered in the planning stage to deploy and configure the Azure VMware Solution private cloud. Previously updated : 04/23/2021 Last updated : 05/13/2021 # Deploy and configure Azure VMware Solution
In this article, you'll use the information from the [planning section](producti
>[!IMPORTANT] >It's important that you've gone through the [planning section](production-ready-deployment-steps.md) before continuing.
+The diagram shows the deployment workflow of Azure VMware Solution.
++ ## Step 1. Register the **Microsoft.AVS** resource provider [!INCLUDE [register-resource-provider-steps](includes/register-resource-provider-steps.md)]
azure-vmware Production Ready Deployment Steps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/production-ready-deployment-steps.md
Title: Plan the Azure VMware Solution deployment
description: This article outlines an Azure VMware Solution deployment workflow. The final result is an environment ready for virtual machine (VM) creation and migration. Previously updated : 04/27/2021 Last updated : 05/13/2021 # Plan the Azure VMware Solution deployment
The steps outlined give you a production-ready environment for creating virtual
## Request a host quota It's important to request a host quota early as you prepare to create your Azure VMware Solution resource. You can request a host quota now, so when the planning process is finished, you're ready to deploy the Azure VMware Solution private cloud. After the support team receives your request for a host quota, it takes up to five business days to confirm your request and allocate your hosts. If you have an existing Azure VMware Solution private cloud and want more hosts allocated, you complete the same process. For more information, see the following links, depending on the type of subscription you have:-- [EA customers](enable-azure-vmware-solution.md?tabs=azure-portal#request-host-quota-for-ea-customers)-- [CSP customers](enable-azure-vmware-solution.md?tabs=azure-portal#request-host-quota-for-csp-customers)
+- [EA customers](request-host-quota-azure-vmware-solution.md#request-host-quota-for-ea-customers)
+- [CSP customers](request-host-quota-azure-vmware-solution.md#request-host-quota-for-csp-customers)
## Identify the subscription Identify the subscription you plan to use to deploy Azure VMware Solution. You can either create a new subscription or reuse an existing one. >[!NOTE]
->The subscription must be associated with a Microsoft Enterprise Agreement or a Cloud Solution Provider Azure plan. For more information, see [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md).
+>The subscription must be associated with a Microsoft Enterprise Agreement or a Cloud Solution Provider Azure plan. For more information, see [How to enable Azure VMware Solution resource](deploy-azure-vmware-solution.md#step-1-register-the-microsoftavs-resource-provider).
## Identify the resource group
azure-vmware Request Host Quota Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/request-host-quota-azure-vmware-solution.md
+
+ Title: Request host quota for Azure VMware Solution
+description: Learn how to request host quota/capacity for Azure VMware Solution. You can also request more hosts in an existing Azure VMware Solution private cloud.
++ Last updated : 05/13/2021++
+# Request host quota for Azure VMware Solution
+
+In this how-to, you'll request host quota/capacity for Azure VMware Solution. You'll submit a support ticket to have your hosts allocated. If you have an existing Azure VMware Solution private cloud and want more hosts allocated, you'll follow the same process.
+
+>[!IMPORTANT]
+>It can take a few days to allocate the hosts, depending on the number requested. Request what you need for provisioning up front, so you don't have to request a quota increase as often.
+
+## Eligibility criteria
+
+You'll need an Azure account in an Azure subscription. The Azure subscription must adhere to one of the following criteria:
+
+- A subscription under an [Azure Enterprise Agreement (EA)](../cost-management-billing/manage/ea-portal-agreements.md) with Microsoft.
+- A Cloud Solution Provider (CSP) managed subscription under an existing CSP Azure offers contract or an Azure plan.
+
+## Request host quota for EA customers
+
+1. In your Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information for the ticket:
+ - **Issue type:** Technical
+ - **Subscription:** Select your subscription
+ - **Service:** All services > Azure VMware Solution
+ - **Resource:** General question
+ - **Summary:** Need capacity
+ - **Problem type:** Capacity Management Issues
+ - **Problem subtype:** Customer Request for Additional Host Quota/Capacity
+
+1. In the **Description** of the support ticket, on the **Details** tab, provide:
+
+ - POC or Production
+ - Region Name
+ - Number of hosts
+ - Any other details
+
+ >[!NOTE]
+ >Azure VMware Solution recommends a minimum of three hosts to spin up your private cloud, plus N+1 hosts for redundancy.
+
+1. Select **Review + Create** to submit the request.
++
+## Request host quota for CSP customers
+
+CSPs must use [Microsoft Partner Center](https://partner.microsoft.com) to enable Azure VMware Solution for their customers. This article uses [CSP Azure plan](/partner-center/azure-plan-lp) as an example to illustrate the purchase procedure for partners.
+
+Access the Azure portal using the **Admin On Behalf Of** (AOBO) procedure from Partner Center.
+
+>[!IMPORTANT]
+>The Azure VMware Solution service does not provide multi-tenancy. Hosting partners requiring it are not supported.
+
+1. Configure the CSP Azure plan:
+
+ 1. In **Partner Center**, select **CSP** to access the **Customers** area.
+
+ :::image type="content" source="media/enable-azure-vmware-solution/csp-customers-screen.png" alt-text="Microsoft Partner Center customers area" lightbox="media/enable-azure-vmware-solution/csp-customers-screen.png":::
+
+ 1. Select your customer and then select **Add products**.
+
+ :::image type="content" source="media/enable-azure-vmware-solution/csp-partner-center.png" alt-text="Microsoft Partner Center" lightbox="media/enable-azure-vmware-solution/csp-partner-center.png":::
+
+ 1. Select **Azure plan** and then select **Add to cart**.
+
+ 1. Review and finish the general setup of the Azure plan subscription for your customer. For more information, see [Microsoft Partner Center documentation](/partner-center/azure-plan-manage).
+
+1. After you configure the Azure plan and you have the needed [Azure RBAC permissions](/partner-center/azure-plan-manage) in place for the subscription, you'll request the quota for your Azure plan subscription.
+
+ 1. Access Azure portal from [Microsoft Partner Center](https://partner.microsoft.com) using the **Admin On Behalf Of** (AOBO) procedure.
+
+ 1. Select **CSP** to access the **Customers** area.
+
+ 1. Expand customer details and select **Microsoft Azure Management Portal**.
+
+ 1. In Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information for the ticket:
+ - **Issue type:** Technical
+ - **Subscription:** Select your subscription
+ - **Service:** All services > Azure VMware Solution
+ - **Resource:** General question
+ - **Summary:** Need capacity
+ - **Problem type:** Capacity Management Issues
+ - **Problem subtype:** Customer Request for Additional Host Quota/Capacity
+
+ 1. In the **Description** of the support ticket, on the **Details** tab, provide:
+
+ - POC or Production
+ - Region Name
+ - Number of hosts
+ - Any other details
+ - Is intended to host multiple customers?
+
+ >[!NOTE]
+ >Azure VMware Solution recommends a minimum of three hosts to spin up your private cloud, plus N+1 hosts for redundancy.
+
+ 1. Select **Review + Create** to submit the request.
++
+## Next steps
+
+Before you can deploy Azure VMware Solution, you must first [register the resource provider](deploy-azure-vmware-solution.md#step-1-register-the-microsoftavs-resource-provider) with your subscription to enable the service.
+
azure-vmware Reserved Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/reserved-instance.md
Title: Reserved instances of Azure VMware Solution description: Learn how to buy a reserved instance for Azure VMware Solution. The reserved instance covers only the compute part of your usage and includes software licensing costs. Previously updated : 04/09/2021 Last updated : 05/13/2021 # Save costs with Azure VMware Solution
Reserved instances are available with some exceptions.
- **Clouds** - Reservations are available only in the regions listed on the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-vmware) page. -- **Insufficient quota** - A reservation scoped to a single/shared subscription must have hosts quota available in the subscription for the new reserved instance. You can [create quota increase request](enable-azure-vmware-solution.md) to resolve this issue.
+- **Insufficient quota** - A reservation scoped to a single/shared subscription must have hosts quota available in the subscription for the new reserved instance. You can [create quota increase request](request-host-quota-azure-vmware-solution.md) to resolve this issue.
- **Offer eligibility**- You'll need an [Azure Enterprise Agreement (EA)](../cost-management-billing/manage/ea-portal-agreements.md) with Microsoft.
azure-vmware Tutorial Create Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-create-private-cloud.md
Title: Tutorial - Deploy an Azure VMware Solution private cloud description: Learn how to create and deploy an Azure VMware Solution private cloud Previously updated : 04/23/2021 Last updated : 05/13/2021 # Tutorial: Deploy an Azure VMware Solution private cloud
In this tutorial, you'll learn how to:
- Appropriate administrative rights and permission to create a private cloud. You must be at minimum contributor level in the subscription. - Follow the information you gathered in the [planning](production-ready-deployment-steps.md) article to deploy Azure VMware Solution. - Ensure you have the appropriate networking configured as described in [Network planning checklist](tutorial-network-checklist.md).-- Hosts have been provisioned and the Microsoft.AVS resource provider has been registered as described in [Request hosts and enable the Microsoft.AVS resource provider](enable-azure-vmware-solution.md).
+- Hosts have been provisioned and the Microsoft.AVS [resource provider has been registered](deploy-azure-vmware-solution.md#step-1-register-the-microsoftavs-resource-provider).
## Create a private cloud
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/archive-tier-support.md
Title: Archive Tier support (Preview) description: Learn about Archive Tier Support for Azure Backup Previously updated : 02/18/2021 Last updated : 05/13/2021 # Archive Tier support (Preview)
Supported clients:
`$bckItm = $BackupItemList | Where-Object {$_.Name -match '<dbName>' -and $_.ContainerName -match '<vmName>'}`
+1. Add the date range for which you want to view the recovery points. For example, to view the recovery points created between 60 days ago and 30 days ago, use the following command:
+
+ ```azurepowershell
+ $startDate = (Get-Date).AddDays(-59)
+ $endDate = (Get-Date).AddDays(-30)
+
+ ```
+ >[!NOTE]
+ >The span of the start date and the end date should not be more than 30 days.
## Use PowerShell ### Check archivable recovery points ```azurepowershell
-$rp = Get-AzRecoveryServicesBackupRecoveryPoint -VaultId $vault.ID -Item $bckItm -IsReadyForMove $true -TargetTier VaultArchive
+$rp = Get-AzRecoveryServicesBackupRecoveryPoint -VaultId $vault.ID -Item $bckItm -StartDate $startdate.ToUniversalTime() -EndDate $enddate.ToUniversalTime() -IsReadyForMove $true -TargetTier VaultArchive
```
-This will list all the recovery points associated with a particular backup item that are ready to be moved to archive.
+This will list all recovery points associated with a particular backup item that are ready to be moved to archive (from the start date to the end date). You can also modify the start date and the end date.
### Check why a recovery point cannot be moved to archive
backup Selective Disk Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/selective-disk-backup-restore.md
Title: Selective disk backup and restore for Azure virtual machines description: In this article, learn about selective disk backup and restore using the Azure virtual machine backup solution. Previously updated : 05/03/2021 Last updated : 05/13/2021
This solution is useful particularly in the following scenarios:
1. If you have critical data to be backed up in only one disk, or a subset of the disks and donΓÇÖt want to back up the rest of the disks attached to a VM to minimize the backup storage costs. 2. If you have other backup solutions for part of your VM or data. For example, if you back up your databases or data using a different workload backup solution and you want to use Azure VM level backup for the rest of the data or disks to build an efficient and robust system using the best capabilities available.
-Using PowerShell or Azure CLI, you can configure selective disk backup of the Azure VM. Using a script, you can include or exclude data disks using their LUN numbers. Currently, the ability to configure selective disks backup through the Azure portal is limited to the **Backup OS Disk only** option. So you can configure backup of your Azure VM with OS disk, and exclude all the data disks attached to it.
+Using PowerShell or Azure CLI, you can configure selective disk backup of the Azure VM. Using a script, you can include or exclude data disks using their LUN numbers. Currently, the ability to configure selective disks backup through the Azure portal is limited to the **Backup OS Disk only** option. So you can configure backup of your Azure VM with OS disk, and exclude all the data disks attached to it.
>[!NOTE] > The OS disk is by default added to the VM backup and can't be excluded.
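For illustration, a hedged Azure CLI sketch of enabling backup for only specific data disks might look like the following; the resource names are placeholders, and the `--disk-list-setting`/`--diskslist` parameter names should be verified against the current CLI reference:
```azurecli
az backup protection enable-for-vm \
  --resource-group myResourceGroup \
  --vault-name myVault \
  --vm myVM \
  --policy-name DefaultPolicy \
  --disk-list-setting include \
  --diskslist 0 1
```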
The restore options to **Create new VM** and **Replace existing** aren't support
Currently, Azure VM backup doesn't support VMs with ultra-disks or shared disks attached to them. Selective disk backup can't be used in such cases to exclude the disk and back up the VM.
+If you use disk exclusion or selective disks while backing up an Azure VM and then _[stop protection and retain backup data](backup-azure-manage-vms.md#stop-protection-and-retain-backup-data)_, you'll need to set up the disk exclusion settings again when you resume backup for this resource.
+ ## Billing Azure virtual machine backup follows the existing pricing model, explained in detail [here](https://azure.microsoft.com/pricing/details/backup/).
batch Batch Aad Auth https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-aad-auth.md
Title: Authenticate Azure Batch services with Azure Active Directory description: Batch supports Azure AD for authentication from the Batch service. Learn how to authenticate in one of two ways. Previously updated : 10/20/2020- Last updated : 05/13/2021+ # Authenticate Batch service solutions with Active Directory
To authenticate with a service principal, you need to assign Azure RBAC to your
1. In the Azure portal, navigate to the Batch account used by your application. 1. In the **Settings** section of the Batch account, select **Access Control (IAM)**.
-1. Select the **Role assignments** tab.
-1. Select **Add role assignment**.
-1. From the **Role** drop-down, choose either the *Contributor* or *Reader* role for your application. For more information on these roles, see [Get started with Azure role-based access control in the Azure portal](../role-based-access-control/overview.md).
-1. In the **Select** field, enter the name of your application. Select your application from the list, and then select **Save**.
+1. Assign either the [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Reader](../role-based-access-control/built-in-roles.md#reader) role to the application. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
Your application should now appear in your access control settings with an Azure role assigned.
-![Assign an Azure role to your application](./media/batch-aad-auth/app-rbac-role.png)
- ### Assign a custom role A custom role grants granular permission to a user for submitting jobs, tasks, and more. This provides the ability to prevent users from performing operations that affect cost, such as creating pools or modifying nodes.
Use the service principal credentials to open a **BatchServiceClient** object. T
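As a minimal sketch of that flow, the snippet below uses the higher-level Batch .NET `BatchClient` (rather than the protocol-level `BatchServiceClient`) together with ADAL to authenticate with the service principal's credentials; the account URL, tenant, and application values are placeholders, not values from this article:

```csharp
// Requires the Microsoft.Azure.Batch and Microsoft.IdentityModel.Clients.ActiveDirectory packages.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public static class BatchAadSample
{
    private const string AuthorityUri = "https://login.microsoftonline.com/<tenant-id>";
    private const string BatchResourceUri = "https://batch.core.windows.net/";
    private const string BatchAccountUrl = "https://<account>.<region>.batch.azure.com";
    private const string ClientId = "<application-id>";
    private const string ClientKey = "<application-secret>";

    // Acquire an Azure AD token for the Batch service using the service principal's credentials.
    public static async Task<string> GetAuthenticationTokenAsync()
    {
        var authContext = new AuthenticationContext(AuthorityUri);
        var authResult = await authContext.AcquireTokenAsync(
            BatchResourceUri, new ClientCredential(ClientId, ClientKey));
        return authResult.AccessToken;
    }

    // Open a Batch client with a token provider so the token can be refreshed as needed.
    public static void PerformBatchOperations()
    {
        Func<Task<string>> tokenProvider = () => GetAuthenticationTokenAsync();
        using (var batchClient = BatchClient.Open(new BatchTokenCredentials(BatchAccountUrl, tokenProvider)))
        {
            // Subsequent operations use the Azure AD token, for example listing jobs.
            foreach (var job in batchClient.JobOperations.ListJobs())
            {
                Console.WriteLine(job.Id);
            }
        }
    }
}
```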
- Learn about [Application and service principal objects in Azure Active Directory](../active-directory/develop/app-objects-and-service-principals.md) and [how to create an Azure AD application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). - Learn about [authenticating Batch Management solutions with Active Directory](batch-aad-auth-management.md). - For a Python example of how to create a Batch client authenticated using an Azure AD token, see the [Deploying Azure Batch Custom Image with a Python Script](https://github.com/azurebigcompute/recipes/blob/master/Azure%20Batch/CustomImages/CustomImagePython.md) sample.-
batch Virtual File Mount https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/virtual-file-mount.md
new PoolAddParameter
} ```
-### Azure Blob file system
+### Azure Blob container
Another option is to use Azure Blob storage via [blobfuse](../storage/blobs/storage-how-to-mount-container-linux.md). Mounting a blob file system requires an `AccountKey` or `SasKey` for your storage account. For information on getting these keys, see [Manage storage account access keys](../storage/common/storage-account-keys-manage.md) or [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../storage/common/storage-sas-overview.md). For more information and tips on using blobfuse, see the blobfuse documentation.
new PoolAddParameter
AccountName = "StorageAccountName", ContainerName = "containerName", AccountKey = "StorageAccountKey",
- SasKey = "",
+ SasKey = "SasKey",
RelativeMountPath = "RelativeMountPath", BlobfuseOptions = "-o attr_timeout=240 -o entry_timeout=240 -o negative_timeout=120 " },
new PoolAddParameter
### Network File System
-Network File Systems (NFS) can be mounted to pool nodes, allowing traditional file systems to be accessed by Azure Batch. This could be a single NFS server deployed in the cloud, or an on-premises NFS server accessed over a virtual network. Alternatively, you can use the [Avere vFXT](../avere-vfxt/avere-vfxt-overview.md) distributed in-memory cache solution for data-intensive high-performance computing (HPC) tasks
+Network File Systems (NFS) can be mounted to pool nodes, allowing traditional file systems to be accessed by Azure Batch. This could be a single NFS server deployed in the cloud, or an on-premises NFS server accessed over a virtual network. NFS mounts support the [Avere vFXT](../avere-vfxt/avere-vfxt-overview.md) distributed in-memory cache solution for data-intensive high-performance computing (HPC) tasks, as well as other standard NFS-compliant interfaces such as [NFS for Azure Blob](https://docs.microsoft.com/azure/storage/blobs/network-file-system-protocol-support) and [NFS for Azure Files](https://docs.microsoft.com/azure/storage/files/storage-files-how-to-mount-nfs-shares).
```csharp new PoolAddParameter
new PoolAddParameter
{ Source = "source", RelativeMountPath = "RelativeMountPath",
- MountOptions = "options ver=1.0"
+ MountOptions = "options ver=3.0"
}, } }
new PoolAddParameter
### Common Internet File System
-Mounting [Common Internet File Systems (CIFS)](/windows/desktop/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) to pool nodes is another way to provide access to traditional file systems. CIFS is a file-sharing protocol that provides an open and cross-platform mechanism for requesting network server files and services. CIFS is based on the enhanced version of the [Server Message Block (SMB)](/windows-server/storage/file-server/file-server-smb-overview) protocol for internet and intranet file sharing, and can be used to mount external file systems on Windows nodes.
+Mounting [Common Internet File Systems (CIFS)](/windows/desktop/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) to pool nodes is another way to provide access to traditional file systems. CIFS is a file-sharing protocol that provides an open and cross-platform mechanism for requesting network server files and services. CIFS is based on the enhanced version of the [Server Message Block (SMB)](/windows-server/storage/file-server/file-server-smb-overview) protocol for internet and intranet file sharing.
```csharp new PoolAddParameter
If a mount configuration fails, the compute node in the pool will fail and the n
To get the log files for debugging, use [OutputFiles](batch-task-output-files.md) to upload the `*.log` files. The `*.log` files contain information about the file system mount at the `AZ_BATCH_NODE_MOUNTS_DIR` location. Mount log files have the format: `<type>-<mountDirOrDrive>.log` for each mount. For example, a `cifs` mount at a mount directory named `test` will have a mount log file named: `cifs-test.log`.
-## Supported SKUs
-
-| Publisher | Offer | SKU | Azure Files Share | Blobfuse | NFS mount | CIFS mount |
-||||||||
-| batch | rendering-centos73 | rendering | :heavy_check_mark: <br>Note: Compatible with CentOS 7.7</br>| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Canonical | UbuntuServer | 16.04-LTS, 18.04-LTS | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Credativ | Debian | 8| :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
-| Credativ | Debian | 9 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| microsoft-ads | linux-data-science-vm | linuxdsvm | :heavy_check_mark: <br>Note: Compatible with CentOS 7.4. </br> | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| microsoft-azure-batch | centos-container | 7.6 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| microsoft-azure-batch | centos-container-rdma | 7.4 | :heavy_check_mark: <br>Note: Supports A_8 or 9 storage</br> | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| microsoft-azure-batch | ubuntu-server-container | 16.04-LTS | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| microsoft-dsvm | linux-data-science-vm-ubuntu | linuxdsvmubuntu | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| OpenLogic | CentOS | 7.6 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| OpenLogic | CentOS-HPC | 7.4, 7.3, 7.1 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Oracle | Oracle-Linux | 7.6 | :x: | :x: | :x: | :x: |
-| Windows | WindowsServer | 2012, 2016, 2019 | :heavy_check_mark: | :x: | :x: | :x: |
+## Support Matrix
+
+Azure Batch supports the following virtual file system types for node agents produced for their respective publisher and offer.
+
+| OS Type | Azure Files Share | Azure Blob container | NFS mount | CIFS mount |
+||||||
+| Linux | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Windows | :heavy_check_mark: | :x: | :x: | :x: |
## Networking requirements When using virtual file mounts with [Azure Batch pools in a virtual network](batch-virtual-network.md), keep in mind the following requirements and ensure no required traffic is blocked. -- **Azure Files**:
+- **Azure File shares**:
- Requires TCP port 445 to be open for traffic to/from the "storage" service tag. For more information, see [Use an Azure file share with Windows](../storage/files/storage-how-to-use-files-windows.md#prerequisites).-- **Blobfuse**:
+- **Azure Blob containers**:
- Requires TCP port 443 to be open for traffic to/from the "storage" service tag. - VMs must have access to https://packages.microsoft.com in order to download the blobfuse and gpg packages. Depending on your configuration, you may also need access to other URLs to download additional packages. - **Network File System (NFS)**: - Requires access to port 2049 (by default; your configuration may have other requirements). - VMs must have access to the appropriate package manager in order to download the nfs-common (for Debian or Ubuntu) or nfs-utils (for CentOS) package. This URL may vary based on your OS version. Depending on your configuration, you may also need access to other URLs to download additional packages.
+ - Mounting Azure Blob storage or Azure Files via NFS may have additional networking requirements, such as requiring that compute nodes use the same designated virtual network subnet as the storage account.
- **Common Internet File System (CIFS)**: - Requires access to TCP port 445. - VMs must have access to the appropriate package manager(s) in order to download the cifs-utils package. This URL may vary based on your OS version.
cloud-services-extended-support Enable Key Vault Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/enable-key-vault-virtual-machine.md
+
+ Title: Apply the Key Vault VM Extension in Azure Cloud Services (extended support)
+description: Enable KeyVault VM Extension for Cloud Services (extended support)
+++++ Last updated : 05/12/2021+++
+# Apply the Key Vault VM extension to Azure Cloud Services (extended support)
+
+## What is the Key Vault VM Extension?
+The Key Vault VM extension provides automatic refresh of certificates stored in an Azure Key Vault. Specifically, the extension monitors a list of observed certificates stored in key vaults and, upon detecting a change, retrieves and installs the corresponding certificates. For more information, see [Key Vault VM extension for Windows](../virtual-machines/extensions/key-vault-windows.md).
+
+## What's new in the Key Vault VM Extension?
+The Key Vault VM extension is now supported on the Azure Cloud Services (extended support) platform to enable the management of certificates end to end. The extension can now pull certificates from a configured Key Vault at a pre-defined polling interval and install them for use by the service.
+
+## How can I leverage the Key Vault VM extension?
+The following tutorial shows you how to install the Key Vault VM extension on PaaSV1 services. You first create a bootstrap certificate in your vault, which is used to get a token from AAD that helps the extension authenticate with the vault. Once the authentication process is set up and the extension is installed, the latest certificates are pulled down automatically at regular polling intervals.
++
+## Prerequisites
+To use the Azure Key Vault VM extension, you need to have an Azure Active Directory tenant. For more information on setting up a new Active Directory tenant, see [Set up your AAD tenant](../active-directory/develop/quickstart-create-new-tenant.md).
+
+## Enable the Azure Key Vault VM extension
+
+1. [Generate a certificate](../key-vault/certificates/create-certificate-signing-request.md) in your vault and download the .cer for that certificate.
+
+2. In the [Azure portal](https://portal.azure.com) navigate to **App Registrations**.
+
+ :::image type="content" source="media/app-registration-0.jpg" alt-text="Shows selecting app registration in the portal.":::
+
+
+3. On the App Registrations page, select **New registration** in the top-left corner of the page.
+
+ :::image type="content" source="media/app-registration-1.png" alt-text="Shows the app registrations in the Azure portal.":::
+
+4. On the next page, fill out the form to complete the app creation.
+
+5. Upload the .cer of the certificate to the Azure Active Directory app portal.
+
+ - Optionally, you can also use the [Key Vault Event Grid notification feature](https://azure.microsoft.com/updates/azure-key-vault-event-grid-integration-is-now-available/) to upload the certificate.
+
+6. Grant the Azure Active Directory app secret list/get permissions in Key Vault:
+ - If you are using RBAC preview, search for the name of the AAD app you created and assign it to the Key Vault Secrets User (preview) role.
+ - If you are using vault access policies, then assign **Secret-Get** permissions to the AAD app you created. For more information, see [Assign access policies](../key-vault/general/assign-access-policy-portal.md).
+
+7. Install the first version of the certificates created in the first step and the Key Vault VM extension by using the ARM template shown below:
+
+ ```json
+ {
+ "osProfile":{
+ "secrets":[
+ {
+ "sourceVault":{
+ "id":"[parameters('sourceVaultValue')]"
+ },
+ "vaultCertificates":[
+ {
+ "certificateUrl":"[parameters('bootstrapCertificateUrlValue')]"
+ }
+ ]
+ }
+ ]
+ }
+ {
+ "name":"KVVMExtensionForPaaS",
+ "properties":{
+ "type":"KeyVaultForPaaS",
+ "autoUpgradeMinorVersion":true,
+ "typeHandlerVersion":"1.0",
+ "publisher":"Microsoft.Azure.KeyVault",
+ "settings":{
+ "secretsManagementSettings":{
+ "pollingIntervalInS":"3600",
+ "certificateStoreName":"My",
+ "certificateStoreLocation":"LocalMachine",
+ "linkOnRenewal":false,
+ "requireInitialSync":false,
+ "observedCertificates":"[parameters('keyVaultObservedCertificates')]"
+ },
+ "authenticationSettings":{
+ "clientId":"Your AAD app ID",
+ "clientCertificateSubjectName":"Your boot strap certificate subject name [Do not include the 'CN=' in the subject name]"
+ }
+ }
+ }
+ }
+ ```
+ You might need to specify the certificate store for the bootstrap certificate in ServiceDefinition.csdef, as shown below:
+
+ ```xml
+ <Certificates>
+ <Certificate name="bootstrapcert" storeLocation="LocalMachine" storeName="My" />
+ </Certificates>
+ ```
+
+## Next steps
+Further improve your deployment by [enabling monitoring in Cloud Services (extended support)](enable-alerts.md).
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 4/30/2021 Last updated : 5/12/2021 # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## May 2021 Guest OS
+
+>[!NOTE]
+>The May Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the May Guest OS. This list is subject to change.
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 21-05 | [5003171] | Latest Cumulative Update(LCU) | 6.31 | 5/11/2021 |
+| Rel 21-05 | [4580325] | Flash update | 3.97, 4.90, 5.55, 6.31 | Oct 13, 2020 |
+| Rel 21-05 | [5003165] | IE Cumulative Updates | 2.110, 3.97, 4.90 | 5/11/2021 |
+| Rel 21-05 | [5003197] | Latest Cumulative Update(LCU) | 5.55 | 5/11/2021 |
+| Rel 21-05 | [4578952] | .NET Framework 3.5 Security and Quality Rollup  | 2.110 | Oct 13, 2020 |
+| Rel 21-05 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup  | 2.110 | Oct 13, 2020 |
+| Rel 21-05 | [4578953] | .NET Framework 3.5 Security and Quality Rollup  | 4.90 | Oct 13, 2020 |
+| Rel 21-05 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup  | 4.90 | Oct 13, 2020 |
+| Rel 21-05 | [4578950] | .NET Framework 3.5 Security and Quality Rollup  | 3.97 | Oct 13, 2020 |
+| Rel 21-05 | [4578954] | .NET Framework 4.5.2 Security and Quality Rollup  | 3.97 | Oct 13, 2020 |
+| Rel 21-05 | [4601060] | .NET Framework 3.5 and 4.7.2 Cumulative Update  | 6.31 | Feb 9, 2021 |
+| Rel 21-05 | [5003233] | Monthly Rollup  | 2.110 | May 11, 2021 |
+| Rel 21-05 | [5003208] | Monthly Rollup  | 3.97 | May 11, 2021 |
+| Rel 21-05 | [5003209] | Monthly Rollup  | 4.90 | May 11, 2021 |
+| Rel 21-05 | [5001401] | Servicing Stack update  | 3.97 | Apr 13, 2021 |
+| Rel 21-05 | [5001403] | Servicing Stack update  | 4.90 | Apr 13, 2021 |
+| Rel 21-05 OOB | [4578013] | Standalone Security Update  | 4.90 | Aug 19, 2020 |
+| Rel 21-05 | [5001402] | Servicing Stack update  | 5.55 | Apr 13, 2021 |
+| Rel 21-05 | [4592510] | Servicing Stack update  | 2.110 | Dec 8, 2020 |
+| Rel 21-05 | [5003243] | Servicing Stack update  | 6.31 | May 11, 2021 |
+| Rel 21-05 | [4494175] | Microcode  | 5.55 | Sep 1, 2020 |
+| Rel 21-05 | [4494174] | Microcode  | 6.31 | Sep 1, 2020 |
+
+[5003171]: https://support.microsoft.com/kb/5003171
+[4580325]: https://support.microsoft.com/kb/4580325
+[5003165]: https://support.microsoft.com/kb/5003165
+[5003197]: https://support.microsoft.com/kb/5003197
+[4578952]: https://support.microsoft.com/kb/4578952
+[4578955]: https://support.microsoft.com/kb/4578955
+[4578953]: https://support.microsoft.com/kb/4578953
+[4578956]: https://support.microsoft.com/kb/4578956
+[4578950]: https://support.microsoft.com/kb/4578950
+[4578954]: https://support.microsoft.com/kb/4578954
+[4601060]: https://support.microsoft.com/kb/4601060
+[5003233]: https://support.microsoft.com/kb/5003233
+[5003208]: https://support.microsoft.com/kb/5003208
+[5003209]: https://support.microsoft.com/kb/5003209
+[5001401]: https://support.microsoft.com/kb/5001401
+[5001403]: https://support.microsoft.com/kb/5001403
+[4578013]: https://support.microsoft.com/kb/4578013
+[5001402]: https://support.microsoft.com/kb/5001402
+[4592510]: https://support.microsoft.com/kb/4592510
+[5003243]: https://support.microsoft.com/kb/5003243
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
++ ## April 2021 Guest OS
cognitive-services Luis How To Start New App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-how-to-start-new-app.md
Previously updated : 05/18/2020 Last updated : 05/13/2021
You can create a new app with the authoring APIs in a couple of ways:
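For instance, here is a minimal Python sketch of the programmatic route, assuming the v2.0 authoring API route (`luis/api/v2.0/apps/`) and an authoring key passed in the `Ocp-Apim-Subscription-Key` header; the endpoint and key values are placeholders.

```python
import requests

# Placeholders: your LUIS authoring endpoint and authoring key.
authoring_endpoint = "https://<your-authoring-resource>.cognitiveservices.azure.com"
authoring_key = "<YOUR-AUTHORING-KEY>"

# Create a new app named "Pizza Tutorial" with the en-us culture.
response = requests.post(
    f"{authoring_endpoint}/luis/api/v2.0/apps/",
    headers={"Ocp-Apim-Subscription-Key": authoring_key},
    json={"name": "Pizza Tutorial", "culture": "en-us", "initialVersionId": "0.1"},
)
response.raise_for_status()
app_id = response.json()  # the API returns the new app ID as a string
print(app_id)
```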
## Create new app in LUIS 1. On **My Apps** page, select your **Subscription**, and **Authoring resource** then **+ Create**. -
-> [!div class="mx-imgBorder"]
-> ![LUIS apps list](./media/create-app-in-portal.png)
+
+ :::image type="content" source="media/create-app-in-portal.png" alt-text="LUIS apps list" lightbox="media/create-app-in-portal.png":::
1. In the dialog box, enter the name of your application, such as `Pizza Tutorial`.
- ![Create new app dialog](./media/create-pizza-tutorial-app-in-portal.png)
+ :::image type="content" source="media/create-pizza-tutorial-app-in-portal.png" alt-text="Create new app dialog" lightbox="media/create-pizza-tutorial-app-in-portal.png":::
1. Choose your application culture, and then select **Done**. The description and prediction resource are optional at this point. You can set them at any time in the **Manage** section of the portal.
You can create a new app with the authoring APIs in a couple of ways:
After the app is created, the LUIS portal shows the **Intents** list with the `None` intent already created for you. You now have an empty app.
- > [!div class="mx-imgBorder"]
- > ![Intents list with None intent created with no example utterances.](media/pizza-tutorial-new-app-empty-intent-list.png)
+ :::image type="content" source="media/pizza-tutorial-new-app-empty-intent-list.png" alt-text="Intents list with a None intent and no example utterances" lightbox="media/pizza-tutorial-new-app-empty-intent-list.png":::
## Other actions available on My Apps page
cognitive-services Create Faq Bot For Multiple Domains https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Tutorials/create-faq-bot-for-multiple-domains.md
+
+ Title: "Tutorial: Create a FAQ bot for multiple domains with Azure Bot Service"
+description: In this tutorial, create a no code FAQ Bot for production use cases with QnA Maker and Azure Bot Service.
++++ Last updated : 03/31/2021++
+# Add multiple domains to your FAQ bot
+
+When building a FAQ bot, you may encounter use cases that require you to address queries across multiple domains. Let's say the marketing team at Microsoft wants to build a customer support bot that answers common user queries on multiple Surface products. For simplicity, we'll use one FAQ URL each for [Surface Pen](https://support.microsoft.com/surface/how-to-use-your-surface-pen-8a403519-cd1f-15b2-c9df-faa5aa924e98) and [Surface Earbuds](https://support.microsoft.com/surface/use-surface-earbuds-aea108c3-9344-0f11-e5f5-6fc9f57b21f9) to create the knowledge base.
+
+You can design your bot to handle queries across multiple domains with QnA Maker in the following ways:
+
+* Create a single knowledge base and tag QnA pairs into distinct domains with metadata.
+* Create a separate knowledge base for each domain.
+* Create a separate QnA Maker resource for each domain.
+
+## Create a single knowledge base and tag QnA pairs into distinct domains with metadata
+
+Content authors can use documents to extract QnA pairs or add custom QnA pairs to the knowledge base. To group these QnA pairs into specific domains or categories, add [metadata](../How-To/query-knowledge-base-with-metadata.md) to the QnA pairs.
+
+For the bot on Surface products, you can take the following steps to create a bot that answers queries for both product types:
+
+1. Add the following FAQ URLs for Surface products in STEP 3 of the Create KB page, and then select **Create your KB**. A new knowledge base is created after QnA pairs are extracted from these sources.
+
+ [Surface Pen FAQ](https://support.microsoft.com/surface/how-to-use-your-surface-pen-8a403519-cd1f-15b2-c9df-faa5aa924e98)<br>[Surface Earbuds FAQ](https://support.microsoft.com/surface/use-surface-earbuds-aea108c3-9344-0f11-e5f5-6fc9f57b21f9)
+
+2. After the KB is created, go to **View Options** and select **Show metadata**. This opens a metadata column for the QnA pairs.
+
+ >[!div class="mx-imgBorder"]
+ >[![Show Metadata]( ../media/qnamaker-tutorial-updates/show-metadata.png) ]( ../media/qnamaker-tutorial-updates/expand/show-metadata.png#lightbox)
++
+3. This knowledge base has QnA pairs for two products, and we want to distinguish them so that the search for responses can be limited to the QnA pairs for a given product. To do that, update the metadata field for the QnA pairs accordingly.
+
+ As shown in the example below, we've added metadata with **product** as the key and **surface_pen** or **surface_earbuds** as the value wherever applicable. You can extend this example to extract data on multiple products and add a different value for each product.
+
+ >[!div class="mx-imgBorder"]
+ >[![Metadata]( ../media/qnamaker-tutorial-updates/metadata-example-2.png) ]( ../media/qnamaker-tutorial-updates/expand/metadata-example-2.png#lightbox)
+
+4. Now, to restrict the system to search for the response across a particular product, you need to pass that product as a strict filter in the Generate Answer API.
+
+ Learn [how to use the GenerateAnswer API](../How-To/metadata-generateanswer-usage.md). The GenerateAnswer URL has the following format:
+ ```
+ https://{QnA-Maker-endpoint}/knowledgebases/{knowledge-base-ID}/generateAnswer
+ ```
+
+ In the JSON body for the API call, we've passed *surface_pen* as the value for the *product* metadata, so the system will only look for the response among the QnA pairs with the same metadata. A Python sketch of the full call appears after this list.
+
+ ```json
+ {
+ "question": "What is the price?",
+ "top": 6,
+ "isTest": true,
+ "scoreThreshold": 30,
+ "rankerType": "", // values: QuestionOnly
+ "strictFilters": [
+ {
+ "name": "product",
+ "value": "surface_pen"
+ }],
+ "userId": "sd53lsY="
+ }
+ ```
+
+ You can obtain metadata value based on user input in the following ways:
+
+ * Explicitly take the domain as input from the user through the bot client. For instance, as shown below, you can take the product category as input from the user when the conversation is initiated.
+
+ ![Take metadata input](../media/qnamaker-tutorial-updates/expand/explicit-metadata-input.png)
+
+ * Implicitly identify the domain based on bot context. For instance, if the previous question was about a particular Surface product, the client can save it as context. If the user doesn't specify the product in the next query, you can pass the bot context as metadata to the Generate Answer API.
+
+ ![Pass context]( ../media/qnamaker-tutorial-updates/expand/extract-metadata-from-context.png)
+
+ * Extract an entity from the user query to identify the domain to be used for the metadata filter. You can use other Cognitive Services such as Text Analytics and LUIS for entity extraction.
+
+ ![Extract metadata from query]( ../media/qnamaker-tutorial-updates/expand/extract-metadata-from-query.png)
+
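As referenced in step 4 above, here is a minimal Python sketch of the Generate Answer call with a strict filter. It assumes the URL format shown earlier and authentication with the runtime endpoint key from the **Publish** page; the host, knowledge base ID, and key are placeholders.

```python
import requests

# Placeholders: your QnA Maker runtime endpoint, knowledge base ID, and endpoint key.
endpoint = "https://<your-qna-maker-runtime-endpoint>"  # for example, https://your-resource.azurewebsites.net/qnamaker
kb_id = "<KNOWLEDGE-BASE-ID>"
endpoint_key = "<ENDPOINT-KEY>"

url = f"{endpoint}/knowledgebases/{kb_id}/generateAnswer"
headers = {
    "Authorization": f"EndpointKey {endpoint_key}",
    "Content-Type": "application/json",
}
body = {
    "question": "What is the price?",
    "top": 6,
    "isTest": True,
    "scoreThreshold": 30,
    "rankerType": "",  # values: QuestionOnly
    # Only search QnA pairs tagged with product = surface_pen.
    "strictFilters": [{"name": "product", "value": "surface_pen"}],
    "userId": "sd53lsY=",
}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
for answer in response.json().get("answers", []):
    print(answer["score"], answer["answer"])
```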
+### How large can our knowledge bases be?
+
+You can add up to 50,000 QnA pairs to a single knowledge base. If your data exceeds 50,000 QnA pairs, you should consider splitting the knowledge base.
+
+## Create a separate knowledge base for each domain
+
+You can also create a separate knowledge base for each domain and maintain the knowledge bases separately. All APIs require the user to pass the knowledge base ID to update the knowledge base or fetch an answer to the user's question.
+
+When the service receives the user's question, you need to pass the knowledge base ID in the Generate Answer endpoint shown above to fetch a response from the relevant knowledge base. You can locate the knowledge base ID on the **Publish** page, as shown below.
+
+>[!div class="mx-imgBorder"]
+>![Fetch KB ID](../media/qnamaker-tutorial-updates/fetch-kb-id.png)
+
+## Create a separate QnA Maker resource for each domain
+
+Let's say the marketing team at Microsoft wants to build a customer support bot that answers user queries on Surface and Xbox products. They plan to assign distinct teams to access the knowledge bases for Surface and Xbox. In this case, we recommend creating two QnA Maker resources: one for Surface and another for Xbox. You can, however, define distinct roles for users accessing the same resource. For more information, see [Role-based access control](../How-To/manage-qna-maker-app.md).
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-speech-overview.md
# What is Custom Speech?
-[Custom Speech](https://aka.ms/customspeech) is a UI-based tool that allows you to evaluate and improve the Microsoft speech-to-text accuracy for your applications and products. All it takes to get started is a handful of test audio files. Follow the links in this article to start creating a custom speech-to-text experience.
+Custom Speech allows you to evaluate and improve the Microsoft speech-to-text accuracy for your applications and products. Follow the links in this article to start creating a custom speech-to-text experience.
## What's in Custom Speech?
cognitive-services Cancel Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/cancel-translation.md
Request parameters passed on the query string are:
|Query parameter|Required|Description| |--|--|--|
-|id|True|The operation-id.|
+|id|True|The operation ID.|
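As an illustration of how this parameter is used, here is a minimal Python sketch, assuming the `batches/{id}` route and the `Ocp-Apim-Subscription-Key` header used elsewhere in this reference; the endpoint, key, and operation ID values are placeholders.

```python
import requests

# Placeholders: your Document Translation endpoint, Translator resource key, and the operation ID to cancel.
endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
key = "<YOUR-SUBSCRIPTION-KEY>"
operation_id = "<OPERATION-ID>"

# Request cancellation of the translation job identified by the operation ID.
response = requests.delete(
    f"{endpoint}/batches/{operation_id}",
    headers={"Ocp-Apim-Subscription-Key": key},
)
response.raise_for_status()
print(response.json().get("status"))  # for example, "Cancelling"
```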
## Request headers
The following information is returned in a successful response.
|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>| |message|string|Gets high-level error message.| |target|string|Gets the source of the error. For example, it would be "documents" or "document id" for an invalid document.|
-|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message, and optional properties target, details(key value pair), inner error (can be nested).|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. This contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (this can be nested).|
|innerError.code|string|Gets code error string.|
-|inner.Eroor.message|string|Gets high-level error message.|
+|innerError.message|string|Gets high-level error message.|
+|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if there was an invalid document.|
## Examples
cognitive-services Get Document Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-document-status.md
The following are the possible HTTP status codes that a request returns.
|Name|Type|Description| | | | | |path|string|Location of the document or folder.|
+|sourcePath|string|Location of the source document.|
|createdDateTimeUtc|string|Operation created date time.| |lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.| |status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
-|to|string|Two letter language code of To Language. See the list of languages.|
+|to|string|Two letter language code of To Language. [See the list of languages](../../language-support.md).|
|progress|number|Progress of the translation if available| |id|string|Document ID.| |characterCharged|integer|Characters charged by the API.|
The following are the possible HTTP status codes that a request returns.
| | | | |code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>| |message|string|Gets high-level error message.|
-|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message, and optional properties target, details(key value pair), inner error (can be nested).|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. This contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (this can be nested).|
|innerError.code|string|Gets code error string.| |innerError.message|string|Gets high-level error message.|
+|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if there was an invalid document.|
## Examples
The following JSON object is an example of a successful response.
```JSON { "path": "https://myblob.blob.core.windows.net/destinationContainer/fr/mydoc.txt",
+ "sourcePath": "https://myblob.blob.core.windows.net/sourceContainer/fr/mydoc.txt",
"createdDateTimeUtc": "2020-03-26T00:00:00Z", "lastActionDateTimeUtc": "2020-03-26T01:00:00Z", "status": "Running",
cognitive-services Get Documents Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-documents-status.md
# Get documents status
-The Get documents status method returns the status for all documents in a batch document translation request.
+If the number of documents in the response exceeds our paging limit, server-side paging is used. Paginated responses indicate a partial result and include a continuation token in the response. The absence of a continuation token means that no other pages are available.
-The documents included in the response are sorted by document ID in descending order. If the number of documents in the response exceeds our paging limit, server-side paging is used. Paginated responses indicate a partial result and include a continuation token in the response. The absence of a continuation token means that no additional pages are available.
+$top, $skip and $maxpagesize query parameters can be used to specify a number of results to return and an offset for the collection.
-$top and $skip query parameters can be used to specify a number of results to return and an offset for the collection. The server honors the values specified by the client. However, clients must be prepared to handle responses that contain a different page size or contain a continuation token.
+$top indicates the total number of records the user wants returned across all pages. $skip indicates the number of records to skip from the list of document statuses held by the server, based on the sorting method specified. By default, we sort by descending start time. $maxpagesize is the maximum number of items returned in a page. If more items are requested via $top (or $top isn't specified and there are more items to be returned), @nextLink will contain the link to the next page.
-When both $top and $skip are included, the server should first apply $skip and then $top on the collection.
+The $orderBy query parameter can be used to sort the returned list (for example, "$orderBy=createdDateTimeUtc asc" or "$orderBy=createdDateTimeUtc desc"). The default sorting is descending by createdDateTimeUtc. Some query parameters can be used to filter the returned list; for example, "status=Succeeded,Cancelled" returns only succeeded and canceled documents. The createdDateTimeUtcStart and createdDateTimeUtcEnd parameters can be used together or separately to specify a range of datetimes to filter the returned list by. The supported filtering query parameters are status, IDs, createdDateTimeUtcStart, and createdDateTimeUtcEnd.
+
+When both $top and $skip are included, the server should first apply $skip and then $top on the collection.
> [!NOTE] > If the server can't honor $top and/or $skip, the server must return an error to the client informing about it instead of just ignoring the query options. This reduces the risk of the client making assumptions about the data returned.
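A minimal Python sketch of this paging behavior, assuming the `batches/{id}/documents` route from this reference and placeholder endpoint, key, and operation ID values, follows `@nextLink` until all pages are consumed.

```python
import requests

# Placeholders: your Document Translation endpoint, resource key, and operation ID.
endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
key = "<YOUR-SUBSCRIPTION-KEY>"
operation_id = "<OPERATION-ID>"

headers = {"Ocp-Apim-Subscription-Key": key}
url = f"{endpoint}/batches/{operation_id}/documents"
# Ask for small pages, newest first; the server may return a different page size.
params = {"$maxpagesize": 10, "$orderBy": "createdDateTimeUtc desc"}

documents = []
while url:
    response = requests.get(url, headers=headers, params=params)
    response.raise_for_status()
    page = response.json()
    documents.extend(page.get("value", []))
    url = page.get("@nextLink")  # null or absent when no more pages are available
    params = None  # the continuation link already encodes the paging state

for doc in documents:
    print(doc["id"], doc["status"], doc.get("progress"))
```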
Learn how to find your [custom domain name](../get-started-with-document-transla
Request parameters passed on the query string are:
-|Query parameter|Required|Description|
-| | | |
-|id|True|The operation ID.|
-|$skip|False|Skip the $skip entries in the collection. When both $top and $skip are supplied, $skip is applied first.|
-|$top|False|Take the $top entries in the collection. When both $top and $skip are supplied, $skip is applied first.|
+|Query parameter|In|Required|Type|Description|
+| | | | | |
+|id|path|True|string|The operation ID.|
+|endpoint|path|True|string|Supported Cognitive Services endpoints (protocol and hostname, for example: `https://westus.api.cognitive.microsoft.com`).|
+|$maxpagesize|query|False|integer int32|$maxpagesize is the maximum items returned in a page. If more items are requested via $top (or $top is not specified and there are more items to be returned), @nextLink will contain the link to the next page. Clients MAY request server-driven paging with a specific page size by specifying a $maxpagesize preference. The server SHOULD honor this preference if the specified page size is smaller than the server's default page size.|
+|$orderBy|query|False|array|The sorting query for the collection (ex: 'CreatedDateTimeUtc asc', 'CreatedDateTimeUtc desc').|
+|$skip|query|False|integer int32|$skip indicates the number of records to skip from the list of records held by the server based on the sorting method specified. By default, we sort by descending start time. Clients MAY use $top and $skip query parameters to specify a number of results to return and an offset into the collection. When both $top and $skip are given by a client, the server SHOULD first apply $skip and then $top on the collection. Note: If the server can't honor $top and/or $skip, the server MUST return an error to the client informing about it instead of just ignoring the query options.|
+|$top|query|False|integer int32|$top indicates the total number of records the user wants to be returned across all pages. Clients MAY use $top and $skip query parameters to specify a number of results to return and an offset into the collection. When both $top and $skip are given by a client, the server SHOULD first apply $skip and then $top on the collection. Note: If the server can't honor $top and/or $skip, the server MUST return an error to the client informing about it instead of just ignoring the query options.|
+|createdDateTimeUtcEnd|query|False|string date-time|The end datetime to get items before.|
+|createdDateTimeUtcStart|query|False|string date-time|The start datetime to get items after.|
+|ids|query|False|array|IDs to use in filtering.|
+|statuses|query|False|array|Statuses to use in filtering.|
## Request headers
The following information is returned in a successful response.
|Name|Type|Description| | | | | |@nextLink|string|Url for the next page. Null if no more pages available.|
-|value|DocumentStatusDetail []|The detail status of individual documents listed below.|
+|value|DocumentStatus []|The detail status of individual documents listed below.|
|value.path|string|Location of the document or folder.|
+|value.sourcePath|string|Location of the source document.|
|value.createdDateTimeUtc|string|Operation created date time.|
-|value.lastActionDateTimeUt|string|Date time in which the operation's status has been updated.|
+|value.lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
|value.status|status|List of possible statuses for job or document.<ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>| |value.to|string|To language.|
-|value.progress|string|Progress of the translation if available.|
+|value.progress|number|Progress of the translation if available.|
|value.id|string|Document ID.| |value.characterCharged|integer|Characters charged by the API.|
The following information is returned in a successful response.
| | | | |code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>| |message|string|Gets high-level error message.|
-|target|string|Gets the source of the error. For example, it would be "documents" or "document id" in the case of an invalid document.|
-|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error (this can be nested).|
+|target|string|Gets the source of the error. For example, it would be "documents" or "document ID" in the case of an invalid document.|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. This contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (this can be nested).|
|innerError.code|string|Gets code error string.| |innerError.message|string|Gets high-level error message.|
+|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document id" if there was an invalid document.|
## Examples
The following is an example of a successful response.
"value": [ { "path": "https://myblob.blob.core.windows.net/destinationContainer/fr/mydoc.txt",
+ "sourcePath": "https://myblob.blob.core.windows.net/sourceContainer/fr/mydoc.txt",
"createdDateTimeUtc": "2020-03-26T00:00:00Z", "lastActionDateTimeUtc": "2020-03-26T01:00:00Z", "status": "Running",
cognitive-services Get Supported Document Formats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-supported-document-formats.md
The following information is returned in a successful response.
|Name|Type|Description| | | | | |value|FileFormat []|FileFormat[] contains the details listed below.|
-|value.format|string[]|Supported Content-Types for this format.|
+|value.contentTypes|string[]|Supported Content-Types for this format.|
+|value.defaultVersion|string|Default version if none is specified.|
|value.fileExtensions|string[]|Supported file extension for this format.|
-|value.contentTypes|string[]|Name of the format.|
-|value.versions|String[]|Supported Version.|
+|value.format|string|Name of the format.|
+|value.versions|string [] | Supported version.|
### Error response
The following information is returned in a successful response.
| | | | |code|string|Enums containing high-level error codes. Possible values:<ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>| |message|string|Gets high-level error message.|
-|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error (this can be nested).|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. This contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (this can be nested).|
|innerError.code|string|Gets code error string.| |innerError.message|string|Gets high-level error message.|
+|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if there was an invalid document.|
## Examples
Status code: 200
], "contentTypes": [ "text/plain"
- ],
- "versions": []
+ ]
}, { "format": "PortableDocumentFormat",
Status code: 200
], "contentTypes": [ "application/pdf"
+ ]
+ },
+ {
+ "format": "OpenXmlWord",
+ "fileExtensions": [
+ ".docx"
],
- "versions": []
+ "contentTypes": [
+ "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
+ ]
}, { "format": "OpenXmlPresentation",
Status code: 200
], "contentTypes": [ "application/vnd.openxmlformats-officedocument.presentationml.presentation"
- ],
- "versions": []
+ ]
}, { "format": "OpenXmlSpreadsheet",
Status code: 200
], "contentTypes": [ "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
- ],
- "versions": []
- },
- {
- "format": "OutlookMailMessage",
- "fileExtensions": [
- ".msg"
- ],
- "contentTypes": [
- "application/vnd.ms-outlook"
- ],
- "versions": []
+ ]
}, { "format": "HtmlFile", "fileExtensions": [
- ".html"
+ ".html",
+ ".htm"
], "contentTypes": [ "text/html"
- ],
- "versions": []
- },
- {
- "format": "OpenXmlWord",
- "fileExtensions": [
- ".docx"
- ],
- "contentTypes": [
- "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
- ],
- "versions": []
+ ]
} ] }
cognitive-services Get Supported Glossary Formats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-supported-glossary-formats.md
Base type for list return in the Get supported glossary formats API.
Base type for list return in the Get supported glossary formats API.
-|Status Code|Description|
-| | |
-|200|OK. Returns the list of supported glossary file formats.|
-|500|Internal Server Error.|
-|Other Status Codes|Too many requestsServer temporary unavailable|
+|Name|Type|Description|
+| | | |
+|value|FileFormat []|FileFormat[] contains the details listed below.|
+|value.contentTypes|string []|Supported Content-Types for this format.|
+|value.defaultVersion|string|Default version if none is specified|
+|value.fileExtensions|string []| Supported file extension for this format.|
+|value.format|string|Name of the format.|
+|value.versions|string []| Supported version.|
### Error response
Base type for list return in the Get supported glossary formats API.
| | | | |code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>| |message|string|Gets high-level error message.|
-|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error (this can be nested).|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. This contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (this can be nested).|
|innerError.code|string|Gets code error string.| |innerError.message|string|Gets high-level error message.|
+|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if there was an invalid document.|
## Examples
The following is an example of a successful response.
```JSON {
- "value": [
- {
- "format": "XLIFF",
- "fileExtensions": [
- ".xlf"
- ],
- "contentTypes": [
- "application/xliff+xml"
- ],
- "versions": [
- "1.0",
- "1.1",
- "1.2"
- ]
- },
- {
- "format": "TSV",
- "fileExtensions": [
- ".tsv",
- ".tab"
- ],
- "contentTypes": [
- "text/tab-separated-values"
- ],
- "versions": []
- }
- ]
+ "value": [
+ {
+ "format": "XLIFF",
+ "fileExtensions": [
+ ".xlf"
+ ],
+ "contentTypes": [
+ "application/xliff+xml"
+ ],
+ "defaultVersion": "1.2",
+ "versions": [
+ "1.0",
+ "1.1",
+ "1.2"
+ ]
+ },
+ {
+ "format": "TMX",
+ "fileExtensions": [
+ ".tmx"
+ ],
+ "contentTypes": [],
+ "versions": [
+ "1.0",
+ "1.1",
+ "1.2",
+ "1.3",
+ "1.4"
+ ]
+ }
+ ]
} ```
cognitive-services Get Supported Storage Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-supported-storage-sources.md
Base type for list return in the Get supported storage sources API.
| | | | |code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>| |message|string|Gets high-level error message.|
-|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error (this can be nested).|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. This contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (this can be nested).|
|innerError.code|string|Gets code error string.| |innerError.message|string|Gets high-level error message.|
+|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if there was an invalid document.|
## Examples
cognitive-services Get Translation Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-translation-status.md
The following information is returned in a successful response.
|summary.cancelled|integer|Number of canceled.| |summary.totalCharacterCharged|integer|Total characters charged by the API.|
-###Error response
+### Error response
|Name|Type|Description| | | | | |code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>| |message|string|Gets high-level error message.| |target|string|Gets the source of the error. For example, it would be "documents" or "document id" for an invalid document.|
-|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message, and optional properties target, details(key value pair), inner error (can be nested).|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. This contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (this can be nested).|
|innerError.code|string|Gets code error string.| |innerError.message|string|Gets high-level error message.|
+|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if there was an invalid document.|
## Examples
cognitive-services Get Translations Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-translations-status.md
# Get translations status
-The Get translations status method returns a list of batch requests submitted and the status for each request. This list only contains batch requests submitted by the user (based on the subscription). The status for each request is sorted by id.
+The Get translations status method returns a list of batch requests submitted and the status for each request. This list only contains batch requests submitted by the user (based on the resource).
-If the number of requests exceeds our paging limit, server-side paging is used. Paginated responses indicate a partial result and include a continuation token in the response. The absence of a continuation token means that no additional pages are available.
+If the number of requests exceeds our paging limit, server-side paging is used. Paginated responses indicate a partial result and include a continuation token in the response. The absence of a continuation token means that no other pages are available.
-$top and $skip query parameters can be used to specify a number of results to return and an offset for the collection.
+$top, $skip and $maxpagesize query parameters can be used to specify a number of results to return and an offset for the collection.
+
+$top indicates the total number of records the user wants returned across all pages. $skip indicates the number of records to skip from the list of batches, based on the sorting method specified. By default, we sort by descending start time. $maxpagesize is the maximum number of items returned in a page. If more items are requested via $top (or $top isn't specified and there are more items to be returned), @nextLink will contain the link to the next page.
+
+The $orderBy query parameter can be used to sort the returned list (for example, "$orderBy=createdDateTimeUtc asc" or "$orderBy=createdDateTimeUtc desc"). The default sorting is descending by createdDateTimeUtc. Some query parameters can be used to filter the returned list; for example, "status=Succeeded,Cancelled" returns only succeeded and canceled operations. The createdDateTimeUtcStart and createdDateTimeUtcEnd parameters can be used together or separately to specify a range of datetimes to filter the returned list by. The supported filtering query parameters are status, IDs, createdDateTimeUtcStart, and createdDateTimeUtcEnd.
The server honors the values specified by the client. However, clients must be prepared to handle responses that contain a different page size or contain a continuation token.
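A minimal Python sketch of the filtering and sorting parameters described above, assuming the `batches` route from this reference; the endpoint and key values are placeholders.

```python
import requests

# Placeholders: your Document Translation endpoint and Translator resource key.
endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
key = "<YOUR-SUBSCRIPTION-KEY>"

params = {
    "statuses": "Succeeded,Cancelled",                   # only return finished or canceled jobs
    "createdDateTimeUtcStart": "2021-03-01T00:00:00Z",   # only jobs created after this time
    "$orderBy": "createdDateTimeUtc desc",               # newest first (the default ordering)
    "$top": 20,                                          # total records wanted across all pages
}

response = requests.get(
    f"{endpoint}/batches",
    headers={"Ocp-Apim-Subscription-Key": key},
    params=params,
)
response.raise_for_status()
for job in response.json().get("value", []):
    print(job["id"], job["status"], job["summary"]["total"])
```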
Learn how to find your [custom domain name](../get-started-with-document-transla
Request parameters passed on the query string are:
-|Query parameter|Required|Description|
-| | | |
-|$skip|False|Skip the $skip entries in the collection. When both $top and $skip are supplied, $skip is applied first.|
-|$top|False|Take the $top entries in the collection. When both $top and $skip are supplied, $skip is applied first.|
+|Query parameter|In|Required|Type|Description|
+| | | |||
+|$maxpagesize|query|False|integer int32|$maxpagesize is the maximum items returned in a page. If more items are requested via $top (or $top is not specified and there are more items to be returned), @nextLink will contain the link to the next page. Clients MAY request server-driven paging with a specific page size by specifying a $maxpagesize preference. The server SHOULD honor this preference if the specified page size is smaller than the server's default page size.|
+|$orderBy|query|False|array|The sorting query for the collection (ex: 'CreatedDateTimeUtc asc', 'CreatedDateTimeUtc desc')|
+|$skip|query|False|integer int32|$skip indicates the number of records to skip from the list of records held by the server based on the sorting method specified. By default, we sort by descending start time. Clients MAY use $top and $skip query parameters to specify a number of results to return and an offset into the collection. When both $top and $skip are given by a client, the server SHOULD first apply $skip and then $top on the collection. Note: If the server can't honor $top and/or $skip, the server MUST return an error to the client informing about it instead of just ignoring the query options.|
+|$top|query|False|integer int32|$top indicates the total number of records the user wants to be returned across all pages. Clients MAY use $top and $skip query parameters to specify a number of results to return and an offset into the collection. When both $top and $skip are given by a client, the server SHOULD first apply $skip and then $top on the collection. Note: If the server can't honor $top and/or $skip, the server MUST return an error to the client informing about it instead of just ignoring the query options.|
+|createdDateTimeUtcEnd|query|False|string date-time|The end datetime to get items before.|
+|createdDateTimeUtcStart|query|False|string date-time|The start datetime to get items after.|
+|ids|query|False|array|IDs to use in filtering.|
+|statuses|query|False|array|Statuses to use in filtering.|
## Request headers
The following information is returned in a successful response.
|Name|Type|Description| | | | |
-|id|string|ID of the operation.|
-|createdDateTimeUtc|string|Operation created date time.|
-|lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
-|status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
-|summary|StatusSummary[]|Summary containing the details listed below.|
-|summary.total|integer|Count of total documents.|
-|summary.failed|integer|Count of documents failed.|
-|summary.success|integer|Count of documents successfully translated.|
-|summary.inProgress|integer|Count of documents in progress.|
-|summary.notYetStarted|integer|Count of documents not yet started processing.|
-|summary.cancelled|integer|Count of documents canceled.|
-|summary.totalCharacterCharged|integer|Total count of characters charged.|
+|@nextLink|string|Url for the next page. Null if no more pages available.|
+|value|TranslationStatus[]|TranslationStatus[] array listed below|
+|value.id|string|ID of the operation.|
+|value.createdDateTimeUtc|string|Operation created date time.|
+|value.lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
+|value.status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|value.summary|StatusSummary[]|Summary containing the details listed below.|
+|value.summary.total|integer|Count of total documents.|
+|value.summary.failed|integer|Count of documents failed.|
+|value.summary.success|integer|Count of documents successfully translated.|
+|value.summary.inProgress|integer|Count of documents in progress.|
+|value.summary.notYetStarted|integer|Count of documents not yet started processing.|
+|value.summary.cancelled|integer|Count of documents canceled.|
+|value.summary.totalCharacterCharged|integer|Total count of characters charged.|
### Error response
The following information is returned in a successful response.
| | | | |code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>| |message|string|Gets high-level error message.|
-|target|string|Gets the source of the error. For example, it would be "documents" or "document id" in the case of an invalid document.|
-|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error (this can be nested).|
+|target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if there was an invalid document.|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. This contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (this can be nested).|
|innerError.code|string|Gets code error string.| |innerError.message|string|Gets high-level error message.|
+|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if there was an invalid document.|
## Examples
The following is an example of a successful response.
{ "value": [ {
- "id": "727bf148-f327-47a0-9481-abae6362f11e",
- "createdDateTimeUtc": "2020-03-26T00:00:00Z",
- "lastActionDateTimeUtc": "2020-03-26T01:00:00Z",
+ "id": "273622bd-835c-4946-9798-fd8f19f6bbf2",
+ "createdDateTimeUtc": "2021-03-23T07:03:30.013631Z",
+ "lastActionDateTimeUtc": "2021-03-26T01:00:00Z",
"status": "Succeeded", "summary": { "total": 10,
The following is an example of a successful response.
"inProgress": 0, "notYetStarted": 0, "cancelled": 0,
- "totalCharacterCharged": 0
+ "totalCharacterCharged": 1000
} } ]
cognitive-services Start Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/start-translation.md
Destination for the finished translated documents.
|glossaries.format|string|False|Format.| |glossaries.glossaryUrl|string|True (if using glossaries)|Location of the glossary. We will use the file extension to extract the formatting if the format parameter isn't supplied. If the translation language pair isn't present in the glossary, it won't be applied.| |glossaries.storageSource|StorageSource|False|StorageSource listed above.|
+|glossaries.version|string|False|Optional version. If not specified, the default is used.|
|targetUrl|string|True|Location of the folder / container with your documents.| |language|string|True|Two letter Target Language code. See [list of language codes](../../language-support.md).| |storageSource|StorageSource []|False|StorageSource [] listed above.|
-|version|string|False|Version.|
## Example request
The following are the possible HTTP status codes that a request returns.
| | | | |code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>| |message|string|Gets high-level error message.|
-|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error (this can be nested).|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. This contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (this can be nested).|
|innerError.code|string|Gets code error string.| |innerError.message|string|Gets high-level error message.|
+|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if there was an invalid document.|
## Examples
cognitive-services Extract Excel Information https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/tutorials/extract-excel-information.md
Previously updated : 02/27/2019 Last updated : 05/12/2021
In this tutorial, you'll learn how to:
Download the example Excel file from [GitHub](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/TextAnalytics/sample-data/ReportedIssues.xlsx). This file must be stored in your OneDrive for Business account.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/example-data.png" alt-text="Examples from the Excel file.":::
The issues are reported in raw text. We will use the Text Analytics API's Named Entity Recognition to extract the person name and phone number. Then the flow will look for the word "plumbing" in the description to categorize the issues.
The issues are reported in raw text. We will use the Text Analytics API's Named
Go to the [Power Automate site](https://preview.flow.microsoft.com/), and sign in. Then click **Create** and **Scheduled flow**.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/flow-creation.png" alt-text="The flow creation screen.":::
-
-On the **Build a scheduled flow** page, initialize your flow with the following fields:
+On the **Build a scheduled cloud flow** page, initialize your flow with the following fields:
|Field |Value | |||
On the **Build a scheduled flow** page, initialize your flow with the following
## Add variables to the flow
-> [!NOTE]
-> If you want to see an image of the completed flow, you can download it from [GitHub](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/TextAnalytics/flow-diagrams).
- Create variables representing the information that will be added to the Excel file. Click **New Step** and search for **Initialize variable**. Do this four times, to create four variables.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/initialize-variables.png" alt-text="Initialize variables.":::
+ Add the following information to the variables you created. They represent the columns of the Excel file. If any variables are collapsed, you can click on them to expand them. | Action |Name | Type | Value | ||||| | Initialize variable | var_person | String | Person |
-| Initialize variable 2 | var_phone | String | Phone_Number |
+| Initialize variable 2 | var_phone | String | Phone Number |
| Initialize variable 3 | var_plumbing | String | plumbing | | Initialize variable 4 | var_other | String | other |
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/flow-variables.png" alt-text="information contained in the flow variables":::
## Read the excel file Click **New Step** and type **Excel**, then select **List rows present in a table** from the list of actions.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/list-excel-rows.png" alt-text="add excel rows.":::
Add the Excel file to the flow by filling in the fields in this action. This tutorial requires the file to have been uploaded to OneDrive for Business.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/list-excel-rows-options.png" alt-text="fill excel rows":::
Click **New Step** and add an **Apply to each** action.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/add-apply-action.png" alt-text="add an apply command.":::
Click on **Select an output from previous step**. In the Dynamic content box that appears, select **value**.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/select-output.png" alt-text="Select output from the excel file.":::
## Send a request to the Text Analytics API
In your flow, enter the following information to create a new Text Analytics con
| Account key | The key for your Text Analytics resource. | | Site URL | The endpoint for your Text Analytics resource. |
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/add-credentials.png" alt-text="Add Text Analytics credentials to your flow.":::
+ ## Extract the excel content
-After the connection is created, search for **Text Analytics** and select **Entities**. This will extract information from the description column of the issue.
+After the connection is created, search for **Text Analytics** and select **Named Entity Recognition**. This will extract information from the description column of the issue.
++
+Click in the **Text** field and select **Description** from the Dynamic content window that appears. Enter `en` for Language, and a unique name as the document ID (you might need to click **Show advanced options**).
+
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/extract-info.png" alt-text="Add Text Analytics Entities.":::
-Click in the **Text** field and select **Description** from the Dynamic content windows that appears. Enter `en` for Language. (Click Show advanced options if you don't see Language)
+Within the **Apply to each**, click **Add an action** and create another **Apply to each** action. Click inside the text box and select **documents** in the Dynamic Content window that appears.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/description-from-dynamic-content.png" alt-text="Add Text Analytics settings.":::
## Extract the person name
-Next, we will find the person entity type in the Text Analytics output. Within the **Apply to each**, click **Add an action**, and create another **Apply to each** action. Click inside the text box and select **Entities** in the Dynamic Content window that appears.
+Next, we will find the person entity type in the Text Analytics output. Within the **Apply to each 2**, click **Add an action**, and create another **Apply to each** action. Click inside the text box and select **Entities** in the Dynamic Content window that appears.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/add-apply-action-2.png" alt-text="Add Text Analytics credentials to your flow. 2":::
-Within the newly created **Apply to each 2** action, click **Add an action**, and add a **Condition** control.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/create-condition.png" alt-text="Add Text Analytics credentials to your flow. 3":::
+Within the newly created **Apply to each 3** action, click **Add an action**, and add a **Condition** control.
-In the Condition window, click on the first text box. In the Dynamic content window, search for **Entities Type** and select it.
++
+In the Condition window, click on the first text box. In the Dynamic content window, search for **Category** and select it.
+
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/choose-entities-value.png" alt-text="Add Text Analytics credentials to your flow. 4":::
Make sure the second box is set to **is equal to**. Then select the third box, and search for `var_person` in the Dynamic content window.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/choose-variable-value.png" alt-text="Add Text Analytics credentials to your flow. 5":::
+ In the **If yes** condition, type in Excel then select **Update a Row**.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/yes-column-action.png" alt-text="Add Text Analytics credentials to your flow. 6":::
-Enter the Excel info, and update the **Key Column**, **Key Value** and **PersonName** fields. This will append the name detected by the API to the Excel sheet.
+Enter the Excel information, and update the **Key Column**, **Key Value** and **PersonName** fields. This will append the name detected by the API to the Excel sheet.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/yes-column-action-options.png" alt-text="Add Text Analytics credentials to your flow. 7":::
## Get the phone number
-Minimize the **Apply to each 2** action by clicking on the name. Then add another **Apply to each** action, like before. it will be named **Apply to each 3**. Select the text box, and add **entities** as the output for this action.
+Minimize the **Apply to each 3** action by clicking on the name. Then add another **Apply to each** action to **Apply to each 2**, like before. It will be named **Apply to each 4**. Select the text box, and add **entities** as the output for this action.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/add-apply-action-3.png" alt-text="Add Text Analytics credentials to your flow. 8":::
-Within **Apply to each 3**, add a **Condition** control. It will be named **Condition 2**. In the first text box, search for, and add **Entities Type** from the Dynamic content window. Be sure the center box is set to **is equal to**. Then, in the right text box, enter `var_phone`.
+Within **Apply to each 4**, add a **Condition** control. It will be named **Condition 2**. In the first text box, search for, and add **categories** from the Dynamic content window. Be sure the center box is set to **is equal to**. Then, in the right text box, enter `var_phone`.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/condition-2-options.png" alt-text="Add Text Analytics credentials to your flow. 9":::
In the **If yes** condition, add an **Update a row** action. Then enter the information like we did above, for the phone numbers column of the Excel sheet. This will append the phone number detected by the API to the Excel sheet.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/condition-2-yes-column.png" alt-text="Add Text Analytics credentials to your flow. 10":::
- ## Get the plumbing issues
-Minimize **Apply to each 3** by clicking on the name. Then create another **Apply to each** in the parent action. Select the text box, and add **Entities** as the output for this action from the Dynamic content window.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/add-apply-action-4.png" alt-text="Add Text Analytics credentials to your flow. 11":::
+Minimize **Apply to each 4** by clicking on the name. Then create another **Apply to each** in the parent action. Select the text box, and add **Entities** as the output for this action from the Dynamic content window.
Next, the flow will check if the issue description from the Excel table row contains the word "plumbing". If yes, it will add "plumbing" in the IssueType column. If not, we will enter "other."

Inside the **Apply to each 4** action, add a **Condition** Control. It will be named **Condition 3**. In the first text box, search for, and add **Description** from the Excel file, using the Dynamic content window. Be sure the center box says **contains**. Then, in the right text box, find and select `var_plumbing`.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/condition-3-options.png" alt-text="Add Text Analytics credentials to your flow. 12":::
- In the **If yes** condition, click **Add an action**, and select **Update a row**. Then enter the information like before. In the IssueType column, select `var_plumbing`. This will apply a "plumbing" label to the row.

In the **If no** condition, click **Add an action**, and select **Update a row**. Then enter the information like before. In the IssueType column, select `var_other`. This will apply an "other" label to the row.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/plumbing-issue-condition.png" alt-text="Add Text Analytics credentials to your flow. 13":::
## Test the workflow
-In the top-right corner of the screen, click **Save**, then **Test**. Select **I'll perform the trigger action**. Click **Save & Test**, **Run flow**, then **Done**.
+In the top-right corner of the screen, click **Save**, then **Test**. Under **Test Flow**, select **manually**. Then click **Test**, and **Run flow**.
The Excel file will get updated in your OneDrive account. It will look like the below.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="../media/tutorials/excel/updated-excel-sheet.png" alt-text="The updated excel spreadsheet.":::
## Next steps
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/pricing.md
Alice made a group call with her colleagues, Bob and Charlie. Alice and Bob used
Alice makes a PSTN Call from an app to Bob on his US phone number beginning with `+1-425`.
- Alice used the JS SDK to build the app.
-- The call lasts a total of 5 minutes.
+- The call lasts a total of 10 minutes.
**Cost calculations**
- 1 participant on the VoIP leg (Alice) from App to Communication Services servers x 10 minutes x $0.004 per participant leg per minute = $0.04
-- 1 participant on the PSTN outbound leg (Charlie) from Communication Services servers to a US telephone number x 10 minutes x $0.013 per participant leg per minute = $0.13.
+- 1 participant on the PSTN outbound leg (Bob) from Communication Services servers to a US telephone number x 10 minutes x $0.013 per participant leg per minute = $0.13.
Note: The USA mixed rate to `+1-425` is $0.013. Refer to the following link for details: https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv
-**Total cost for the group call**: $0.04 + $0.13 = $0.17
+**Total cost for the call**: $0.04 + $0.13 = $0.17
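The arithmetic in this example can be checked with a few lines of Python; the rates are the illustrative figures quoted above, not values fetched from a pricing API:

```python
# Illustrative rates from the example above (per participant leg per minute).
VOIP_RATE = 0.004
PSTN_US_RATE = 0.013
MINUTES = 10

voip_cost = 1 * MINUTES * VOIP_RATE     # Alice's app-to-service VoIP leg
pstn_cost = 1 * MINUTES * PSTN_US_RATE  # Bob's outbound PSTN leg
print(f"Total: ${voip_cost + pstn_cost:.2f}")  # Total: $0.17
```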
### Pricing example: Group audio call using JS SDK and 1 PSTN leg
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/sdk-options.md
Publishing locations for individual SDK packages are detailed below.
| Phone Numbers | [npm](https://www.npmjs.com/package/@azure/communication-phone-numbers) | [NuGet](https://www.nuget.org/packages/Azure.Communication.PhoneNumbers) | [PyPi](https://pypi.org/project/azure-communication-phonenumbers/) | [Maven](https://search.maven.org/search?q=a:azure-communication-phonenumbers) | - | - | - |
| Chat | [npm](https://www.npmjs.com/package/@azure/communication-chat) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Chat) | [PyPi](https://pypi.org/project/azure-communication-chat/) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | - |
| SMS | [npm](https://www.npmjs.com/package/@azure/communication-sms) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Sms) | [PyPi](https://pypi.org/project/azure-communication-sms/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-sms) | - | - | - |
-| Calling | [npm](https://www.npmjs.com/package/@azure/communication-calling) | - | - | - | [GitHub](https://github.com/Azure/Communication/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-calling/) | - |
+| Calling | [npm](https://www.npmjs.com/package/@azure/communication-calling) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Calling) | - | - | [GitHub](https://github.com/Azure/Communication/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-calling/) | - |
| Reference Documentation | [docs](https://azure.github.io/azure-sdk-for-js/communication.html) | [docs](https://azure.github.io/azure-sdk-for-net/communication.html) | - | [docs](http://azure.github.io/azure-sdk-for-java/communication.html) | [docs](/objectivec/communication-services/calling/) | [docs](/java/api/com.azure.android.communication.calling) | - |
Support via .NET Core 2.0:
- Xamarin iOS 10.14
- Xamarin Mac 3.8
+The Calling package supports UWP apps built with .NET Native or C++/WinRT on:
+- Windows 10 10.0.17763
+- Windows Server 2019 10.0.17763
+ ## API stability expectations
> [!IMPORTANT]
For more information, see the following SDK overviews:
To get started with Azure Communication
- [Create Azure Communication Resources](../quickstarts/create-communication-resource.md)
-- Generate [User Access Tokens](../quickstarts/access-tokens.md)
+- Generate [User Access Tokens](../quickstarts/access-tokens.md)
communication-services Telephony Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/telephony-concept.md
With this option:
- You do not require deployment or maintenance of an on-premises deployment, because Voice Calling (PSTN) operates out of Azure Communication Services.
- Note: If necessary, you can choose to connect a supported Session Border Controller (SBC) through Azure direct routing for interoperability with third-party PBXs, analog devices, and other third-party telephony equipment supported by the SBC.
-This option requires an uninterrupted connection to Azure Communication Services.
+This option requires an uninterrupted connection to Azure Communication Services.
+
+For cloud calling, outbound calls are billed at per-minute rates depending on the target country. See the [current rate list for PSTN calls](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv).
### Azure direct routing
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/troubleshooting-info.md
When developing for Android, your logs are stored in `.blog` files. Note that yo
On Android Studio, navigate to the Device File Explorer by selecting View > Tool Windows > Device File Explorer from both the simulator and the device. The `.blog` file will be located within your application's directory, which should look something like `/data/data/[app_name_space:com.contoso.com.acsquickstartapp]/files/acs_sdk.blog`. You can attach this file to your support request.
+## Enable and access call logs (Windows)
+
+When developing for Windows, your logs are stored in `.blog` files. Note that you can't view the logs directly because they're encrypted.
+
+These can be accessed by looking at where your app is keeping its local data. There are many ways to figure out where a UWP app keeps its local data; the following steps describe one of them:
+1. Open the Run dialog (Windows Key + R)
+2. Type `cmd.exe` and press Enter to open a Command Prompt
+3. Type `where /r %USERPROFILE%\AppData acs*.blog`
+4. Check that the app ID of your application matches the one returned by the previous command.
+5. Open the folder with the logs by typing `start ` followed by the path returned by step 3. For example: `start C:\Users\myuser\AppData\Local\Packages\e84000dd-df04-4bbc-bf22-64b8351a9cd9_k2q8b5fxpmbf6`
+6. Attach all the `*.blog` and `*.etl` files to your Azure support request.
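If you prefer to locate the log files programmatically rather than with cmd.exe, here is a minimal Python sketch (assuming Python is available on the machine) that mirrors the `where /r %USERPROFILE%\AppData acs*.blog` step above:

```python
import glob
import os

# Search the current user's AppData tree for ACS log files.
appdata = os.path.expandvars(r"%USERPROFILE%\AppData")
for path in glob.glob(os.path.join(appdata, "**", "acs*.blog"), recursive=True):
    print(path)  # attach these files, plus the *.etl files in the same folder
```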
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
Key features of the Calling SDK:
The following list presents the set of features which are currently available in the Azure Communication Services Calling SDKs.
-| Group of features | Capability | JS | Java (Android) | Objective-C (iOS) |
-| -- | - | | -- | -- |
-| Core Capabilities | Place a one-to-one call between two users | ✔️ | ✔️ | ✔️ |
-| | Place a group call with more than two users (up to 350 users) | ✔️ | ✔️ | ✔️ |
-| | Promote a one-to-one call with two users into a group call with more than two users | ✔️ | ✔️ | ✔️ |
-| | Join a group call after it has started | ✔️ | ✔️ | ✔️ |
-| | Invite another VoIP participant to join an ongoing group call | ✔️ | ✔️ | ✔️ |
-| Mid call control | Turn your video on/off | ✔️ | ✔️ | ✔️ |
-| | Mute/Unmute mic | ✔️ | ✔️ | ✔️ |
-| | Switch between cameras | ✔️ | ✔️ | ✔️ |
-| | Local hold/un-hold | ✔️ | ✔️ | ✔️ |
-| | Active speaker | ✔️ | ✔️ | ✔️ |
-| | Choose speaker for calls | ✔️ | ✔️ | ✔️ |
-| | Choose microphone for calls | ✔️ | ✔️ | ✔️ |
-| | Show state of a participant<br/>*Idle, Early media, Connecting, Connected, On hold, In Lobby, Disconnected* | ✔️ | ✔️ | ✔️ |
-| | Show state of a call<br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ | ✔️ | ✔️ |
-| | Show if a participant is muted | ✔️ | ✔️ | ✔️ |
-| | Show the reason why a participant left a call | ✔️ | ✔️ | ✔️ |
-| Screen sharing | Share the entire screen from within the application | ✔️ | ❌ | ❌ |
-| | Share a specific application (from the list of running applications) | ✔️ | ❌ | ❌ |
-| | Share a web browser tab from the list of open tabs | ✔️ | ❌ | ❌ |
-| | Participant can view remote screen share | ✔️ | ✔️ | ✔️ |
-| Roster | List participants | ✔️ | ✔️ | ✔️ |
-| | Remove a participant | ✔️ | ✔️ | ✔️ |
-| PSTN | Place a one-to-one call with a PSTN participant | ✔️ | ✔️ | ✔️ |
-| | Place a group call with PSTN participants | ✔️ | ✔️ | ✔️ |
-| | Promote a one-to-one call with a PSTN participant into a group call | ✔️ | ✔️ | ✔️ |
-| | Dial-out from a group call as a PSTN participant | ✔️ | ✔️ | ✔️ |
-| General | Test your mic, speaker, and camera with an audio testing service (available by calling 8:echo123) | ✔️ | ✔️ | ✔️ |
-| Device Management | Ask for permission to use audio and/or video | ✔️ | ✔️ | ✔️ |
-| | Get camera list | ✔️ | ✔️ | ✔️ |
-| | Set camera | ✔️ | ✔️ | ✔️ |
-| | Get selected camera | ✔️ | ✔️ | ✔️ |
-| | Get microphone list | ✔️ | ❌ | ❌ |
-| | Set microphone | ✔️ | ❌ | ❌ |
-| | Get selected microphone | ✔️ | ❌ | ❌ |
-| | Get speakers list | ✔️ | ❌ | ❌ |
-| | Set speaker | ✔️ | ❌ | ❌ |
-| | Get selected speaker | ✔️ | ❌ | ❌ |
-| Video Rendering | Render single video in many places (local camera or remote stream) | ✔️ | ✔️ | ✔️ |
-| | Set / update scaling mode | ✔️ | ✔️ | ✔️ |
-| | Render remote video stream | ✔️ | ✔️ | ✔️ |
+
+| Group of features | Capability | JS | Windows | Java (Android) | Objective-C (iOS) |
+| -- | - | | - | -- | -- |
+| Core Capabilities | Place a one-to-one call between two users | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Place a group call with more than two users (up to 350 users) | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Promote a one-to-one call with two users into a group call with more than two users | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Join a group call after it has started | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Invite another VoIP participant to join an ongoing group call | ✔️ | ✔️ | ✔️ | ✔️ |
+| Mid call control | Turn your video on/off | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Mute/Unmute mic | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Switch between cameras | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Local hold/un-hold | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Active speaker | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Choose speaker for calls | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Choose microphone for calls | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Show state of a participant<br/>*Idle, Early media, Connecting, Connected, On hold, In Lobby, Disconnected* | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Show state of a call<br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Show if a participant is muted | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Show the reason why a participant left a call | ✔️ | ✔️ | ✔️ | ✔️ |
+| Screen sharing | Share the entire screen from within the application | ✔️ | ❌ | ❌ | ❌ |
+| | Share a specific application (from the list of running applications) | ✔️ | ❌ | ❌ | ❌ |
+| | Share a web browser tab from the list of open tabs | ✔️ | ❌ | ❌ | ❌ |
+| | Participant can view remote screen share | ✔️ | ✔️ | ✔️ | ✔️ |
+| Roster | List participants | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Remove a participant | ✔️ | ✔️ | ✔️ | ✔️ |
+| PSTN | Place a one-to-one call with a PSTN participant | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Place a group call with PSTN participants | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Promote a one-to-one call with a PSTN participant into a group call | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Dial-out from a group call as a PSTN participant | ✔️ | ✔️ | ✔️ | ✔️ |
+| General | Test your mic, speaker, and camera with an audio testing service (available by calling 8:echo123) | ✔️ | ✔️ | ✔️ | ✔️ |
+| Device Management | Ask for permission to use audio and/or video | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Get camera list | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Set camera | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Get selected camera | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Get microphone list | ✔️ | ✔️ | ❌ | ❌ |
+| | Set microphone | ✔️ | ✔️ | ❌ | ❌ |
+| | Get selected microphone | ✔️ | ✔️ | ❌ | ❌ |
+| | Get speakers list | ✔️ | ✔️ | ❌ | ❌ |
+| | Set speaker | ✔️ | ✔️ | ❌ | ❌ |
+| | Get selected speaker | ✔️ | ✔️ | ❌ | ❌ |
+| Video Rendering | Render single video in many places (local camera or remote stream) | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Set / update scaling mode | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Render remote video stream | ✔️ | ✔️ | ✔️ | ✔️ |
+ ## Calling SDK streaming support
The Communication Services Calling SDK supports the following streaming configurations:
-| Limit | Web | Android/iOS |
+| Limit | Web | Windows/Android/iOS |
| - | - | -- |
| **# of outgoing streams that can be sent simultaneously** | 1 video or 1 screen sharing | 1 video + 1 screen sharing |
| **# of incoming streams that can be rendered simultaneously** | 1 video or 1 screen sharing | 6 video + 1 screen sharing |
communication-services Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/network-requirements.md
Below are the bandwidth requirements for the JavaScript SDKs:
|500 kbps|Group video calling 360p at 30fps|
|1.2 Mbps|HD Group video calling with resolution of HD 720p at 30fps|
-Below are the bandwidth requirements for the native Android and iOS SDKs:
+Below are the bandwidth requirements for the native Windows, Android and iOS SDKs:
|Bandwidth|Scenarios|
|:--|:--|
communication-services Calling Client Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/calling-client-samples.md
Last updated 03/10/2021
-zone_pivot_groups: acs-plat-web-ios-android
+zone_pivot_groups: acs-plat-web-ios-android-windows
# Quickstart: Use the Communication Services Calling SDK
Get started with Azure Communication Services by using the Communication Service
[!INCLUDE [Calling with iOS](./includes/calling-sdk-ios.md)]
::: zone-end
+
## Clean up resources
If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
communication-services Get Started Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-interop.md
Last updated 03/10/2021
-zone_pivot_groups: acs-plat-web-ios-android
+zone_pivot_groups: acs-plat-web-ios-android-windows
# Quickstart: Join your calling app to a Teams meeting
Get started with Azure Communication Services by connecting your calling solutio
[!INCLUDE [Calling with JavaScript](./includes/teams-interop-javascript.md)]
::: zone-end
+
::: zone pivot="platform-android"
[!INCLUDE [Calling with Android](./includes/teams-interop-android.md)]
::: zone-end
Get started with Azure Communication Services by connecting your calling solutio
[!INCLUDE [Calling with iOS](./includes/teams-interop-ios.md)]
::: zone-end
-Functionality described in this document uses the General Availablity version of the Communication Services SDKs. Teams Interoperability requires the Beta version of the Communication Services SDKs. The Beta SDKs can be explored on the [release notes page](https://github.com/Azure/Communication/tree/master/releasenotes).
+Functionality described in this document uses the General Availability version of the Communication Services SDKs. Teams Interoperability requires the Beta version of the Communication Services SDKs. The Beta SDKs can be explored on the [release notes page](https://github.com/Azure/Communication/tree/master/releasenotes).
When executing the "Install package" step with the Beta SDKs, modify the version of your package to the latest Beta release by specifying version `@1.0.0-beta.10` (version at the moment of writing this article) in the `communication-calling` package name. You don't need to modify the `communication-common` package command. For example:
communication-services Getting Started With Calling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/getting-started-with-calling.md
Last updated 03/10/2021
-zone_pivot_groups: acs-plat-web-ios-android
+zone_pivot_groups: acs-plat-web-ios-android-windows
# Quickstart: Add voice calling to your app
Get started with Azure Communication Services by using the Communication Service
[!INCLUDE [Emergency Calling Notice](../../includes/emergency-calling-notice-include.md)]
+
::: zone pivot="platform-web"
[!INCLUDE [Calling with JavaScript](./includes/get-started-javascript.md)]
::: zone-end
connectors Apis List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/apis-list.md
An *action* is an operation that follows the trigger and performs some kind of t
## Connector categories
-In Logic Apps, most triggers and actions are available in either a *built-in* version or *managed connector* version. A small number of triggers and actions are available in both versions. The versions available depend on whether you create a multi-tenant logic app or a single-tenant logic app, which is currently available only in [Logic Apps Preview](../logic-apps/logic-apps-overview-preview.md).
+In Logic Apps, most triggers and actions are available in either a *built-in* version or *managed connector* version. A small number of triggers and actions are available in both versions. The versions available depend on whether you create a multi-tenant logic app or a single-tenant logic app, which is currently available only in [Logic Apps Preview](../logic-apps/single-tenant-overview-compare.md).
[Built-in triggers and actions](built-in.md) run natively on the Logic Apps runtime, don't require creating connections, and perform these kinds of tasks:
connectors Connectors Create Api Mq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-create-api-mq.md
The following list describes only some of the built-in operations available for
* Receive a single message or an array of messages from the MQ server. For multiple messages, you can specify the maximum number of messages to return per batch and the maximum batch size in KB.
* Send a single message or an array of messages to the MQ server.
-These built-in MQ operations also have the following capabilities plus the benefits from all the other capabilities for logic apps in the [single-tenant Logic Apps service](../logic-apps/logic-apps-overview-preview.md):
+These built-in MQ operations also have the following capabilities plus the benefits from all the other capabilities for logic apps in the [single-tenant Logic Apps service](../logic-apps/single-tenant-overview-compare.md):
* Transport Layer Security (TLS) encryption for data in transit
* Message encoding for both the send and receive operations
container-registry Container Registry Get Started Docker Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-get-started-docker-cli.md
Title: Push & pull container image
+ Title: Push & pull container image
description: Push and pull Docker images to your private container registry in Azure using the Docker CLI Previously updated : 01/23/2019- Last updated : 05/12/2021+ # Push your first image to your Azure container registry using the Docker CLI
In the following steps, you download a public [Nginx image](https://store.docker
## Prerequisites
-* **Azure container registry** - Create a container registry in your Azure subscription. For example, use the [Azure portal](container-registry-get-started-portal.md) or the [Azure CLI](container-registry-get-started-azure-cli.md).
+* **Azure container registry** - Create a container registry in your Azure subscription. For example, use the [Azure portal](container-registry-get-started-portal.md), the [Azure CLI](container-registry-get-started-azure-cli.md), or [Azure PowerShell](container-registry-get-started-powershell.md).
* **Docker CLI** - You must also have Docker installed locally. Docker provides packages that easily configure Docker on any [macOS][docker-mac], [Windows][docker-windows], or [Linux][docker-linux] system.

## Log in to a registry
-There are [several ways to authenticate](container-registry-authentication.md) to your private container registry. The recommended method when working in a command line is with the Azure CLI command [az acr login](/cli/azure/acr#az_acr_login). For example, to log in to a registry named *myregistry*, log into the Azure CLI and then authenticate to your registry:
+There are [several ways to authenticate](container-registry-authentication.md) to your private container registry.
+
+### [Azure CLI](#tab/azure-cli)
+
+The recommended method when working in a command line is with the Azure CLI command [az acr login](/cli/azure/acr#az_acr_login). For example, to log in to a registry named *myregistry*, log into the Azure CLI and then authenticate to your registry:
```azurecli
az login
az acr login --name myregistry
```
+### [Azure PowerShell](#tab/azure-powershell)
+
+The recommended method when working in PowerShell is with the Azure PowerShell cmdlet [Connect-AzContainerRegistry](/powershell/module/az.containerregistry/connect-azcontainerregistry). For example, to log in to a registry named *myregistry*, log into Azure and then authenticate to your registry:
+
+```azurepowershell
+Connect-AzAccount
+Connect-AzContainerRegistry -Name myregistry
+```
+++
You can also log in with [docker login](https://docs.docker.com/engine/reference/commandline/login/). For example, you might have [assigned a service principal](container-registry-authentication.md#service-principal) to your registry for an automation scenario. When you run the following command, interactively provide the service principal appID (username) and password when prompted. For best practices to manage login credentials, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/) command reference:
```
If you no longer need the Nginx image, you can delete it locally with the [docke
docker rmi myregistry.azurecr.io/samples/nginx
```
+### [Azure CLI](#tab/azure-cli)
To remove images from your Azure container registry, you can use the Azure CLI command [az acr repository delete](/cli/azure/acr/repository#az_acr_repository_delete). For example, the following command deletes the manifest referenced by the `samples/nginx:latest` tag, any unique layer data, and all other tags referencing the manifest.

```azurecli
az acr repository delete --name myregistry --image samples/nginx:latest
```
+### [Azure PowerShell](#tab/azure-powershell)
+
+The [Az.ContainerRegistry](/powershell/module/az.containerregistry) Azure PowerShell module contains multiple commands for removing images from your container instance. [Remove-AzContainerRegistryRepository](/powershell/module/az.containerregistry/remove-azcontainerregistryrepository) removes all images in a particular namespace such as `samples:nginx`, while [Remove-AzContainerRegistryManifest](/powershell/module/az.containerregistry/remove-azcontainerregistrymanifest) removes a specific tag or manifest.
+
+In the following example, you use the `Remove-AzContainerRegistryRepository` cmdlet to remove all images in the `samples:nginx` namespace.
+
+```azurepowershell
+Remove-AzContainerRegistryRepository -RegistryName myregistry -Name samples/nginx
+```
+
+In the following example, you use the `Remove-AzContainerRegistryManifest` cmdlet to delete the manifest referenced by the `samples/nginx:latest` tag, any unique layer data, and all other tags referencing the manifest.
+
+```azurepowershell
+Remove-AzContainerRegistryManifest -RegistryName myregistry -RepositoryName samples/nginx -Tag latest
+```
+++
## Next steps
Now that you know the basics, you're ready to start using your registry! For example, deploy container images from your registry to:
cosmos-db Autoscale Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/autoscale-faq.md
- Title: Frequently asked questions on autoscale provisioned throughput in Azure Cosmos DB
-description: Frequently asked questions about autoscale provisioned throughput for Azure Cosmos DB databases and containers
---- Previously updated : 12/11/2020--
-# Frequently asked questions about autoscale provisioned throughput in Azure Cosmos DB
-
-With autoscale provisioned throughput, Azure Cosmos DB will automatically manage and scale the RU/s of your database or container based on usage. This article answers commonly asked questions about autoscale.
-
-## Frequently asked questions
-
-### What is the difference between "autopilot" and "autoscale" in Azure Cosmos DB?
-"Autoscale" or "autoscale provisioned throughput" is the updated name for the feature, previously known as "autopilot." With the current release of autoscale, we've added new features, including the ability to set custom max RU/s and programmatic support.
-
-### What happens to databases or containers created in the previous autopilot tier model?
-Resources that were created with the previous tier model are automatically supported with the new autoscale custom maximum RU/s model. The upper bound of the tier becomes the new maximum RU/s, which results in the same scale range.
-
-For example, if you previously selected the tier that scaled between 400 to 4000 RU/s, the database or container will now show as having a maximum RU/s of 4000 RU/s, which scales between 400 to 4000 RU/s. From here, you can change the maximum RU/s to a custom value to suit your workload.
-
-### How quickly will autoscale scale up based on spikes in traffic?
-With autoscale, the system scales the throughput (RU/s) `T` up or down within the `0.1 * Tmax` and `Tmax` range, based on incoming traffic. Because the scaling is automatic and instantaneous, at any point in time, you can consume up to the provisioned `Tmax` with no delay.
-
-### How do I determine what RU/s the system is currently scaled to?
-Use [Azure Monitor metrics](how-to-choose-offer.md#measure-and-monitor-your-usage) to monitor both the provisioned autoscale max RU/s and the current throughput (RU/s) the system is scaled to.
-
-### What is the pricing for autoscale?
-Each hour, you will be billed for the highest throughput `T` the system scaled to within the hour. If your resource had no requests during the hour or did not scale beyond `0.1 * Tmax`, you will be billed for the minimum of `0.1 * Tmax`. Refer to the Azure Cosmos DB [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for details.
-
-### How does autoscale show up on my bill?
-In single write region accounts, the autoscale rate per 100 RU/s is 1.5x the rate of standard (manual) provisioned throughput. On your bill, you will see the existing standard provisioned throughput meter. The quantity of this meter will be multiplied by 1.5. For example, if the highest RU/s the system scaled to within an hour was 6000 RU/s, you'd be billed 60 * 1.5 = 90 units of the meter for that hour.
-
-In accounts with multiple write regions, the autoscale rate per 100 RU/s is the same as the rate for standard (manual) provisioned multiple write region throughput. On your bill, you will see the existing multiple write regions meter. Since the rates are the same, if you use autoscale, you'll see the same quantity as with standard throughput.
-
-### Does autoscale work with reserved capacity?
-Yes. When you purchase reserved capacity for accounts with single write regions, the reservation discount for autoscale resources is applied to your meter usage at a ratio of 1.5 * the [ratio of the specific region](../cost-management-billing/reservations/understand-cosmosdb-reservation-charges.md#reservation-discount-per-region).
-
-Multi-write region reserved capacity works the same for autoscale and standard (manual) provisioned throughput. See [Azure Cosmos DB reserved capacity](cosmos-db-reserved-capacity.md)
-
-### Does autoscale work with free tier?
-Yes. In free tier, you can use autoscale throughput on a container. Support for autoscale shared throughput databases with custom max RU/s is not yet available. See how [free tier billing works with autoscale](understand-your-bill.md#azure-free-tier).
-
-### Is autoscale supported for all APIs?
-Yes, autoscale is supported for all APIs: Core (SQL), Gremlin, Table, Cassandra, and API for MongoDB.
-
-### Is autoscale supported for multi-region write accounts?
-Yes. The max RU/s are available in each region that is added to the Azure Cosmos DB account.
-
-### How do I enable autoscale on new databases or containers?
-See this article on [how to enable autoscale](how-to-provision-autoscale-throughput.md).
-
-### Can I enable autoscale on an existing database or a container?
-Yes. You can also switch between autoscale and standard (manual) provisioned throughput as needed. Currently, for all APIs, you can only use the [Azure portal](how-to-provision-autoscale-throughput.md#enable-autoscale-on-existing-database-or-container) to do these operations.
-
-### How does the migration between autoscale and standard (manual) provisioned throughput work?
-Conceptually, changing the throughput type is a two-stage process. First, you send a request to change the throughput settings to use either autoscale or manual provisioned throughput. In both cases, the system will automatically determine and set an initial RU/s value, based on the current throughput settings and storage. During this step, no user-provided RU/s value will be accepted. Then, after the update is complete, you can [change the RU/s](#can-i-change-the-max-rus-on-the-database-or-container) to suit your workload.
-
-**Migration from standard (manual) provisioned throughput to autoscale**
-
-For a container, use the following formula to estimate the initial autoscale max RU/s: ``MAX(4000, current manual provisioned RU/s, maximum RU/s ever provisioned / 10, storage in GB * 100)``, rounded to the nearest 1000 RU/s. The actual initial autoscale max RU/s may vary depending on your account configuration.
-
-Example #1: Suppose you have a container with 10,000 RU/s manual provisioned throughput, and 25 GB of storage. When you enable autoscale, the initial autoscale max RU/s will be: 10,000 RU/s, which will scale between 1000 - 10,000 RU/s.
-
-Example #2: Suppose you have a container with 50,000 RU/s manual provisioned throughput, and 2500 GB of storage. When you enable autoscale, the initial autoscale max RU/s will be: 250,000 RU/s, which will scale between 25,000 - 250,000 RU/s.
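As a rough illustration of the estimate described above, the following Python sketch applies the formula to the two examples; it assumes the maximum RU/s ever provisioned equals the current manual RU/s, which matches the examples:

```python
def initial_autoscale_max_rus(manual_rus, max_rus_ever, storage_gb):
    """Estimate of the initial autoscale max RU/s; the actual value may vary."""
    estimate = max(4000, manual_rus, max_rus_ever / 10, storage_gb * 100)
    return round(estimate / 1000) * 1000  # rounded to the nearest 1000 RU/s

print(initial_autoscale_max_rus(10_000, 10_000, 25))    # Example #1: 10000
print(initial_autoscale_max_rus(50_000, 50_000, 2500))  # Example #2: 250000
```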
-
-**Migration from autoscale to standard (manual) provisioned throughput**
-
-The initial manual provisioned throughput will be equal to the current autoscale max RU/s.
-
-Example: Suppose you have an autoscale database or container with max RU/s of 20,000 RU/s (scales between 2000 - 20,000 RU/s). When you update to use manual provisioned throughput, the initial throughput will be 20,000 RU/s.
-
-### Is there Azure CLI or PowerShell support to manage databases or containers with autoscale?
-Currently, you can only create and manage resources with autoscale from the Azure portal, .NET V3 SDK, Java V4 SDK, and Azure Resource Manager. Support in Azure CLI, PowerShell, and other SDKs is not yet available.
-
-### Is autoscale supported for shared throughput databases?
-Yes, autoscale is supported for shared throughput databases. To enable this feature, select autoscale and the **Provision throughput** option when creating the database.
-
-### What is the number of allowed containers per shared throughput database when autoscale is enabled?
-Azure Cosmos DB enforces a maximum of 25 containers in a shared throughput database, which applies to databases with autoscale or standard (manual) throughput.
-
-### What is the impact of autoscale on database consistency level?
-There is no impact of the autoscale on consistency level of the database.
-See the [consistency levels](consistency-levels.md) article for more information regarding available consistency levels.
-
-### What is the storage limit associated with each max RU/s option?
-The storage limit in GB for each max RU/s is: Max RU/s of database or container / 100. For example, if the max RU/s is 20,000 RU/s, the resource can support 200 GB of storage.
-See the [autoscale limits](provision-throughput-autoscale.md#autoscale-limits) article for the available max RU/s and storage options.
-
-### What happens if I exceed the storage limit associated with my max throughput?
-If the storage limit associated with the max throughput of the database or container is exceeded, Azure Cosmos DB will automatically increase the max throughput to the next highest RU/s that can support that level of storage.
-
-For example, if you start with a max RU/s of 50,000 RU/s (scales between 5000 - 50,000 RU/s), you can store up to 500 GB of data. If you exceed 500 GB - e.g. storage is now 600 GB, the new maximum RU/s will be 60,000 RU/s (scales between 6000 - 60,000 RU/s).
-
-### Can I change the max RU/s on the database or container?
-Yes. See this [article](how-to-provision-autoscale-throughput.md) on how to change the max RU/s. When you change the max RU/s, depending on the requested value, this can be an asynchronous operation that may take some time to complete (may be up to 4-6 hours, depending on the RU/s selected)
-
-#### Increasing the max RU/s
-When you send a request to increase the max RU/s `Tmax`, depending on the max RU/s selected, the service provisions more resources to support the higher max RU/s. While this is happening, your existing workload and operations will not be affected. The system will continue to scale your database or container between the previous `0.1*Tmax` to `Tmax` until the new scale range of `0.1*Tmax_new` to `Tmax_new` is ready.
-
-#### Lowering the max RU/s
-When you lower the max RU/s, the minimum value you can set it to is: `MAX(4000, highest max RU/s ever provisioned / 10, current storage in GB * 100)`, rounded to the nearest 1000 RU/s.
-
-Example #1: Suppose you have an autoscale container with max RU/s of 20,000 RU/s (scales between 2000 - 20,000 RU/s) and 50 GB of storage. The lowest, minimum value you can set max RU/s to is: MAX(4000, 20,000 / 10, **50 * 100**) = 5000 RU/s (scales between 500 - 5000 RU/s).
-
-Example #2: Suppose you have an autoscale container with max RU/s of 100,000 RU/s and 100 GB of storage. Now, you scale max RU/s up to 150,000 RU/s (scales between 15,000 - 150,000 RU/s). The lowest, minimum value you can now set max RU/s to is: MAX(4000, **150,000 / 10**, 100 * 100) = 15,000 RU/s (scales between 1500 - 15,000 RU/s).
-
-For a shared throughput database, when you lower the max RU/s, the minimum value you can set it to is: `MAX(4000, highest max RU/s ever provisioned / 10, current storage in GB * 100, 4000 + (MAX(Container count - 25, 0) * 1000))`, rounded to the nearest 1000 RU/s.
-
-The above formulas and examples relate to the minimum autoscale max RU/s you can set, and is distinct from the `0.1 * Tmax` to `Tmax` range the system automatically scales between. No matter what the max RU/s is, the system will always scale between `0.1 * Tmax` to `Tmax`.
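The two lowering examples can be reproduced with a small Python sketch of the formulas described above; the shared-throughput-database variant is included behind an optional container count parameter:

```python
def min_allowed_max_rus(highest_max_rus_ever, storage_gb, container_count=None):
    """Lowest max RU/s you can set, per the formulas described above."""
    floor = max(4000, highest_max_rus_ever / 10, storage_gb * 100)
    if container_count is not None:  # shared throughput database variant
        floor = max(floor, 4000 + max(container_count - 25, 0) * 1000)
    return round(floor / 1000) * 1000  # rounded to the nearest 1000 RU/s

print(min_allowed_max_rus(20_000, 50))    # Example #1: 5000
print(min_allowed_max_rus(150_000, 100))  # Example #2: 15000
```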
-
-### How does TTL work with autoscale?
-With autoscale, TTL operations do not affect the scaling of RU/s. Any RUs consumed due to TTL are not part of the billed RU/s of the autoscale container.
-
-For example, suppose you have an autoscale container with 400 - 4000 RU/s:
-- Hour 1: T=0: The container has no usage (no TTL or workload requests). The billable RU/s is 400 RU/s.
-- Hour 1: T=1: TTL is enabled.
-- Hour 1: T=2: The container starts getting requests, which consume 1000 RU in 1 second. There are also 200 RUs worth of TTL that need to happen.
-The billable RU/s is still 1000 RU/s. Regardless of when the TTLs occur, they will not affect the autoscale scaling logic.
-
-### What is the mapping between the max RU/s and physical partitions?
-When you first select the max RU/s, Azure Cosmos DB will provision: Max RU/s / 10,000 RU/s = # of physical partitions. Each [physical partition](partitioning-overview.md#physical-partitions) can support up to 10,000 RU/s and 50 GB of storage. As storage size grows, Azure Cosmos DB will automatically split the partitions to add more physical partitions to handle the storage increase, or increase the max RU/s if storage [exceeds the associated limit](#what-is-the-storage-limit-associated-with-each-max-rus-option).
-
-The max RU/s of the database or container is divided evenly across all physical partitions. So, the total throughput any single physical partition can scale to is: Max RU/s of database or container / # physical partitions.
-
-### What happens if incoming requests exceed the max RU/s of the database or container?
-If the overall consumed RU/s exceeds the max RU/s of the database or container, requests that exceed the max RU/s will be throttled and return a 429 status code. Requests that result in over 100% normalized utilization will also be throttled. Normalized utilization is defined as the max of the RU/s utilization across all physical partitions. For example, suppose your max throughput is 20,000 RU/s and you have two physical partitions, P_1 and P_2, each capable of scaling to 10,000 RU/s. In a given second, if P_1 has used 6000 RUs, and P_2 8000 RUs, the normalized utilization is MAX(6000 RU / 10,000 RU, 8000 RU / 10,000 RU) = 0.8.
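A short Python sketch of the normalized-utilization calculation in this example:

```python
# Two physical partitions, each able to scale to 10,000 RU/s, consuming
# 6000 and 8000 RU in a given second.
consumed = [6000, 8000]
normalized_utilization = max(ru / 10_000 for ru in consumed)
print(normalized_utilization)  # 0.8; requests over 100% utilization are rate limited (429)
```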
-
-> [!NOTE]
-> The Azure Cosmos DB client SDKs and data import tools (Azure Data Factory, bulk executor library) automatically retry on 429s, so occasional 429s are fine. A sustained high number of 429s may indicate you need to increase the max RU/s or review your partitioning strategy for a [hot partition](#autoscale-rate-limiting).
-
-### <a id="autoscale-rate-limiting"></a> Is it still possible to see 429s (throttling/rate limiting) when autoscale is enabled?
-Yes. It is possible to see 429s in two scenarios. First, when the overall consumed RU/s exceeds the max RU/s of the database or container, the service will throttle requests accordingly.
-
-Second, if there is a hot partition, i.e. a logical partition key value that has a disproportionately higher amount of requests compared to other partition key values, it is possible for the underlying physical partition to exceed its RU/s budget. As a best practice, to avoid hot partitions, [choose a good partition key](partitioning-overview.md#choose-partitionkey) that results in an even distribution of both storage and throughput.
-
-For example, if you select the 20,000 RU/s max throughput option and have 200 GB of storage, with four physical partitions, each physical partition can be autoscaled up to 5000 RU/s. If there was a hot partition on a particular logical partition key, you will see 429s when the underlying physical partition it resides in exceeds 5000 RU/s, i.e. exceeds 100% normalized utilization.
--
-## Next steps
-
-* Learn how to [enable autoscale on an Azure Cosmos DB database or container](how-to-provision-autoscale-throughput.md).
-* Learn about the [benefits of provisioned throughput with autoscale](provision-throughput-autoscale.md#benefits-of-autoscale).
-* Learn more about [logical and physical partitions](partitioning-overview.md).
-
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/concepts-limits.md
Azure Cosmos DB maintains system metadata for each account. This metadata allows
## Limits for autoscale provisioned throughput
-See the [Autoscale](provision-throughput-autoscale.md#autoscale-limits) article and [FAQ](autoscale-faq.md#lowering-the-max-rus) for more detailed explanation of the throughput and storage limits with autoscale.
+See the [Autoscale](provision-throughput-autoscale.md#autoscale-limits) article and [FAQ](autoscale-faq.yml#lowering-the-max-ru-s) for more detailed explanation of the throughput and storage limits with autoscale.
| Resource | Default limit | | | |
cosmos-db Create Mongodb Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-mongodb-dotnet.md
This quickstart demonstrates how to create a Cosmos account with [Azure Cosmos D
## Prerequisites to run the sample app
-To run the sample, you'll need [Visual Studio](https://www.visualstudio.com/downloads/) and a valid Azure Cosmos DB account.
+* [Visual Studio](https://www.visualstudio.com/downloads/)
+* An Azure Cosmos DB account.
If you don't already have Visual Studio, download [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/) with the **ASP.NET and web development** workload installed with setup.
The sample described in this article is compatible with MongoDB.Driver version 2
## Clone the sample app
-First, download the sample app from GitHub.
+Run the following commands in a Git-enabled command window, such as [Git bash](https://git-scm.com/downloads):
-1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+```bash
+mkdir "C:\git-samples"
+cd "C:\git-samples"
+git clone https://github.com/Azure-Samples/azure-cosmos-db-mongodb-dotnet-getting-started.git
+```
- ```bash
- mkdir "C:\git-samples"
- ```
-
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
-
- ```bash
- cd "C:\git-samples"
- ```
-
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+The preceding commands:
- ```bash
- git clone https://github.com/Azure-Samples/azure-cosmos-db-mongodb-dotnet-getting-started.git
- ```
+1. Create the *C:\git-samples* directory for the sample. Choose a folder appropriate for your operating system.
+1. Change your current directory to the *C:\git-samples* folder.
+1. Clone the sample into the *C:\git-samples* folder.
If you don't wish to use git, you can also [download the project as a ZIP file](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-dotnet-getting-started/archive/master.zip).

## Review the code
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string).
+1. In Visual Studio, right-click on the project in **Solution Explorer** and then click **Manage NuGet Packages**.
+1. In the NuGet **Browse** box, type *MongoDB.Driver*.
+1. From the results, install the **MongoDB.Driver** library. This installs the MongoDB.Driver package as well as all dependencies.
-The following snippets are all taken from the Dal.cs file in the DAL directory.
+The following steps are optional. If you're interested in learning how the database resources are created in the code, review the following snippets. Otherwise, skip ahead to [Update your connection string](#update-the-connection-string).
-* Initialize the client.
+The following snippets are from the *DAL/Dal.cs* file.
+
+* The following code initializes the client:
```cs MongoClientSettings settings = new MongoClientSettings();
The following snippets are all taken from the Dal.cs file in the DAL directory.
MongoClient client = new MongoClient(settings); ```
-* Retrieve the database and the collection.
+* The following code retrieves the database and the collection:
```cs private string dbName = "Tasks";
The following snippets are all taken from the Dal.cs file in the DAL directory.
var todoTaskCollection = database.GetCollection<MyTask>(collectionName); ```
-* Retrieve all documents.
+* The following code retrieves all documents:
```cs collection.Find(new BsonDocument()).ToList(); ```
-Create a task and insert it into the collection
+The following code creates a task and inserts it into the collection:
```csharp public void CreateTask(MyTask task)
Create a task and insert it into the collection
} } ```
- Similarly, you can update and delete documents by using the [collection.UpdateOne()](https://docs.mongodb.com/stitch/mongodb/actions/collection.updateOne/https://docsupdatetracker.net/index.html) and [collection.DeleteOne()](https://docs.mongodb.com/stitch/mongodb/actions/collection.deleteOne/https://docsupdatetracker.net/index.html) methods.
-
-## Update your connection string
+ Similarly, you can update and delete documents by using the [collection.UpdateOne()](https://docs.mongodb.com/stitch/mongodb/actions/collection.updateOne/https://docsupdatetracker.net/index.html) and [collection.DeleteOne()](https://docs.mongodb.com/stitch/mongodb/actions/collection.deleteOne/https://docsupdatetracker.net/index.html) methods.
-Now go back to the Azure portal to get your connection string information and copy it into the app.
+## Update the connection string
-1. In the [Azure portal](https://portal.azure.com/), in your Cosmos account, in the left navigation click **Connection String**, and then click **Read-write Keys**. You'll use the copy buttons on the right side of the screen to copy the Username, Password, and Host into the Dal.cs file in the next step.
+From the Azure portal copy the connection string information:
-2. Open the **Dal.cs** file in the **DAL** directory.
+1. In the [Azure portal](https://portal.azure.com/), select your Cosmos account, in the left navigation click **Connection String**, and then click **Read-write Keys**. You'll use the copy buttons on the right side of the screen to copy the Username, Password, and Host into the Dal.cs file in the next step.
-3. Copy your **username** value from the portal (using the copy button) and make it the value of the **username** in your **Dal.cs** file.
+2. Open the *DAL/Dal.cs* file.
-4. Then copy your **host** value from the portal and make it the value of the **host** in your **Dal.cs** file.
+3. Copy the **username** value from the portal (using the copy button) and make it the value of the **username** in the **Dal.cs** file.
-5. Finally copy your **password** value from the portal and make it the value of the **password** in your **Dal.cs** file.
+4. Copy the **host** value from the portal and make it the value of the **host** in the **Dal.cs** file.
-You've now updated your app with all the info it needs to communicate with Cosmos DB.
-
-## Run the web app
+5. Copy the **password** value from the portal and make it the value of the **password** in your **Dal.cs** file.
-1. In Visual Studio, right-click on the project in **Solution Explorer** and then click **Manage NuGet Packages**.
+<!-- TODO Store PW correctly-->
+> [!WARNING]
+> Never check passwords or other sensitive data into source code.
-2. In the NuGet **Browse** box, type *MongoDB.Driver*.
+You've now updated your app with all the info it needs to communicate with Cosmos DB.
-3. From the results, install the **MongoDB.Driver** library. This installs the MongoDB.Driver package as well as all dependencies.
-
-4. Click CTRL + F5 to run the application. Your app displays in your browser.
+## Run the web app
-5. Click **Create** in the browser and create a few new tasks in your task list app.
+1. Click CTRL + F5 to run the app. The default browser is launched with the app.
+1. Click **Create** in the browser and create a few new tasks in your task list app.
+<!--
+## Deploy the app to Azure
+1. In VS, right click .. publish
+2. This is so easy, why is this critical step missed?
+-->
## Review SLAs in the Azure portal
[!INCLUDE [cosmosdb-tutorial-review-slas](../../includes/cosmos-db-tutorial-review-slas.md)]
You've now updated your app with all the info it needs to communicate with Cosmo
In this quickstart, you've learned how to create a Cosmos account, create a collection and run a console app. You can now import additional data to your Cosmos database.

> [!div class="nextstepaction"]
-> [Import MongoDB data into Azure Cosmos DB](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
+> [Import MongoDB data into Azure Cosmos DB](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
cosmos-db Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/faq.md
- Title: Frequently asked questions on different APIs in Azure Cosmos DB
-description: Get answers to frequently asked questions about Azure Cosmos DB, a globally distributed, multi-model database service. Learn about capacity, performance levels, and scaling.
--- Previously updated : 09/01/2019---
-# Frequently asked questions about different APIs in Azure Cosmos DB
-
-### What are the typical use cases for Azure Cosmos DB?
-
-Azure Cosmos DB is a good choice for new web, mobile, gaming, and IoT applications where automatic scale, predictable performance, fast order of millisecond response times, and the ability to query over schema-free data is important. Azure Cosmos DB lends itself to rapid development and supporting the continuous iteration of application data models. Applications that manage user-generated content and data are [common use cases for Azure Cosmos DB](use-cases.md).
-
-### How does Azure Cosmos DB offer predictable performance?
-
-A [request unit](request-units.md) (RU) is the measure of throughput in Azure Cosmos DB. A 1RU throughput corresponds to the throughput of the GET of a 1-KB document. Every operation in Azure Cosmos DB, including reads, writes, SQL queries, and stored procedure executions, has a deterministic RU value that's based on the throughput required to complete the operation. Instead of thinking about CPU, IO, and memory and how they each affect your application throughput, you can think in terms of a single RU measure.
-
-You can configure each Azure Cosmos container with provisioned throughput in terms of RUs of throughput per second. For applications of any scale, you can benchmark individual requests to measure their RU values, and provision a container to handle the total of request units across all requests. You can also scale up or scale down your container's throughput as the needs of your application evolve. For more information about request units and for help with determining your container needs, try the [throughput calculator](https://www.documentdb.com/capacityplanner).
-
-### How does Azure Cosmos DB support various data models such as key/value, columnar, document, and graph?
-
-Key/value (table), columnar, document, and graph data models are all natively supported because of the ARS (atoms, records, and sequences) design that Azure Cosmos DB is built on. Atoms, records, and sequences can be easily mapped and projected to various data models. The APIs for a subset of models are available right now (SQL, MongoDB, Table, and Gremlin) and others specific to additional data models will be available in the future.
-
-Azure Cosmos DB has a schema agnostic indexing engine capable of automatically indexing all the data it ingests without requiring any schema or secondary indexes from the developer. The engine relies on a set of logical index layouts (inverted, columnar, tree) which decouple the storage layout from the index and query processing subsystems. Cosmos DB also has the ability to support a set of wire protocols and APIs in an extensible manner and translate them efficiently to the core data model (1) and the logical index layouts (2) making it uniquely capable of supporting more than one data model natively.
-
-### Can I use multiple APIs to access my data?
-
-Azure Cosmos DB is Microsoft's globally distributed, multi-model database service. Multi-model means that Azure Cosmos DB supports multiple APIs and multiple data models, and different APIs use different data formats for storage and wire protocol. For example, SQL uses JSON, MongoDB uses BSON, Table uses EDM, Cassandra uses CQL, and Gremlin uses JSON format. As a result, we recommend using the same API for all access to the data in a given account.
-
-Each API operates independently, except the Gremlin and SQL API, which are interoperable.
-
-### Is Azure Cosmos DB HIPAA compliant?
-
-Yes, Azure Cosmos DB is HIPAA-compliant. HIPAA establishes requirements for the use, disclosure, and safeguarding of individually identifiable health information. For more information, see the [Microsoft Trust Center](/compliance/regulatory/offering-hipaa-hitech).
-
-### What are the storage limits of Azure Cosmos DB?
-
-There's no limit to the total amount of data that a container can store in Azure Cosmos DB.
-
-### What are the throughput limits of Azure Cosmos DB?
-
-There's no limit to the total amount of throughput that a container can support in Azure Cosmos DB. The key idea is to distribute your workload roughly evenly among a sufficiently large number of partition keys.
-
-### Are Direct and Gateway connectivity modes encrypted?
-
-Yes, both modes are always fully encrypted.
-
-### How much does Azure Cosmos DB cost?
-
-For details, refer to the [Azure Cosmos DB pricing details](https://azure.microsoft.com/pricing/details/cosmos-db/) page. Azure Cosmos DB usage charges are determined by the number of provisioned containers, the number of hours the containers were online, and the provisioned throughput for each container.
-
-### Is a free account available?
-
-Yes, you can sign up for a time-limited account at no charge, with no commitment. To sign up, visit [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) or read more in the [Try Azure Cosmos DB FAQ](#try-cosmos-db).
-
-If you're new to Azure, you can sign up for an [Azure free account](https://azure.microsoft.com/free/), which gives you 30 days and a credit to try all the Azure services. If you have a Visual Studio subscription, you're also eligible for [free Azure credits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) to use on any Azure service.
-
-You can also use the [Azure Cosmos DB Emulator](local-emulator.md) to develop and test your application locally for free, without creating an Azure subscription. When you're satisfied with how your application is working in the Azure Cosmos DB Emulator, you can switch to using an Azure Cosmos DB account in the cloud.
-
-### How can I get additional help with Azure Cosmos DB?
-
-To ask a technical question, you can post to one of these two question and answer forums:
-
-* [Microsoft Q&A question page](/answers/topics/azure-cosmos-db.html)
-* [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-cosmosdb). Stack Overflow is best for programming questions. Make sure your question is [on-topic](https://stackoverflow.com/help/on-topic) and [provide as many details as possible, making the question clear and answerable](https://stackoverflow.com/help/how-to-ask).
-
-To request new features, create a new request on [User voice](https://feedback.azure.com/forums/263030-azure-cosmos-db).
-
-To fix an issue with your account, file a [support request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
-
-## <a id="try-cosmos-db"></a>Try Azure Cosmos DB subscriptions
-
-You can now enjoy a time-limited Azure Cosmos DB experience without a subscription, free of charge and commitments. To sign up for a Try Azure Cosmos DB subscription, go to [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) and use any personal Microsoft account (MSA). This subscription is separate from the [Azure Free Trial](https://azure.microsoft.com/free/), and can be used along with an Azure Free Trial or an Azure paid subscription.
-
-Try Azure Cosmos DB subscriptions appear in the Azure portal next to other subscriptions associated with your user ID.
-
-The following conditions apply to Try Azure Cosmos DB subscriptions:
-
-* Account access can be granted to personal Microsoft accounts (MSA). Avoid using Azure Active Directory (Azure AD) accounts or accounts belonging to corporate Azure AD tenants; they might have limitations in place that could block access granting.
-* One [throughput provisioned container](./set-throughput.md#set-throughput-on-a-container) per subscription for SQL, Gremlin API, and Table accounts.
-* Up to three [throughput provisioned collections](./set-throughput.md#set-throughput-on-a-container) per subscription for MongoDB accounts.
-* One [throughput provisioned database](./set-throughput.md#set-throughput-on-a-database) per subscription. Throughput provisioned databases can contain any number of containers inside.
-* 10-GB storage capacity.
-* Global replication is available in the following [Azure regions](https://azure.microsoft.com/regions/): Central US, North Europe, and Southeast Asia
-* Maximum throughput of 5 K RU/s when provisioned at the container level.
-* Maximum throughput of 20 K RU/s when provisioned at the database level.
-* Subscriptions expire after 30 days, and can be extended to a maximum of 31 days total. After expiration, the information contained is deleted.
-* Azure support tickets can't be created for Try Azure Cosmos DB accounts; however, support is provided for subscribers with existing support plans.
-
-## Set up Azure Cosmos DB
-
-### How do I sign up for Azure Cosmos DB?
-
-Azure Cosmos DB is available in the Azure portal. First, sign up for an Azure subscription. After you've signed up, you can add an Azure Cosmos DB account to your Azure subscription.
-
-### What is a primary key?
-
-A primary key is a security token used to access all resources for an account. Individuals with the key have read and write access to all resources in the database account. Use caution when you distribute primary keys. The primary and secondary keys are available on the **Keys** blade of the [Azure portal][azure-portal]. For more information about keys, see [View, copy, and regenerate access keys](manage-with-cli.md#list-account-keys).
-
-### What are the regions that PreferredLocations can be set to?
-
-The PreferredLocations value can be set to any of the Azure regions in which Cosmos DB is available. For a list of available regions, see [Azure regions](https://azure.microsoft.com/regions/).
-
-### Is there anything I should be aware of when distributing data across the world via the Azure datacenters?
-
-Azure Cosmos DB is present across all Azure regions, as specified on the [Azure regions](https://azure.microsoft.com/regions/) page. Because it's the core service, every new datacenter has an Azure Cosmos DB presence.
-
-When you set a region, remember that Azure Cosmos DB respects sovereign and government clouds. That is, if you create an account in a [sovereign region](https://azure.microsoft.com/global-infrastructure/), you can't replicate out of that [sovereign region](https://azure.microsoft.com/global-infrastructure/). Similarly, you can't enable replication into other sovereign locations from an outside account.
-
-### Is it possible to switch from container-level throughput provisioning to database-level throughput provisioning, or vice versa?
-
-Container-level and database-level throughput provisioning are separate offerings, and switching between them requires migrating data from source to destination. This means you need to create a new database or a new container and then migrate the data by using the [bulk executor library](bulk-executor-overview.md) or [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md).
-
-### Does Azure Cosmos DB support time series analysis?
-
-Yes, Azure Cosmos DB supports time series analysis. Here is a sample for the [time series pattern](https://github.com/Azure/azure-cosmosdb-dotnet/tree/master/samples/Patterns). This sample shows how to use change feed to build aggregated views over time series data. You can extend this approach by using Spark streaming or another stream data processor.
-
-## What are the Azure Cosmos DB service quotas and throughput limits?
-
-See the Azure Cosmos DB [service quotas](concepts-limits.md) and [throughput limits per container and database](set-throughput.md#comparison-of-models) articles for more information.
-
-## <a id="sql-api-faq"></a>Frequently asked questions about SQL API
-
-### How do I start developing against the SQL API?
-
-First you must sign up for an Azure subscription. Once you sign up for an Azure subscription, you can add a SQL API container to your Azure subscription. For instructions on adding an Azure Cosmos DB account, see [Create an Azure Cosmos database account](create-sql-api-dotnet.md#create-account).
-
-[SDKs](sql-api-sdk-dotnet.md) are available for .NET, Python, Node.js, JavaScript, and Java. Developers can also use the [RESTful HTTP APIs](/rest/api/cosmos-db/) to interact with Azure Cosmos DB resources from various platforms and languages.
-
-### Can I access some ready-made samples to get a head start?
-
-Samples for the SQL API [.NET](sql-api-dotnet-samples.md), [Java](https://github.com/Azure/azure-documentdb-java), [Node.js](sql-api-nodejs-samples.md), and [Python](sql-api-python-samples.md) SDKs are available on GitHub.
-
-### Does the SQL API database support schema-free data?
-
-Yes, the SQL API allows applications to store arbitrary JSON documents without schema definitions or hints. Data is immediately available for query through the Azure Cosmos DB SQL query interface.
-
-### Does the SQL API support ACID transactions?
-
-Yes, the SQL API supports cross-document transactions expressed as JavaScript-stored procedures and triggers. Transactions are scoped to a single partition within each container and executed with ACID semantics as "all or nothing," isolated from other concurrently executing code and user requests. If exceptions are thrown through the server-side execution of JavaScript application code, the entire transaction is rolled back.
-
-### What is a container?
-
-A container is a group of documents and their associated JavaScript application logic. A container is a billable entity, where the [cost](performance-levels.md) is determined by the throughput and used storage. Containers can span one or more partitions or servers and can scale to handle practically unlimited volumes of storage or throughput.
-
-* For SQL API, the resource is called a container.
-* For Cosmos DB's API for MongoDB accounts, a container maps to a Collection.
-* For Cassandra and Table API accounts, a container maps to a Table.
-* For Gremlin API accounts, a container maps to a Graph.
-
-Containers are also the billing entities for Azure Cosmos DB. Each container is billed hourly, based on the provisioned throughput and used storage space. For more information, see [Azure Cosmos DB Pricing](https://azure.microsoft.com/pricing/details/cosmos-db/).
-
-### How do I create a database?
-
-You can create databases by using the [Azure portal](https://portal.azure.com), as described in [Add a container](create-sql-api-java.md#add-a-container), one of the [Azure Cosmos DB SDKs](sql-api-sdk-dotnet.md), or the [REST APIs](/rest/api/cosmos-db/).
-
-### How do I set up users and permissions?
-
-You can create users and permissions by using one of the [Cosmos DB API SDKs](sql-api-sdk-dotnet.md) or the [REST APIs](/rest/api/cosmos-db/).
-
-### Does the SQL API support SQL?
-
-The SQL query language supported by SQL API accounts is an enhanced subset of the query functionality that's supported by SQL Server. The Azure Cosmos DB SQL query language provides rich hierarchical and relational operators and extensibility via JavaScript-based, user-defined functions (UDFs). JSON grammar allows for modeling JSON documents as trees with labeled nodes, which are used by both the Azure Cosmos DB automatic indexing techniques and the SQL query dialect of Azure Cosmos DB. For information about using SQL grammar, see the [SQL Query][query] article.
-
-### Does the SQL API support SQL aggregation functions?
-
-The SQL API supports low-latency aggregation at any scale via aggregate functions `COUNT`, `MIN`, `MAX`, `AVG`, and `SUM` via the SQL grammar. For more information, see [Aggregate functions](sql-query-aggregate-functions.md).
-
-### How does the SQL API provide concurrency?
-
-The SQL API supports optimistic concurrency control (OCC) through HTTP entity tags, or ETags. Every SQL API resource has an ETag, and the ETag is set on the server every time a document is updated. The ETag header and the current value are included in all response messages. ETags can be used with the If-Match header to allow the server to decide whether a resource should be updated. The If-Match value is the ETag value to be checked against. If the ETag value matches the server ETag value, the resource is updated. If the ETag is no longer current, the server rejects the operation with an "HTTP 412 Precondition failure" response code. The client then refetches the resource to acquire the current ETag value for the resource. In addition, ETags can be used with the If-None-Match header to determine whether a refetch of a resource is needed.
-
-To use optimistic concurrency in .NET, use the [AccessCondition](/dotnet/api/microsoft.azure.documents.client.accesscondition) class. For a .NET sample, see [Program.cs](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/samples/code-samples/DocumentManagement/Program.cs) in the DocumentManagement sample on GitHub.
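The following is a rough sketch of the same pattern using the newer .NET SDK (v3, `Microsoft.Azure.Cosmos`), which exposes `IfMatchEtag` on `ItemRequestOptions` instead of `AccessCondition`. The item shape and property names are illustrative assumptions:

```csharp
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public record TaskItem(string id, string partitionKey, string description);

public static class OptimisticConcurrencySample
{
    // Replaces an item only if its ETag still matches the value we read,
    // i.e. no one else has updated it in the meantime.
    public static async Task<bool> TryUpdateAsync(Container container, string id, string pk, string newDescription)
    {
        ItemResponse<TaskItem> read = await container.ReadItemAsync<TaskItem>(id, new PartitionKey(pk));
        TaskItem updated = read.Resource with { description = newDescription };

        try
        {
            await container.ReplaceItemAsync(
                updated,
                id,
                new PartitionKey(pk),
                new ItemRequestOptions { IfMatchEtag = read.ETag });
            return true;
        }
        catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.PreconditionFailed)
        {
            // HTTP 412: the ETag no longer matches; re-read and retry if appropriate.
            return false;
        }
    }
}
```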
-
-### How do I perform transactions in the SQL API?
-
-The SQL API supports language-integrated transactions via JavaScript-stored procedures and triggers. All database operations inside scripts are executed under snapshot isolation. If it's a single-partition container, the execution is scoped to the container. If the container is partitioned, the execution is scoped to documents with the same partition-key value within the container. A snapshot of the document versions (ETags) is taken at the start of the transaction and committed only if the script succeeds. If the JavaScript throws an error, the transaction is rolled back. For more information, see [Server-side JavaScript programming for Azure Cosmos DB](stored-procedures-triggers-udfs.md).
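As a hedged client-side sketch with the v3 .NET SDK (the stored procedure ID `bulkImport` and its parameters are placeholders), executing a stored procedure scoped to a single partition key looks roughly like this:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Scripts;

public static class StoredProcedureSample
{
    // Executes a stored procedure against one logical partition; everything
    // inside the script runs as a single transaction on the server.
    public static async Task<string> RunTransactionAsync(Container container, string partitionKeyValue, dynamic[] args)
    {
        StoredProcedureExecuteResponse<string> response =
            await container.Scripts.ExecuteStoredProcedureAsync<string>(
                "bulkImport",
                new PartitionKey(partitionKeyValue),
                args);
        return response.Resource;
    }
}
```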
-
-### How can I bulk-insert documents into Cosmos DB?
-
-You can bulk-insert documents into Azure Cosmos DB in one of the following ways:
-
-* The bulk executor tool, as described in [Using bulk executor .NET library](bulk-executor-dot-net.md) and [Using bulk executor Java library](bulk-executor-java.md)
-* The data migration tool, as described in [Database migration tool for Azure Cosmos DB](import-data.md).
-* Stored procedures, as described in [Server-side JavaScript programming for Azure Cosmos DB](stored-procedures-triggers-udfs.md).
-
-### Does the SQL API support resource link caching?
-
-Yes, because Azure Cosmos DB is a RESTful service, resource links are immutable and can be cached. SQL API clients can specify an "If-None-Match" header for reads against any resource-like document or container and then update their local copies after the server version has changed.
-
-### Is a local instance of SQL API available?
-
-Yes. The [Azure Cosmos DB Emulator](local-emulator.md) provides a high-fidelity emulation of the Cosmos DB service. It supports functionality that's identical to Azure Cosmos DB, including support for creating and querying JSON documents, provisioning and scaling collections, and executing stored procedures and triggers. You can develop and test applications by using the Azure Cosmos DB Emulator, and deploy them to Azure at a global scale by making a single configuration change to the connection endpoint for Azure Cosmos DB.
-
-### Why are long floating-point values in a document rounded when viewed from Data Explorer in the portal?
-
-This is a limitation of JavaScript. JavaScript uses double-precision floating-point format numbers as specified in IEEE 754, so it can safely hold numbers only between -(2<sup>53</sup> - 1) and 2<sup>53</sup> - 1 (that is, 9007199254740991).
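A small C# illustration of the same precision limit (the rounding happens in any IEEE 754 double, not just in JavaScript):

```csharp
using System;

public static class FloatingPointSample
{
    public static void Main()
    {
        // 2^53 + 1 cannot be represented exactly as an IEEE 754 double,
        // which is why Data Explorer (JavaScript) shows it rounded.
        long original = 9_007_199_254_740_993;
        double asDouble = original;

        Console.WriteLine(original);        // 9007199254740993
        Console.WriteLine((long)asDouble);  // 9007199254740992
    }
}
```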
-
-### Where are permissions allowed in the object hierarchy?
-
-Creating permissions by using ResourceTokens is allowed at the container level and its descendants (such as documents, attachments). This implies that trying to create a permission at the database or an account level isn't currently allowed.
-
-[azure-portal]: https://portal.azure.com
-[query]: ./sql-query-getting-started.md
-
-## Next steps
-
-To learn about frequently asked questions in other APIs, see:
-
-* Frequently asked questions about [Azure Cosmos DB's API for MongoDB](mongodb-api-faq.md)
-* Frequently asked questions about [Gremlin API in Azure Cosmos DB](gremlin-api-faq.md)
-* Frequently asked questions about [Cassandra API in Azure Cosmos DB](cassandra-faq.yml)
-* Frequently asked questions about [Table API in Azure Cosmos DB](table-api-faq.md)
cosmos-db Gremlin Api Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/gremlin-api-faq.md
- Title: Frequently asked questions about the Gremlin API in Azure Cosmos DB
-description: Get answers to frequently asked questions about the Gremlin API in Azure Cosmos DB
---- Previously updated : 04/28/2020---
-# Frequently asked questions about the Gremlin API in Azure Cosmos DB
-
-This article explains answers to some frequently asked questions about Gremlin API in Azure Cosmos DB.
-
-## How to evaluate the efficiency of Gremlin queries
-
-The **executionProfile()** preview step can be used to provide an analysis of the query execution plan. This step needs to be added to the end of any Gremlin query as illustrated by the following example:
-
-**Query example**
-
-```
-g.V('mary').out('knows').executionProfile()
-```
-
-**Example output**
-
-```json
-[
- {
- "gremlin": "g.V('mary').out('knows').executionProfile()",
- "totalTime": 8,
- "metrics": [
- {
- "name": "GetVertices",
- "time": 3,
- "annotations": {
- "percentTime": 37.5
- },
- "counts": {
- "resultCount": 1
- }
- },
- {
- "name": "GetEdges",
- "time": 5,
- "annotations": {
- "percentTime": 62.5
- },
- "counts": {
- "resultCount": 0
- },
- "storeOps": [
- {
- "count": 0,
- "size": 0,
- "time": 0.6
- }
- ]
- },
- {
- "name": "GetNeighborVertices",
- "time": 0,
- "annotations": {
- "percentTime": 0
- },
- "counts": {
- "resultCount": 0
- }
- },
- {
- "name": "ProjectOperator",
- "time": 0,
- "annotations": {
- "percentTime": 0
- },
- "counts": {
- "resultCount": 0
- }
- }
- ]
- }
-]
-```
-
-The output of the above profile shows how much time is spent obtaining the vertex objects, the edge objects, and the size of the working data set. This is related to the standard cost measurements for Azure Cosmos DB queries.
-
-## Other frequently asked questions
-
-### How are RU/s charged when running queries on a graph database?
-
-All graph objects (vertices and edges) are stored as JSON documents in the backend. Because one Gremlin query can modify one or many graph objects at a time, the cost associated with it is directly related to the objects and edges that are processed by the query. This is the same process that Azure Cosmos DB uses for all other APIs. For more information, see [Request Units in Azure Cosmos DB](request-units.md).
-
-The RU charge is based on the working data set of the traversal, and not the result set. For example, if a query aims to obtain a single vertex as a result but needs to traverse more than one other object on the way, then the cost will be based on all the graph objects that it will take to compute the one result vertex.
-
-### What's the maximum scale that a graph database can have in Azure Cosmos DB Gremlin API?
-
-Azure Cosmos DB makes use of [horizontal partitioning](partitioning-overview.md) to automatically address increase in storage and throughput requirements. The maximum throughput and storage capacity of a workload is determined by the number of partitions that are associated with a given container. However, a Gremlin API container has a specific set of guidelines to ensure a proper performance experience at scale. For more information about partitioning, and best practices, see [partitioning in Azure Cosmos DB](partitioning-overview.md) article.
-
-### For C#/.NET development, should I use the Microsoft.Azure.Graphs package or Gremlin.NET?
-
-Azure Cosmos DB Gremlin API leverages the open-source drivers as the main connectors for the service. So the recommended option is to use [drivers that are supported by Apache Tinkerpop](https://tinkerpop.apache.org/).
-
-### How can I protect against injection attacks using Gremlin drivers?
-
-Most native Apache Tinkerpop Gremlin drivers allow the option to provide a dictionary of parameters for query execution. This is an example of how to do it in [Gremlin.Net](https://tinkerpop.apache.org/docs/3.2.7/reference/#gremlin-DotNet) and in [Gremlin-Javascript](https://github.com/Azure-Samples/azure-cosmos-db-graph-nodejs-getting-started/blob/main/app.js).
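For example, a hedged Gremlin.Net (C#) sketch that passes untrusted input as a binding instead of concatenating it into the traversal string (the binding name `vid` and the traversal are illustrative):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Gremlin.Net.Driver;

public static class ParameterizedGremlinSample
{
    // Sends the untrusted value as a binding ("vid") rather than splicing it
    // into the Gremlin string, so it cannot alter the traversal itself.
    public static async Task<IReadOnlyCollection<dynamic>> GetFriendsAsync(
        GremlinClient client, string untrustedVertexId)
    {
        var bindings = new Dictionary<string, object> { ["vid"] = untrustedVertexId };
        return await client.SubmitAsync<dynamic>("g.V(vid).out('knows')", bindings);
    }
}
```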
-
-### Why am I getting the "Gremlin Query Compilation Error: Unable to find any method" error?
-
-Azure Cosmos DB Gremlin API implements a subset of the functionality defined in the Gremlin surface area. For supported steps and more information, see [Gremlin support](gremlin-support.md) article.
-
-The best workaround is to rewrite the required Gremlin steps with the supported functionality, since all essential Gremlin steps are supported by Azure Cosmos DB.
-
-### Why am I getting the "WebSocketException: The server returned status code '200' when status code '101' was expected" error?
-
-This error is likely thrown when the wrong endpoint is being used. The endpoint that generates this error has the following pattern:
-
-`https:// YOUR_DATABASE_ACCOUNT.documents.azure.com:443/`
-
-This is the documents endpoint for your graph database. The correct endpoint to use is the Gremlin Endpoint, which has the following format:
-
-`https://YOUR_DATABASE_ACCOUNT.gremlin.cosmosdb.azure.com:443/`
-
-### Why am I getting the "RequestRateIsTooLarge" error?
-
-This error means that the allocated Request Units per second aren't enough to serve the query. This error is usually seen when you run a query that obtains all vertices:
-
-```
-// Query example:
-g.V()
-```
-
-This query will attempt to retrieve all vertices from the graph. So, the cost of this query will be equal to at least the number of vertices in terms of RUs. The RU/s setting should be adjusted to address this query.
-
-### Why do my Gremlin driver connections get dropped eventually?
-
-A Gremlin connection is made through a WebSocket connection. Although WebSocket connections don't have a specific time to live, Azure Cosmos DB Gremlin API will terminate idle connections after 30 minutes of inactivity.
-
-### Why can't I use fluent API calls in the native Gremlin drivers?
-
-Fluent API calls aren't yet supported by the Azure Cosmos DB Gremlin API. Fluent API calls require an internal formatting feature known as bytecode support that currently isn't supported by Azure Cosmos DB Gremlin API. For the same reason, the latest Gremlin-JavaScript driver is also currently not supported.
-
-## Next steps
-
-* [Azure Cosmos DB Gremlin wire protocol support](gremlin-support.md)
-* Create, query, and traverse an Azure Cosmos DB graph database using the [Gremlin console](create-graph-gremlin-console.md)
cosmos-db How To Access System Properties Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-access-system-properties-gremlin.md
g.addV('vertex-one').property('ttl', 123)
``` ## Next steps
-* [Cosmos DB Optimistic Concurrency](faq.md#how-does-the-sql-api-provide-concurrency)
+* [Cosmos DB Optimistic Concurrency](faq.yml#how-does-the-sql-api-provide-concurrency-)
* [Time to Live (TTL)](time-to-live.md) in Azure Cosmos DB
cosmos-db How To Choose Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-choose-offer.md
When using autoscale, use Azure Monitor to see the provisioned autoscale max RU/
* Use [RU calculator](https://cosmos.azure.com/capacitycalculator/) to estimate throughput for new workloads. * Use [Azure Monitor](monitor-cosmos-db.md#view-operation-level-metrics-for-azure-cosmos-db) to monitor your existing workloads. * Learn how to [provision autoscale throughput on an Azure Cosmos database or container](how-to-provision-autoscale-throughput.md).
-* Review the [autoscale FAQ](autoscale-faq.md).
+* Review the [autoscale FAQ](autoscale-faq.yml).
cosmos-db How To Manage Database Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-manage-database-account.md
Previously updated : 04/25/2021 Last updated : 05/13/2021 # Manage an Azure Cosmos account This article describes how to manage various tasks on an Azure Cosmos account using the Azure portal, Azure PowerShell, Azure CLI, and Azure Resource Manager templates.
cosmos-db How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-move-regions.md
Previously updated : 09/12/2020 Last updated : 05/13/2021 # Move an Azure Cosmos DB account to another region This article describes how to either:
cosmos-db How To Provision Autoscale Throughput https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-provision-autoscale-throughput.md
To provision autoscale on shared throughput database, select the **Provision dat
:::image type="content" source="./media/how-to-provision-autoscale-throughput/autoscale-scale-and-settings.png" alt-text="Enabling autoscale on an existing container"::: > [!NOTE]
-> When you enable autoscale on an existing database or container, the starting value for max RU/s is determined by the system, based on your current manual provisioned throughput settings and storage. After the operation completes, you can change the max RU/s if needed. [Learn more.](autoscale-faq.md#how-does-the-migration-between-autoscale-and-standard-manual-provisioned-throughput-work)
+> When you enable autoscale on an existing database or container, the starting value for max RU/s is determined by the system, based on your current manual provisioned throughput settings and storage. After the operation completes, you can change the max RU/s if needed. [Learn more.](autoscale-faq.yml#how-does-the-migration-between-autoscale-and-standard--manual--provisioned-throughput-work-)
## Azure Cosmos DB .NET V3 SDK
Azure PowerShell can be used to provision autoscale throughput on a database or
* Learn about the [benefits of provisioned throughput with autoscale](provision-throughput-autoscale.md#benefits-of-autoscale). * Learn how to [choose between manual and autoscale throughput](how-to-choose-offer.md).
-* Review the [autoscale FAQ](autoscale-faq.md).
+* Review the [autoscale FAQ](autoscale-faq.yml).
cosmos-db Manage With Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/manage-with-cli.md
Previously updated : 04/25/2021 Last updated : 05/13/2021 # Manage Azure Cosmos Core (SQL) API resources using Azure CLI The following guide describes common commands to automate management of your Azure Cosmos DB accounts, databases and containers using Azure CLI. Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb). You can also find more examples in [Azure CLI samples for Azure Cosmos DB](cli-samples.md), including how to create and manage Cosmos DB accounts, databases and containers for MongoDB, Gremlin, Cassandra and Table API.
cosmos-db Manage With Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/manage-with-powershell.md
Previously updated : 04/25/2021 Last updated : 05/13/2021 # Manage Azure Cosmos DB Core (SQL) API resources using PowerShell The following guide describes how to use PowerShell to script and automate management of Azure Cosmos DB Core (SQL) API resources, including the Cosmos account, database, container, and throughput. For PowerShell cmdlets for other APIs see [PowerShell Samples for Cassandra](powershell-samples-cassandra.md), [PowerShell Samples for MongoDB API](powershell-samples-mongodb.md), [PowerShell Samples for Gremlin](powershell-samples-gremlin.md), [PowerShell Samples for Table](powershell-samples-table.md)
cosmos-db Manage With Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/manage-with-templates.md
Previously updated : 03/24/2021 Last updated : 05/13/2021 # Manage Azure Cosmos DB Core (SQL) API resources with Azure Resource Manager templates- In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB accounts, databases, and containers.
cosmos-db Mongodb Api Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-api-faq.md
- Title: Frequently asked questions about the Azure Cosmos DB's API for MongoDB
-description: Get answers to Frequently asked questions about the Azure Cosmos DB's API for MongoDB
---- Previously updated : 04/28/2020---
-# Frequently asked questions about the Azure Cosmos DB's API for MongoDB
-
-The Azure Cosmos DB's API for MongoDB is a wire-protocol compatibility layer that allows applications to easily and transparently communicate with the native Azure Cosmos database engine by using existing, community-supported SDKs and drivers for MongoDB. Developers can now use existing MongoDB toolchains and skills to build applications that take advantage of Azure Cosmos DB. Developers benefit from the unique capabilities of Azure Cosmos DB, which include global distribution with multi-region write replication, auto-indexing, backup maintenance, financially backed service level agreements (SLAs) etc.
-
-## How do I connect to my database?
-
-The quickest way to connect to a Cosmos database with Azure Cosmos DB's API for MongoDB is to head over to the [Azure portal](https://portal.azure.com). Go to your account and then, on the left navigation menu, click **Quick Start**. Quickstart is the best way to get code snippets to connect to your database.
-
-Azure Cosmos DB enforces strict security requirements and standards. Azure Cosmos DB accounts require authentication and secure communication via TLS, so be sure to use TLSv1.2.
-
-For more information, see [Connect to your Cosmos database with Azure Cosmos DB's API for MongoDB](connect-mongodb-account.md).
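As a hedged .NET sketch (MongoDB.Driver; the connection string comes from the account's **Connection String** pane and is treated as a placeholder here), forcing TLS 1.2 looks roughly like this:

```csharp
using System.Security.Authentication;
using MongoDB.Driver;

public static class MongoConnectionSample
{
    // Builds a client that talks to the Cosmos DB API for MongoDB over TLS 1.2.
    public static MongoClient CreateClient(string connectionString)
    {
        MongoClientSettings settings = MongoClientSettings.FromConnectionString(connectionString);
        settings.SslSettings = new SslSettings { EnabledSslProtocols = SslProtocols.Tls12 };
        return new MongoClient(settings);
    }
}
```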
-
-## Error codes while using Azure Cosmos DB's API for MongoDB?
-
-Along with the common MongoDB error codes, the Azure Cosmos DB's API for MongoDB has its own specific error codes. These can be found in the [Troubleshooting Guide](mongodb-troubleshoot.md).
-
-## Supported drivers
-
-### Is the Simba driver for MongoDB supported for use with Azure Cosmos DB's API for MongoDB?
-
-Yes, you can use Simba's Mongo ODBC driver with Azure Cosmos DB's API for MongoDB
-
-## Next steps
-
-* [Build a .NET web app using Azure Cosmos DB's API for MongoDB](create-mongodb-dotnet.md)
-* [Create a console app with Java and the MongoDB API in Azure Cosmos DB](create-mongodb-java.md)
cosmos-db Prevent Rate Limiting Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/prevent-rate-limiting-errors.md
Title: Prevent rate-limiting errors for Azure Cosmos DB API for MongoDB operations.
-description: Learn how to prevent your Azure Cosmos DB API for MongoDB operations from hitting rate limiting errors with the SSR (server side retry) feature.
+description: Learn how to prevent your Azure Cosmos DB API for MongoDB operations from hitting rate limiting errors with the SSR (server-side retry) feature.
You can enable the Server Side Retry (SSR) feature and let the server retry thes
1. Click **Enable** to enable this feature for all collections in your account. ## Use the Azure CLI 1. Check if SSR is already enabled for your account:
-```bash
-az cosmosdb show --name accountname --resource-group resourcegroupname
-```
-2. **Enable** SSR for all collections in your database account. It may take up to 15min for this change to take effect.
-```bash
-az cosmosdb update --name accountname --resource-group resourcegroupname --capabilities EnableMongo DisableRateLimitingResponses
-```
-The following command will **Disable** SSR for all collections in your database account by removing "DisableRateLimitingResponses" from the capabilities list. It may take up to 15min for this change to take effect.
-```bash
-az cosmosdb update --name accountname --resource-group resourcegroupname --capabilities EnableMongo
-```
-
-## Frequently Asked Questions
-* How are requests retried?
- * Requests are retried continuously (over and over again) until a 60-second timeout is reached. If the timeout is reached, the client will receive an [ExceededTimeLimit exception (50)](mongodb-troubleshoot.md).
-* How can I monitor the effects of SSR?
- * You can view the rate limiting errors (429s) that are retried server-side in the Cosmos DB Metrics pane. Keep in mind that these errors don't go to the client when SSR is enabled, since they are handled and retried server-side.
- * You can search for log entries containing "estimatedDelayFromRateLimitingInMilliseconds" in your [Cosmos DB resource logs](cosmosdb-monitor-resource-logs.md).
-* Will SSR affect my consistency level?
- * SSR does not affect a request's consistency. Requests are retried server-side if they are rate limited (with a 429 error).
-* Does SSR affect any type of error that my client might receive?
- * No, SSR only affects rate limiting errors (429s) by retrying them server-side. This feature prevents you from having to handle rate-limiting errors in the client application. All [other errors](mongodb-troubleshoot.md) will go to the client.
+
+ ```azurecli-interactive
+ az cosmosdb show --name accountname --resource-group resourcegroupname
+ ```
+
+1. **Enable** SSR for all collections in your database account. It may take up to 15 min for this change to take effect.
+
+ ```azurecli-interactive
+ az cosmosdb update --name accountname --resource-group resourcegroupname --capabilities EnableMongo DisableRateLimitingResponses
+ ```
+
+1. To **disable** server-side retry for all collections in your database account, remove `DisableRateLimitingResponses` from the capabilities list by running the following command. It may take up to 15 min for this change to take effect.
+
+ ```azurecli-interactive
+ az cosmosdb update --name accountname --resource-group resourcegroupname --capabilities EnableMongo
+ ```
+
+## Frequently asked questions
+
+### How are requests retried?
+
+Requests are retried continuously (over and over again) until a 60-second timeout is reached. If the timeout is reached, the client will receive an [ExceededTimeLimit exception (50)](mongodb-troubleshoot.md).
+
+### How can I monitor the effects of a server-side retry?
+
+You can view the rate limiting errors (429) that are retried server-side in the Cosmos DB Metrics pane. Keep in mind that these errors don't go to the client when SSR is enabled, since they are handled and retried server-side.
+
+You can search for log entries containing *estimatedDelayFromRateLimitingInMilliseconds* in your [Cosmos DB resource logs](cosmosdb-monitor-resource-logs.md).
+
+### Will server-side retry affect my consistency level?
+
+Server-side retry does not affect a request's consistency. Requests are retried server-side if they are rate limited (with a 429 error).
+
+### Does server-side retry affect any type of error that my client might receive?
+
+No, server-side retry only affects rate limiting errors (429) by retrying them server-side. This feature prevents you from having to handle rate-limiting errors in the client application. All [other errors](mongodb-troubleshoot.md) will go to the client.
## Next steps
cosmos-db Provision Throughput Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/provision-throughput-autoscale.md
The entry point for autoscale maximum throughput `Tmax` starts at 4000 RU/s, whi
## Enable autoscale on existing resources
-Use the [Azure portal](how-to-provision-autoscale-throughput.md#enable-autoscale-on-existing-database-or-container), [Azure Resource Manager template](how-to-provision-autoscale-throughput.md#azure-resource-manager), [CLI](how-to-provision-autoscale-throughput.md#azure-cli) or [PowerShell](how-to-provision-autoscale-throughput.md#azure-powershell) to enable autoscale on an existing database or container. You can switch between autoscale and standard (manual) provisioned throughput at any time. See this [documentation](autoscale-faq.md#how-does-the-migration-between-autoscale-and-standard-manual-provisioned-throughput-work) for more information.
+Use the [Azure portal](how-to-provision-autoscale-throughput.md#enable-autoscale-on-existing-database-or-container), [Azure Resource Manager template](how-to-provision-autoscale-throughput.md#azure-resource-manager), [CLI](how-to-provision-autoscale-throughput.md#azure-cli) or [PowerShell](how-to-provision-autoscale-throughput.md#azure-powershell) to enable autoscale on an existing database or container. You can switch between autoscale and standard (manual) provisioned throughput at any time. See this [documentation](autoscale-faq.yml#how-does-the-migration-between-autoscale-and-standard--manual--provisioned-throughput-work-) for more information.
## <a id="autoscale-limits"></a> Throughput and storage limits for autoscale
For any value of `Tmax`, the database or container can store a total of `0.01 *
For example, if you start with a maximum RU/s of 50,000 RU/s (scales between 5000 - 50,000 RU/s), you can store up to 500 GB of data. If you exceed 500 GB - e.g. storage is now 600 GB, the new maximum RU/s will be 60,000 RU/s (scales between 6000 - 60,000 RU/s).
-When you use database level throughput with autoscale, you can have the first 25 containers share an autoscale maximum RU/s of 4000 (scales between 400 - 4000 RU/s), as long as you don't exceed 40 GB of storage. See this [documentation](autoscale-faq.md#can-i-change-the-max-rus-on-the-database-or-container) for more information.
+When you use database level throughput with autoscale, you can have the first 25 containers share an autoscale maximum RU/s of 4000 (scales between 400 - 4000 RU/s), as long as you don't exceed 40 GB of storage. See this [documentation](autoscale-faq.yml#can-i-change-the-max-ru-s-on-the-database-or-container--) for more information.
## Comparison ΓÇô containers configured with manual vs autoscale throughput For more detail, see this [documentation](how-to-choose-offer.md) on how to choose between standard (manual) and autoscale throughput.
For more detail, see this [documentation](how-to-choose-offer.md) on how to choo
## Next steps
-* Review the [autoscale FAQ](autoscale-faq.md).
+* Review the [autoscale FAQ](autoscale-faq.yml).
* Learn how to [choose between manual and autoscale throughput](how-to-choose-offer.md). * Learn how to [provision autoscale throughput on an Azure Cosmos database or container](how-to-provision-autoscale-throughput.md). * Learn more about [partitioning](partitioning-overview.md) in Azure Cosmos DB.
cosmos-db Rate Limiting Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/rate-limiting-requests.md
+
+ Title: Optimize your Azure Cosmos DB application using rate limiting
+description: This article provides developers with a methodology to rate limit requests to Azure Cosmos DB. Implementing this pattern can reduce errors and improve overall performance for workloads that exceed the provisioned throughput of the target database or container.
+++ Last updated : 05/07/2021+++
+# Optimize your Azure Cosmos DB application using rate limiting
+
+This article provides developers with a methodology to rate limit requests to Azure Cosmos DB. Implementing this pattern can reduce errors and improve overall performance for workloads that exceed the provisioned throughput of the target database or container.
+
+Requests that exceed your provisioned throughput in Azure Cosmos DB can result in transient faults like [TooManyRequests](troubleshoot-request-rate-too-large.md), [Timeout](troubleshoot-request-timeout.md), and [ServiceUnavailable](troubleshoot-service-unavailable.md). Typically, you would retry these requests when capacity is available, and they would succeed. However, this approach can result in a large number of requests following the error path in your code and typically results in reduced throughput.
+
+Optimal system performance, as measured by cost and time, can be achieved by matching the client-side workload traffic to the server-side provisioned throughput.
+
+Consider the following scenario:
+
+* You provision Azure Cosmos DB with 20 K RU/second.
+* Your application processes an ingestion job that contains 10 K records, each of which
+costs 10 RU. The total capacity required to complete this job is 100 K RU.
+* If you send the entire job to Azure Cosmos DB, you should expect a large number of transient faults and a large buffer of requests that you must retry. This happens because the total number of RUs needed for the job (100 K) is much greater than the provisioned maximum (20 K). ~2 K of the records will be accepted into the database, but ~8 K will be rejected. You will send ~8 K records to Azure Cosmos DB on retry, of which ~2 K will be accepted, and so on. You should expect this pattern to send ~30 K records instead of 10 K records.
+* Instead, if you send those requests evenly across 5 seconds, you should expect no faults and overall faster throughput, as each batch would be at or under the provisioned 20 K.
+
+Spreading the requests across a period of time can be accomplished by introducing a rate limiting mechanism in your code.
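A minimal sketch of such a mechanism, assuming you already know an approximate RU cost per write (the SDK calls use the v3 .NET SDK and .NET 6's `Enumerable.Chunk`; the batch sizing follows the scenario above):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class RateLimitedIngestion
{
    // Sends writes in per-second batches sized so their estimated RU cost
    // stays under the RU/s budget reserved for this job.
    public static async Task IngestAsync<T>(
        Container container,
        IEnumerable<(T item, PartitionKey pk)> records,
        double ruBudgetPerSecond,
        double estimatedRuPerWrite)
    {
        int batchSize = Math.Max(1, (int)(ruBudgetPerSecond / estimatedRuPerWrite));

        foreach (var batch in records.Chunk(batchSize))
        {
            var stopwatch = Stopwatch.StartNew();

            await Task.WhenAll(batch.Select(record => container.CreateItemAsync(record.item, record.pk)));

            // Wait out the remainder of the one-second window before the next batch.
            TimeSpan remaining = TimeSpan.FromSeconds(1) - stopwatch.Elapsed;
            if (remaining > TimeSpan.Zero)
            {
                await Task.Delay(remaining);
            }
        }
    }
}
```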
+
+The RUs provisioned for a container will be evenly shared across the number of physical partitions. In the above example, if Azure Cosmos DB provisioned two physical partitions, each would have 10 K RU.
+
+For more information about Request Units, see [Request Units in Azure Cosmos DB](request-units.md).
+For more information about estimating the number of RUs consumed by your workload, see [Request Unit considerations](request-units.md#request-unit-considerations).
+For more information about partitioning Azure Cosmos DB, see [Partitioning and horizontal scaling in Azure Cosmos DB](partitioning-overview.md).
+
+## Methodology
+
+An approach to implementing rate limiting might look like this:
+
+1. Profile your application so that you have data about the writes and read requests that are used.
+1. Define all indexes.
+1. Populate the collection with a reasonable amount of data (could be sample data). If you expect your application to normally have millions of records, populate it with millions of records.
+1. Write your representative documents and record the RU cost.
+1. Run your representative queries and record the RU cost.
+1. Implement a function in your application to determine the cost of any given request based on your findings.
+1. Implement a rate limiting mechanism in your code to ensure that the sum of all operations sent to Azure Cosmos DB in a second do not exceed your provisioned throughput.
+1. Load test your application and verify that you don't exceed the provisioned throughput.
+1. Retest the RU costs periodically and update your cost function as needed.
+
+## Indexing
+
+Unlike many other SQL and NoSQL databases you may be familiar with, Azure Cosmos DB's default indexing policy for newly created containers indexes **every** property. Each indexed property increases the RU cost of writes.
+
+The default indexing policy can lower latency in read-heavy systems where query filter conditions are well distributed across all of the stored fields. For example, systems where Azure Cosmos DB is spending most of its time serving end-user crafted ad-hoc searches might benefit.
+
+You might want to exclude properties that are never searched against from being indexed. Removing properties from the index could improve overall system performance (cost and time) for systems that are write-heavy and have more constrained record retrieval patterns.
+
+Before measuring any costs, you should intentionally configure an appropriate index policy for your use-cases. Also, if you later change indexes, you will need to rerun all cost calculations.
+
+Where possible, testing a system under development with a load reflecting typical queries at normal and peak demand conditions will help reveal what indexing policy to use.
+
+For more information about indexes, see [Indexing policies in Azure Cosmos DB](index-policy.md).
+
+## Measuring cost
+
+There are some key concepts when measuring cost:
+
+* Consider all factors that affect RU usage, as described in [request unit considerations](request-units.md#request-unit-considerations).
+* All reads and writes in your database or container will share the same provisioned throughput.
+* RU consumption is incurred regardless of the Azure Cosmos DB APIs being used.
+* The partition strategy for a collection can have a significant impact on the cost of a system. For more information, see [Partitioning and horizontal scaling in Azure Cosmos DB](partitioning-overview.md#choose-partitionkey).
+* Use representative documents and representative queries.
+ * These are documents and queries that you think are close to what the operational system will come across.
+ * The best way to get these representative documents and queries is to instrument the usage of your application. It is always better to make a data-driven decision.
+* Measure costs periodically.
+ * Index changes and the size of indexes can affect the cost.
+ * It will be helpful to create some repeatable (maybe even automated) test of the representative documents and queries.
+ * Ensure your representative documents and queries are still representative.
+
+The method to determine the cost of a request is different for each API:
+
+* [Core API](find-request-unit-charge.md)
+* [Cassandra API](find-request-unit-charge-cassandra.md)
+* [Gremlin API](find-request-unit-charge-gremlin.md)
+* [Mongo DB API](find-request-unit-charge-mongodb.md)
+* [Table API](find-request-unit-charge-table.md)
+
+## Write requests
+
+The cost of write operations tends to be easy to predict. You will insert records and document the cost that Azure Cosmos reported.
+
+If you have documents of different size and/or documents that will use different indexes, it is important to measure all of them.
+You may find that your representative documents are close enough in cost that you can assign a single value across all writes.
+For example, if you found costs of 13.14 RU, 16.01 RU, and 12.63 RU, you might average those costs to 14 RU.
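A hedged sketch of capturing that reported charge with the v3 .NET SDK (the document type and partition key are placeholders):

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class WriteCostSample
{
    // Inserts a representative document and returns the RU charge the service reported.
    public static async Task<double> MeasureWriteCostAsync<T>(Container container, T document, PartitionKey pk)
    {
        ItemResponse<T> response = await container.CreateItemAsync(document, pk);
        return response.RequestCharge;
    }
}
```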
+
+## Read requests
+
+The cost of query operations can be much harder to predict for the following reasons:
+
+* If your system supports user-defined queries, you will need to map the incoming queries to the representative queries to help determine the cost. There are various forms this process might take:
+ * It may be possible to match the queries exactly. If there is no direct match, you may have to find the representative query that it is closest to.
+ * You may find that you can calculate a cost based on characteristics of the query. For example, you may find that each clause of the query has a certain cost,
+ or that an indexed property costs "x" while one not indexed costs "y", etc.
+* The number of results can vary and unless you have statistics, you cannot predict the RU impact from the return payload.
+
+It is likely you will not have a single cost of query operations, but rather some function that evaluates the query and calculates a cost.
+If you are using the Core API, you could then evaluate the actual cost of the operation and determine how accurate your estimation was
+(tuning of this estimation could even happen automatically within the code).
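For example, a hedged Core API (.NET SDK v3) sketch that compares your estimate against the charge the service actually reports (the estimation value is whatever your cost function produced in the earlier steps):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class QueryCostSample
{
    // Runs a query, sums the actual RU charge across result pages, and reports
    // how far off the caller's estimate was so the estimator can be tuned.
    public static async Task<double> MeasureQueryCostAsync<T>(
        Container container, QueryDefinition query, double estimatedRu)
    {
        double actualRu = 0;
        FeedIterator<T> iterator = container.GetItemQueryIterator<T>(query);
        while (iterator.HasMoreResults)
        {
            FeedResponse<T> page = await iterator.ReadNextAsync();
            actualRu += page.RequestCharge;
        }

        Console.WriteLine($"Estimated: {estimatedRu} RU, actual: {actualRu} RU");
        return actualRu;
    }
}
```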
+
+## Transient fault handling
+
+Your application will still need transient fault handling even if you implement a rate limiting mechanism, for the following reasons:
+
+* The actual cost of a request may be different than your projected cost.
+* Transient faults can occur for reasons other than TooManyRequests.
+
+However, properly implementing a rate limiting mechanism in your application will substantially reduce the number of transient faults.
+
+## Alternate and related techniques
+
+While this article describes client-side coordination and batching of workloads, there are other techniques that can be employed to manage overall system throughput.
+
+### Autoscaling
+
+Autoscale provisioned throughput in Azure Cosmos DB allows you to scale the throughput (RU/s) of your database or container automatically and instantly. The throughput is scaled based on the usage, without impacting the availability, latency, throughput, or performance of the workload.
+
+Autoscale provisioned throughput is well suited for mission-critical workloads that have variable or unpredictable traffic patterns, and require SLAs on high performance and scale.
+
+For more information on autoscaling, see [Create Azure Cosmos containers and databases with autoscale throughput](provision-throughput-autoscale.md).
+
+### Queue-Based Load Leveling pattern
+
+You could employ a queue that acts as a buffer between a client and Azure Cosmos DB in order to smooth intermittent heavy loads that can cause the service to fail or the task to time out.
+
+This pattern is useful to any application that uses services that are subject to overloading. However, this pattern isn't useful if the application expects a response from the service with minimal latency.
+
+This pattern is often well suited to ingest operations.
+
+For more information about this pattern, see [Queue-Based Load Leveling pattern](/azure/architecture/patterns/queue-based-load-leveling).
+
+### Cache-Aside pattern
+
+You might consider loading data on demand into a cache instead of querying Azure Cosmos DB every time. Using a cache can improve performance and also helps to maintain consistency between data held in the cache and data in the underlying data store.
+
+For more information, see: [Cache-Aside pattern](/azure/architecture/patterns/cache-aside).
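A minimal cache-aside sketch, assuming an in-memory `ConcurrentDictionary` stands in for a real cache with expiration (the item type and key scheme are placeholders):

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class CacheAsideSample
{
    // Checks a local cache first and only falls back to Cosmos DB on a miss.
    // A ConcurrentDictionary keeps the sketch self-contained; a real system
    // would use a cache with expiration (for example, a distributed cache).
    private static readonly ConcurrentDictionary<string, object> Cache = new();

    public static async Task<T> GetAsync<T>(Container container, string id, PartitionKey pk)
    {
        if (Cache.TryGetValue(id, out object cached))
        {
            return (T)cached;
        }

        ItemResponse<T> response = await container.ReadItemAsync<T>(id, pk);
        Cache[id] = response.Resource;
        return response.Resource;
    }
}
```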
+
+### Materialized View pattern
+
+You might pre-populate views into other collections after storing the data in Azure Cosmos DB when the data isn't ideally formatted for required query operations. This pattern can help support efficient querying and data extraction, and improve application performance.
+
+For more information, see [Materialized View pattern](/azure/architecture/patterns/materialized-view).
+
+## Next steps
+
+* Learn how to [troubleshoot TooManyRequests errors](troubleshoot-request-rate-too-large.md) in Azure Cosmos DB.
+* Learn how to [troubleshoot Timeout errors](troubleshoot-request-timeout.md) in Azure Cosmos DB.
+* Learn how to [troubleshoot ServiceUnavailable errors](troubleshoot-service-unavailable.md) in Azure Cosmos DB.
+* Learn more about [Request Units](request-units.md) in Azure Cosmos DB.
+* Learn more about [Partitioning and horizontal scaling](partitioning-overview.md) in Azure Cosmos DB.
+* Learn about [Indexing policies](index-policy.md) in Azure Cosmos DB.
+* Learn about [Autoscaling](provision-throughput-autoscale.md) in Azure Cosmos DB.
cosmos-db Resource Locks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/resource-locks.md
Previously updated : 10/06/2020 Last updated : 05/13/2021 # Prevent Azure Cosmos DB resources from being deleted or changed As an administrator, you may need to lock an Azure Cosmos account, database or container to prevent other users in your organization from accidentally deleting or modifying critical resources. You can set the lock level to CanNotDelete or ReadOnly.
Resource Manager locks apply only to operations that happen in the management pl
### PowerShell
-```powershell
+```powershell-interactive
$resourceGroupName = "myResourceGroup" $accountName = "my-cosmos-account" $lockName = "$accountName-Lock"
New-AzResourceLock `
### Azure CLI
-```bash
+```azurecli-interactive
resourceGroupName='myResourceGroup' accountName='my-cosmos-account' $lockName="$accountName-Lock"
cosmos-db Table Api Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-api-faq.md
- Title: Frequently asked questions about the Table API in Azure Cosmos DB
-description: Get answers to frequently asked questions about the Table API in Azure Cosmos DB
---- Previously updated : 08/12/2020---
-# Frequently asked questions about the Table API in Azure Cosmos DB
-
-The Azure Cosmos DB Table API is available in the [Azure portal](https://portal.azure.com). First, you must sign up for an Azure subscription. After you've signed up, you can add an Azure Cosmos DB Table API account to your Azure subscription, and then add tables to your account. You can find the supported languages and associated quick-starts in the [Introduction to Azure Cosmos DB Table API](table-introduction.md).
-
-## <a id="table-api-vs-table-storage"></a>Table API in Azure Cosmos DB Vs Azure Table storage
-
-### Where is Table API not identical with Azure Table storage behavior?
-
-There are some behavior differences that users coming from Azure Table storage who want to create tables with the Azure Cosmos DB Table API should be aware of:
-
-* Azure Cosmos DB Table API uses a reserved capacity model in order to ensure guaranteed performance, but this means that you pay for the capacity as soon as the table is created, even if the capacity isn't being used. With Azure Table storage, you only pay for the capacity that's used. This helps to explain why Table API can offer a 10-ms read and 15-ms write SLA at the 99th percentile, while Azure Table storage offers a 10-second SLA. But as a consequence, with Table API, even empty tables without any requests cost money, in order to ensure the capacity is available to handle any requests to them at the SLA offered by Azure Cosmos DB.
-
-* Query results returned by the Table API aren't sorted in partition key/row key order as they are in Azure Table storage.
-
-* Row keys can only be up to 255 bytes.
-
-* Batches can contain only up to 2 MB.
-
-* CORS isn't currently supported.
-
-* Table names in Azure Table storage aren't case-sensitive, but they are in Azure Cosmos DB Table API.
-
-* Some of Azure Cosmos DB's internal formats for encoding information, such as binary fields, are currently not as efficient as one might like, which can cause unexpected limitations on data size. For example, you currently can't use the full 1 MB of a table entity to store binary data, because the encoding increases the data's size.
-
-* The entity property name 'ID' isn't currently supported.
-
-* TableQuery TakeCount isn't limited to 1000.
-
-* In terms of the REST API there are a number of endpoints/query options that aren't supported by Azure Cosmos DB Table API:
-
- | Rest Method(s) | Rest Endpoint/Query Option | Doc URLs | Explanation |
- | | - | - | -- |
- | GET, PUT | `/?restype=service&comp=properties`| [Set Table Service Properties](/rest/api/storageservices/set-table-service-properties) and [Get Table Service Properties](/rest/api/storageservices/get-table-service-properties) | This endpoint is used to set CORS rules, storage analytics configuration, and logging settings. CORS is currently not supported, and analytics and logging are handled differently in Azure Cosmos DB than in Azure Table storage |
- | OPTIONS | `/<table-resource-name>` | [Pre-flight CORS table request](/rest/api/storageservices/preflight-table-request) | This is part of CORS which Azure Cosmos DB doesn't currently support. |
- | GET | `/?restype=service&comp=stats` | [Get Table Service Stats](/rest/api/storageservices/get-table-service-stats) | Provides information about how quickly data replicates between the primary and secondary regions. This isn't needed in Cosmos DB, as replication is part of writes. |
- | GET, PUT | `/mytable?comp=acl` | [Get Table ACL](/rest/api/storageservices/get-table-acl) and [Set Table ACL](/rest/api/storageservices/set-table-acl) | This gets and sets the stored access policies used to manage Shared Access Signatures (SAS). Although SAS is supported, they are set and managed differently. |
-
-* Azure Cosmos DB Table API only supports the JSON format, not ATOM.
-
-* While Azure Cosmos DB supports Shared Access Signatures (SAS) there are certain policies it doesn't support, specifically those related to management operations such as the right to create new tables.
-
-* For the .NET SDK in particular, there are some classes and methods that Azure Cosmos DB doesn't currently support.
-
- | Class | Unsupported Method |
- |-|-- |
- | CloudTableClient | \*ServiceProperties* |
- | | \*ServiceStats* |
- | CloudTable | SetPermissions* |
- | | GetPermissions* |
- | TableServiceContext | * (this class is deprecated) |
- | TableServiceEntity | " " |
- | TableServiceExtensions | " " |
- | TableServiceQuery | " " |
-
-## Other frequently asked questions
-
-### Do I need a new SDK to use the Table API?
-
-No, existing storage SDKs should still work. However, it's recommended that one always gets the latest SDKs for the best support and in many cases superior performance. See the list of available languages in the [Introduction to Azure Cosmos DB Table API](table-introduction.md).
-
-### What is the connection string that I need to use to connect to the Table API?
-
-The connection string is:
-
-```
-DefaultEndpointsProtocol=https;AccountName=<AccountNamefromCosmosDB>;AccountKey=<FromKeysPaneofCosmosDB>;TableEndpoint=https://<AccountName>.table.cosmosdb.azure.com
-```
-
-You can get the connection string from the Connection String page in the Azure portal.
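As an illustration only (the account name and key below are placeholders, not values from this article), the connection string can be assembled from the values shown on the **Keys** pane:

```powershell
# Placeholder values - copy the real ones from the Azure portal Keys pane.
$accountName = "mytableaccount"
$accountKey  = "<account-key>"

# Build the Table API connection string in the format shown above.
$connectionString = "DefaultEndpointsProtocol=https;AccountName=$accountName;" +
    "AccountKey=$accountKey;TableEndpoint=https://$accountName.table.cosmosdb.azure.com"
```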
-
-### How do I override the config settings for the request options in the .NET SDK for the Table API?
-
-Some settings are handled on the CreateCloudTableClient method and others via the app.config file, in the appSettings section, in the client application. For information about config settings, see [Azure Cosmos DB capabilities](tutorial-develop-table-dotnet.md).
-
-### Are there any changes for customers who are using the existing Azure Table storage SDKs?
-
-None. There are no changes for existing or new customers who are using the existing Azure Table storage SDKs.
-
-### How do I view table data that's stored in Azure Cosmos DB for use with the Table API?
-
-You can use the Azure portal to browse the data. You can also use the Table API code or the tools mentioned in the next answer.
-
-### Which tools work with the Table API?
-
-You can use the [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
-
-Tools with the flexibility to take a connection string in the format specified previously can support the new Table API. A list of table tools is provided on the [Azure Storage Client Tools](../storage/common/storage-explorers.md) page.
-
-### Is the concurrency on operations controlled?
-
-Yes, optimistic concurrency is provided via the use of the ETag mechanism.
-
-### Is the OData query model supported for entities?
-
-Yes, the Table API supports OData query and LINQ query.
-
-### Can I connect to Azure Table Storage and Azure Cosmos DB Table API side by side in the same application?
-
-Yes, you can connect by creating two separate instances of the CloudTableClient, each pointing to its own URI via the connection string.
-
-### How do I migrate an existing Azure Table storage application to this offering?
-
-[AzCopy](../storage/common/storage-use-azcopy-v10.md) and the [Azure Cosmos DB Data Migration Tool](import-data.md) are both supported.
-
-### How is expansion of the storage size done for this service if, for example, I start with *n* GB of data and my data will grow to 1 TB over time?
-
-Azure Cosmos DB is designed to provide unlimited storage via the use of horizontal scaling. The service can monitor and effectively increase your storage.
-
-### How do I monitor the Table API offering?
-
-You can use the Table API **Metrics** pane to monitor requests and storage usage.
-
-### How do I calculate the throughput I require?
-
-You can use the capacity estimator to calculate the TableThroughput that's required for the operations. For more information, see [Estimate Request Units and Data Storage](https://www.documentdb.com/capacityplanner). In general, you represent your entity as JSON and provide the number of operations you expect to perform.
-
-### Can I use the Table API SDK locally with the emulator?
-
-Not at this time.
-
-### Can my existing application work with the Table API?
-
-Yes, the same API is supported.
-
-### Do I need to migrate my existing Azure Table storage applications to the SDK if I don't want to use the Table API features?
-
-No, you can create and use existing Azure Table storage assets without interruption of any kind. However, if you don't use the Table API, you can't benefit from the automatic index, the additional consistency option, or global distribution.
-
-### How do I add replication of the data in the Table API across more than one region of Azure?
-
-You can use the Azure Cosmos DB portal's [global replication settings](tutorial-global-distribution-sql-api.md#portal) to add regions that are suitable for your application. To develop a globally distributed application, you should also deploy your application with the PreferredLocation information set to the local region, to provide low read latency.
-
-### How do I change the primary write region for the account in the Table API?
-
-You can use the Azure Cosmos DB global replication portal pane to add a region and then fail over to the required region. For instructions, see [Developing with multi-region Azure Cosmos DB accounts](high-availability.md).
-
-### How do I configure my preferred read regions for low latency when I distribute my data?
-
-To help read from the local location, use the PreferredLocation key in the app.config file. For existing applications, the Table API throws an error if LocationMode is set. Remove that code, because the Table API picks up this information from the app.config file.
-
-### How should I think about consistency levels in the Table API?
-
-Azure Cosmos DB provides well-reasoned trade-offs between consistency, availability, and latency. Azure Cosmos DB offers five consistency levels to Table API developers, so you can choose the right consistency model at the table level and make individual requests while querying the data. When a client connects, it can specify a consistency level. You can change the level via the consistencyLevel argument of CreateCloudTableClient.
-
-The Table API provides low-latency reads with "Read your own writes," with Bounded-staleness consistency as the default. For more information, see [Consistency levels](consistency-levels.md).
-
-By default, Azure Table storage provides Strong consistency within a region and Eventual consistency in the secondary locations.
-
-### Does Azure Cosmos DB Table API offer more consistency levels than Azure Table storage?
-
-Yes, for information about how to benefit from the distributed nature of Azure Cosmos DB, see [Consistency levels](consistency-levels.md). Because guarantees are provided for the consistency levels, you can use them with confidence.
-
-### When global distribution is enabled, how long does it take to replicate the data?
-
-Azure Cosmos DB commits the data durably in the local region and pushes the data to other regions immediately in a matter of milliseconds. This replication is dependent only on the round-trip time (RTT) of the datacenter. To learn more about the global-distribution capability of Azure Cosmos DB, see [Azure Cosmos DB: A globally distributed database service on Azure](distribute-data-globally.md).
-
-### Can the read request consistency level be changed?
-
-With Azure Cosmos DB, you can set the consistency level at the container level (on the table). By using the .NET SDK, you can change the level by providing the value for TableConsistencyLevel key in the app.config file. The possible values are: Strong, Bounded Staleness, Session, Consistent Prefix, and Eventual. For more information, see [Tunable data consistency levels in Azure Cosmos DB](consistency-levels.md). The key idea is that you can't set the request consistency level at more than the setting for the table. For example, you can't set the consistency level for the table at Eventual and the request consistency level at Strong.
-
-### How does the Table API handle failover if a region goes down?
-
-The Table API leverages the globally distributed platform of Azure Cosmos DB. To ensure that your application can tolerate datacenter downtime, enable at least one more region for the account in the Azure Cosmos DB portal. You can also set the failover priority of the regions by using the portal. For more information, see [Developing with multi-region Azure Cosmos DB accounts](high-availability.md).
-
-You can add as many regions as you want for the account and control where it can fail over to by providing a failover priority. To use the database from those regions, you also need to deploy your application there. When you do so, your customers won't experience downtime. The [latest .NET client SDK](table-sdk-dotnet.md) is auto-homing; that is, it can detect the region that's down and automatically fail over to the new region. The other SDKs aren't.
-
-### Is the Table API enabled for backups?
-
-Yes, the Table API leverages the platform of Azure Cosmos DB for backups. Backups are made automatically. For more information, see [Online backup and restore with Azure Cosmos DB](online-backup-and-restore.md).
-
-### Does the Table API index all attributes of an entity by default?
-
-Yes, all attributes of an entity are indexed by default. For more information, see [Azure Cosmos DB: Indexing policies](index-policy.md).
-
-### Does this mean I don't have to create more than one index to satisfy the queries?
-
-Yes, Azure Cosmos DB Table API provides automatic indexing of all attributes without any schema definition. This automation frees developers to focus on the application rather than on index creation and management. For more information, see [Azure Cosmos DB: Indexing policies](index-policy.md).
-
-### Can I change the indexing policy?
-
-Yes, you can change the indexing policy by providing the index definition. You need to properly encode and escape the settings.
-
-For the non-.NET SDKs, the indexing policy can only be set in the portal. In **Data Explorer**, navigate to the specific table you want to change, go to **Scale & Settings** > **Indexing Policy**, make the desired change, and then select **Save**.
-
-From the .NET SDK it can be submitted in the app.config file:
-
-```JSON
-{
- "indexingMode": "consistent",
- "automatic": true,
- "includedPaths": [
- {
- "path": "/somepath",
- "indexes": [
- {
- "kind": "Range",
- "dataType": "Number",
- "precision": -1
- },
- {
- "kind": "Range",
- "dataType": "String",
- "precision": -1
- }
- ]
- }
- ],
- "excludedPaths":
-[
- {
- "path": "/anotherpath"
- }
-]
-}
-```
-
-### Azure Cosmos DB as a platform seems to have lot of capabilities, such as sorting, aggregates, hierarchy, and other functionality. Will you be adding these capabilities to the Table API?
-
-The Table API provides the same query functionality as Azure Table storage. Azure Cosmos DB also supports sorting, aggregates, geospatial query, hierarchy, and a wide range of built-in functions. For more information, see [SQL queries](./sql-query-getting-started.md).
-
-### When should I change TableThroughput for the Table API?
-
-You should change TableThroughput when either of the following conditions applies:
-
-* You're performing an extract, transform, and load (ETL) of data, or you want to upload a lot of data in short amount of time.
-* You need more throughput from the container or from a set of containers at the back end. For example, you see that the used throughput is more than the provisioned throughput, and you're getting throttled. For more information, see [Set throughput for Azure Cosmos containers](set-throughput.md).
-
-### Can I scale up or scale down the throughput of my Table API table?
-
-Yes, you can use the Azure Cosmos DB portal's scale pane to scale the throughput. For more information, see [Set throughput](set-throughput.md).
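If you prefer scripting to the portal, here is a minimal sketch that assumes the Az.CosmosDB PowerShell module and its `Update-AzCosmosDBTableThroughput` cmdlet; the resource names are placeholders:

```powershell
# Placeholder names - replace with your resource group, account, and table.
Update-AzCosmosDBTableThroughput `
    -ResourceGroupName "myResourceGroup" `
    -AccountName "my-table-account" `
    -Name "myTable" `
    -Throughput 1200   # new provisioned throughput in RU/s
```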
-
-### Is a default TableThroughput set for newly provisioned tables?
-
-Yes, if you don't override the TableThroughput via app.config and don't use a pre-created container in Azure Cosmos DB, the service creates a table with a default throughput of 400 RU/s.
-
-### Is there any change of pricing for existing customers of the Azure Table storage service?
-
-None. There's no change in price for existing Azure Table storage customers.
-
-### How is the price calculated for the Table API?
-
-The price depends on the allocated TableThroughput.
-
-### How do I handle any rate limiting on the tables in Table API offering?
-
-If the request rate is more than the capacity of the provisioned throughput for the underlying container or a set of containers, you get an error, and the SDK retries the call by applying the retry policy.
-
-### Why do I need to choose a throughput apart from PartitionKey and RowKey to take advantage of the Table API offering of Azure Cosmos DB?
-
-Azure Cosmos DB sets a default throughput for your container if you don't provide one in the app.config file or via the portal.
-
-Azure Cosmos DB provides guarantees for performance and latency, with upper bounds on operation. This guarantee is possible when the engine can enforce governance on the tenant's operations. Setting TableThroughput ensures that you get the guaranteed throughput and latency, because the platform reserves this capacity and guarantees operational success.
-
-By using the throughput specification, you can elastically change it to benefit from the seasonality of your application, meet the throughput needs, and save costs.
-
-### Azure Table storage has been inexpensive for me, because I pay only to store the data, and I rarely query. The Azure Cosmos DB Table API offering seems to be charging me even though I haven't performed a single transaction or stored anything. Can you explain?
-
-Azure Cosmos DB is designed to be a globally distributed, SLA-based system with guarantees for availability, latency, and throughput. When you reserve throughput in Azure Cosmos DB, it's guaranteed, unlike the throughput of other systems. Azure Cosmos DB provides additional capabilities that customers have requested, such as secondary indexes and global distribution.
-
-### I never get a "quota full" notification (indicating that a partition is full) when I ingest data into Azure Table storage. With the Table API, I do get this message. Is this offering limiting me and forcing me to change my existing application?
-
-Azure Cosmos DB is an SLA-based system that provides unlimited scale, with guarantees for latency, throughput, availability, and consistency. To ensure guaranteed premium performance, make sure that your data size and index are manageable and scalable. The 20-GB limit on the entities or items per partition key ensures great lookup and query performance. To ensure that your application scales well, even for Azure Storage, we recommend that you *not* create a hot partition by storing all information in one partition and querying it.
-
-### So PartitionKey and RowKey are still required with the Table API?
-
-Yes. Because the surface area of the Table API is similar to that of the Azure Table storage SDK, the partition key provides an efficient way to distribute the data. The row key is unique within that partition, and it must be present and can't be null, as in the standard SDK. The maximum length of RowKey is 255 bytes, and the maximum length of PartitionKey is 1 KB.
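As a hedged illustration (assuming the community AzTable PowerShell module and an existing `$cloudTable` object for the target table; the values are placeholders), every entity supplies both keys when it's written:

```powershell
# Assumes the AzTable module and a $cloudTable object for the target table. Values are placeholders.
# PartitionKey distributes the data; RowKey must be non-null and unique within the partition.
Add-AzTableRow `
    -Table $cloudTable `
    -PartitionKey "Sales" `
    -RowKey "00042" `
    -Property @{ "LastName" = "Smith"; "Region" = "West" }
```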
-
-### What are the error messages for the Table API?
-
-Azure Table storage and Azure Cosmos DB Table API use the same SDKs, so most of the errors are the same.
-
-### Why do I get throttled when I try to create a lot of tables one after another in the Table API?
-
-Azure Cosmos DB is an SLA-based system that provides latency, throughput, availability, and consistency guarantees. Because it's a provisioned system, it reserves resources to guarantee these requirements. The rapid rate of creation of tables is detected and throttled. We recommend that you look at the rate of creation of tables and lower it to less than 5 per minute. Remember that the Table API is a provisioned system. The moment you provision it, you'll begin to pay for it.
-
-### How do I provide feedback about the SDK or bugs?
-
-You can share your feedback in any of the following ways:
-
-* [User voice](https://feedback.azure.com/forums/263030-azure-cosmos-db)
-* [Microsoft Q&A question page](/answers/topics/azure-cosmos-db.html)
-* [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-cosmosdb). Stack Overflow is best for programming questions. Make sure your question is [on-topic](https://stackoverflow.com/help/on-topic) and [provide as many details as possible, making the question clear and answerable](https://stackoverflow.com/help/how-to-ask).
-
-## Next steps
-
-* [Build a Table API app with .NET SDK and Azure Cosmos DB](create-table-dotnet.md)
-* [Build a Java app to manage Azure Cosmos DB Table API data](create-table-java.md)
cosmos-db Table Storage Design Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-storage-design-guide.md
Here are some general guidelines for designing Table storage queries. The filter
`$filter=PartitionKey eq 'Sales' and LastName eq 'Smith'`. * A *table scan* doesn't include the `PartitionKey`, and is inefficient because it searches all of the partitions that make up your table for any matching entities. It performs a table scan regardless of whether or not your filter uses the `RowKey`. For example: `$filter=LastName eq 'Jones'`.
-* Azure Table storage queries that return multiple entities sort them in `PartitionKey` and `RowKey` order. To avoid resorting the entities in the client, choose a `RowKey` that defines the most common sort order. Query results returned by the Azure Table API in Azure Cosmos DB aren't sorted by partition key or row key. For a detailed list of feature differences, see [differences between Table API in Azure Cosmos DB and Azure Table storage](table-api-faq.md#table-api-vs-table-storage).
+* Azure Table storage queries that return multiple entities sort them in `PartitionKey` and `RowKey` order. To avoid resorting the entities in the client, choose a `RowKey` that defines the most common sort order. Query results returned by the Azure Table API in Azure Cosmos DB aren't sorted by partition key or row key. For a detailed list of feature differences, see [differences between Table API in Azure Cosmos DB and Azure Table storage](/table-api-faq.yml#table-api-in-azure-cosmos-db-vs-azure-table-storage).
Using an "**or**" to specify a filter based on `RowKey` values results in a partition scan, and isn't treated as a range query. Therefore, avoid queries that use filters such as: `$filter=PartitionKey eq 'Sales' and (RowKey eq '121' or RowKey eq '322')`.
Many designs must meet requirements to enable lookup of entities based on multip
Table storage returns query results sorted in ascending order, based on `PartitionKey` and then by `RowKey`. > [!NOTE]
-> Query results returned by the Azure Table API in Azure Cosmos DB aren't sorted by partition key or row key. For a detailed list of feature differences, see [differences between Table API in Azure Cosmos DB and Azure Table storage](table-api-faq.md#table-api-vs-table-storage).
+> Query results returned by the Azure Table API in Azure Cosmos DB aren't sorted by partition key or row key. For a detailed list of feature differences, see [differences between Table API in Azure Cosmos DB and Azure Table storage](/table-api-faq.yml#table-api-in-azure-cosmos-db-vs-azure-table-storage).
Keys in Table storage are string values. To ensure that numeric values sort correctly, you should convert them to a fixed length, and pad them with zeroes. For example, if the employee ID value you use as the `RowKey` is an integer value, you should convert employee ID **123** to **00000123**.
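For example, a quick way to produce the padded key (a sketch, not from the original guide):

```powershell
# Pad an integer employee ID to a fixed-length 8-character RowKey so that
# string ordering matches numeric ordering.
$employeeId = 123
$rowKey = $employeeId.ToString("D8")   # "00000123"
```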
The following patterns and guidance might also be relevant when implementing thi
Retrieve the *n* entities most recently added to a partition by using a `RowKey` value that sorts in reverse date and time order. > [!NOTE]
-> Query results returned by the Azure Table API in Azure Cosmos DB aren't sorted by partition key or row key. Thus, while this pattern is suitable for Table storage, it isn't suitable for Azure Cosmos DB. For a detailed list of feature differences, see [differences between Table API in Azure Cosmos DB and Azure Table Storage](table-api-faq.md#table-api-vs-table-storage).
+> Query results returned by the Azure Table API in Azure Cosmos DB aren't sorted by partition key or row key. Thus, while this pattern is suitable for Table storage, it isn't suitable for Azure Cosmos DB. For a detailed list of feature differences, see [differences between Table API in Azure Cosmos DB and Azure Table Storage](/table-api-faq.yml#table-api-in-azure-cosmos-db-vs-azure-table-storage).
#### Context and problem A common requirement is to be able to retrieve the most recently created entities, for example the ten most recent expense claims submitted by an employee. Table queries support a `$top` query operation to return the first *n* entities from a set. There's no equivalent query operation to return the last *n* entities in a set.
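A minimal sketch of the usual workaround (the implementation details here are assumptions, not text from the guide) is to build the `RowKey` from inverted ticks so that an ascending key sort returns the newest entities first:

```powershell
# Newer timestamps yield smaller inverted-tick values, so they sort to the top of the partition.
# D19 keeps the key at a fixed string length for correct lexical ordering.
$invertedTicks = [DateTime]::MaxValue.Ticks - [DateTime]::UtcNow.Ticks
$rowKey = $invertedTicks.ToString("D19")
```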
cosmos-db Table Storage How To Use Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-storage-how-to-use-python.md
table_service.delete_table('tasktable')
## Next steps
-* [FAQ - Develop with the Table API](./faq.md)
+* [FAQ - Develop with the Table API](./faq.yml)
* [Azure Cosmos DB SDK for Python API reference](/python/api/overview/azure/cosmosdb) * [Python Developer Center](https://azure.microsoft.com/develop/python/) * [Microsoft Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md): A free, cross-platform application for working visually with Azure Storage data on Windows, macOS, and Linux.
cosmos-db Table Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-support.md
At this time, the [Azure Cosmos DB Table API](table-introduction.md) has four SD
* [Node.js SDK](table-sdk-nodejs.md): This Azure Storage SDK has the ability to connect to Azure Cosmos DB accounts using the Table API.
-Additional information about working with the Table API is available in the [FAQ: Develop with the Table API](table-api-faq.md) article.
+Additional information about working with the Table API is available in the [FAQ: Develop with the Table API](table-api-faq.yml) article.
## Developing with Azure Table storage
data-factory Continuous Integration Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-deployment.md
Previously updated : 04/27/2021 Last updated : 05/12/2021 # Continuous integration and delivery in Azure Data Factory
The Azure Key Vault task might fail with an Access Denied error if the correct p
### Updating active triggers
+Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+
+>[!WARNING]
+>If you do not use the latest versions of PowerShell and the Data Factory module, you may run into deserialization errors while running the commands.
+>
+ Deployment can fail if you try to update active triggers. To update active triggers, you need to manually stop them and then restart them after the deployment. You can do this by using an Azure PowerShell task:
-1. On the **Tasks** tab of the release, add an **Azure PowerShell** task. Choose task version 4.*.
+1. On the **Tasks** tab of the release, add an **Azure PowerShell** task. Choose the latest Azure PowerShell task version.
1. Select the subscription your factory is in.
If you're using Git integration with your data factory and have a CI/CD pipeline
## <a name="script"></a> Sample pre- and post-deployment script
-The following sample script can be used to stop triggers before deployment and restart them afterward. The script also includes code to delete resources that have been removed. Save the script in an Azure DevOps git repository and reference it via an Azure PowerShell task using version 4.*.
+Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+
+>[!WARNING]
+>If you do not use the latest versions of PowerShell and the Data Factory module, you may run into deserialization errors while running the commands.
+>
+
+The following sample script can be used to stop triggers before deployment and restart them afterward. The script also includes code to delete resources that have been removed. Save the script in an Azure DevOps git repository and reference it via an Azure PowerShell task using the latest Azure PowerShell version.
+ When running a pre-deployment script, you will need to specify a variation of the following parameters in the **Script Arguments** field.
else {
Start-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name -Force } }
-```
+```
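As a hedged, minimal sketch of the trigger-handling portion only (parameter values are placeholders; the full script in the documentation also removes deleted resources), the stop/start pattern looks like this:

```powershell
# Placeholder values - pass your own via the Script Arguments field.
$ResourceGroupName = "myResourceGroup"
$DataFactoryName   = "myDataFactory"

# Before deployment: stop every trigger that is currently started.
$startedTriggers = Get-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName |
    Where-Object { $_.RuntimeState -eq "Started" }
$startedTriggers | ForEach-Object {
    Stop-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name -Force
}

# After deployment: restart the same triggers.
$startedTriggers | ForEach-Object {
    Start-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name -Force
}
```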
data-factory Control Flow System Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-system-variables.md
These system variables can be referenced anywhere in the pipeline JSON.
| @pipeline().TriggerId|ID of the trigger that invoked the pipeline | | @pipeline().TriggerName|Name of the trigger that invoked the pipeline | | @pipeline().TriggerTime|Time of the trigger run that invoked the pipeline. This is the time at which the trigger **actually** fired to invoke the pipeline run, and it may differ slightly from the trigger's scheduled time. |
+| @pipeline().GroupId | ID of the group to which the pipeline run belongs. |
+| @pipeline()__?__.TriggeredByPipelineName | Name of the pipeline that triggered the pipeline run. Applicable when the pipeline run is triggered by an ExecutePipeline activity. Evaluates to _Null_ when used in other circumstances. Note the question mark after @pipeline() |
+| @pipeline()__?__.TriggeredByPipelineRunId | Run ID of the pipeline that triggered the pipeline run. Applicable when the pipeline run is triggered by an ExecutePipeline activity. Evaluates to _Null_ when used in other circumstances. Note the question mark after @pipeline() |
>[!NOTE] >Trigger-related date/time system variables (in both pipeline and trigger scopes) return UTC dates in ISO 8601 format, for example, `2017-06-01T22:20:00.4061448Z`.
data-factory Self Hosted Integration Runtime Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/self-hosted-integration-runtime-troubleshoot-guide.md
description: Learn how to troubleshoot self-hosted integration runtime issues in
Previously updated : 01/25/2021 Last updated : 05/12/2021
The only way to avoid this issue is to make sure that the two nodes are in crede
certutil -importpfx FILENAME.pfx AT_KEYEXCHANGE ```
+### Self-hosted integration runtime nodes are out of sync
+
+#### Symptoms
+
+Self-hosted integration runtime nodes try to sync the credentials across nodes but get stuck in the process and encounter the error message below after a while:
+
+"The Integration Runtime (Self-hosted) node is trying to sync the credentials across nodes. It may take several minutes."
+
+>[!Note]
+>If this error appears for over 10 minutes, please check the connectivity with the dispatcher node.
+
+#### Cause
+
+The reason is that the worker nodes do not have access to the private keys. This can be confirmed from the self-hosted integration runtime logs below:
+
+`[14]0460.3404::05/07/21-00:23:32.2107988 [System] A fatal error occurred when attempting to access the TLS server credential private key. The error code returned from the cryptographic module is 0x8009030D. The internal error state is 10001.`
+
+You have no issue with the sync process when you use service principal authentication in the ADF linked service. However, when you switch the authentication type to account key, the syncing issue starts. This is because the self-hosted integration runtime service runs under a service account (NT SERVICE\DIAHostService), and that account needs to be added to the private key permissions.
+
+
+#### Resolution
+
+To solve this issue, you need to add the self-hosted integration runtime service account (NT SERVICE\DIAHostService) to the private key permissions. You can apply the following steps:
+
+1. Open Microsoft Management Console (MMC) by using the **Run** command.
+
+ :::image type="content" source="./media/self-hosted-integration-runtime-troubleshoot-guide/management-console-run-command.png" alt-text="Screenshot that shows the MMC Run Command":::
+
+1. In the MMC pane, apply the following steps:
+
+ :::image type="content" source="./media/self-hosted-integration-runtime-troubleshoot-guide/add-service-account-to-private-key-1.png" alt-text="Screenshot that shows the second step to add self-hosted IR service account to the private key permissions." lightbox="./media/self-hosted-integration-runtime-troubleshoot-guide/add-service-account-to-private-key-1-expanded.png":::
+
+ 1. Select **File**.
+ 1. Choose **Add/Remove Snap-in** in the drop-down menu.
+ 1. Select **Certificates** in the "Available snap-ins" pane.
+ 1. Select **Add**.
+ 1. In the pop-up "Certificates snap-in" pane, choose **Computer account**.
+ 1. Select **Next**.
+ 1. In the "Select Computer" pane, choose **Local computer: the computer this console is running on**.
+ 1. Select **Finish**.
+ 1. Select **OK** in the "Add or Remove Snap-ins" pane.
+
+1. In the MMC pane, continue with the following steps:
+
+ :::image type="content" source="./media/self-hosted-integration-runtime-troubleshoot-guide/add-service-account-to-private-key-2.png" alt-text="Screenshot that shows the third step to add self-hosted IR service account to the private key permissions." lightbox="./media/self-hosted-integration-runtime-troubleshoot-guide/add-service-account-to-private-key-2-expanded.png":::
+
+ 1. From the left folder list, select **Console Root -> Certificates (Local Computer) -> Personal -> Certificates**.
+ 1. Right-click the **Microsoft Intune Beta MDM**.
+ 1. Select **All Tasks** in the drop-down list.
+ 1. Select **Manage Private Keys**.
+ 1. Select **Add** under "Group or user names".
+ 1. Select **NT SERVICE\DIAHostService** to grant it Full control access to this certificate, and then apply and save.
+ 1. Select **Check Names** and then select **OK**.
+ 1. In the "Permissions" pane, select **Apply** and then select **OK**.
+ ## Self-hosted IR setup ### Integration runtime registration error
databox-online Azure Stack Edge Gpu 2105 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-2105-release-notes.md
+
+ Title: Azure Stack Edge 2105 release notes
+description: Describes critical open issues and resolutions for the Azure Stack Edge running 2105 release.
++
+
+++ Last updated : 05/13/2021+++
+# Azure Stack Edge 2105 release notes
++
+The following release notes identify the critical open issues and the resolved issues for the 2105 release for your Azure Stack Edge devices. These release notes are applicable for Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices. Features and issues that correspond to a specific model are called out wherever applicable.
+
+The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they are added. Before you deploy your device, carefully review the information contained in the release notes.
+
+This article applies to the **Azure Stack Edge 2105** release, which maps to software version number **2.2.1592.3244**. This software can be applied to your device if you are running at least Azure Stack Edge 2010 (2.1.1377.2170) software.
+
+## What's new
+
+The following new features are available in the Azure Stack Edge 2105 release.
+
+- **Virtual Local Area Network (VLAN) configuration support** - In this release, the virtual local area network (VLAN) configuration can be changed by connecting to the PowerShell interface of the device. For more information, see [Create vLAN networks on virtual switch](azure-stack-edge-gpu-create-virtual-switch-powershell.md).
+- **IP Forwarding support** - Beginning this release, IP forwarding is supported for network interfaces attached to Virtual Machines (VMs).
+ - IP forwarding enables VMs to receive network traffic from an IP not assigned to any of the IP configurations assigned to a network interface on the VM.
+ - IP forwarding also lets VMs send network traffic with a different source IP address than the one assigned to the IP configurations for the VM's network interface.
+
+ For more information, see [Enable or disable IP forwarding](../virtual-network/virtual-network-network-interface.md#enable-or-disable-ip-forwarding).
+- **Support for Az cmdlets** - Starting this release, the Az cmdlets are available (in preview) when connecting to the local Azure Resource Manager of the device or when deploying VM workloads. For more information, see [Az cmdlets](/powershell/azure/new-azureps-module-az?view=azps-5.9.0&preserve-view=true).
+- **Enable remote PowerShell session over HTTP** - Starting this release, you can enable a remote PowerShell session into a device over *http* via the local UI. For more information, see how to [Enable Remote PowerShell over http](azure-stack-edge-gpu-manage-access-power-connectivity-mode.md#enable-device-access-via-remote-powershell-over-http) for your device.
++
+## Issues fixed in 2105 release
+
+The following table lists the issues that were release noted in previous releases and fixed in the current release.
+
+| No. | Feature | Issue |
+| | | |
+|**1.**|VM |Failure during DHCP lease renewal should not cause network interface record to be removed.|
+|**2.**|VM | Monitoring improvements to resolve locking issue when provisioning VMs.|
+
+## Known issues in 2105 release
+
+The following table provides a summary of known issues in the 2105 release.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+|**1.**|Preview features |For this release, the following features: Local Azure Resource Manager, VMs, Cloud management of VMs, Kubernetes cloud management, Azure Arc enabled Kubernetes, VPN for Azure Stack Edge Pro R and Azure Stack Edge Mini R, Multi-process service (MPS) for Azure Stack Edge Pro GPU - are all available in preview. |These features will be generally available in later releases. |
++
+## Known issues from previous releases
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+| **1.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <ul><li>In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**</li><li>Download `sqlcmd` on your client machine from [SQL command utility](/sql/tools/sqlcmd-utility). </li><li>Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.</li><li>Final command will look like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd".</li>After this, steps 3-4 from the current documentation should be identical. </li></ul> |
+| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** are NOT supported |For Blob endpoints, partial updates of blobs after a Refresh, may result in the updates not getting uploaded to the cloud. For example, sequence of actions such as:<ul><li>Create blob in cloud. Or delete a previously uploaded blob from the device.</li><li>Refresh blob from the cloud into the appliance using the refresh functionality.</li><li>Update only a portion of the blob using Azure SDK REST APIs.</li></ul>These actions can result in the updated sections of the blob to not get updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.|
+|**3.**|Throttling|During throttling, if new writes to the device aren't allowed, writes by the NFS client fail with a "Permission Denied" error.| The error will show as below:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>mkdir: cannot create directory 'test': Permission denied|
+|**4.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits aren't provided for AzCopy, it could potentially send a large number of requests to the device, resulting in issues with the service.|
+|**5.**|Tiered storage accounts|The following apply when using tiered storage accounts:<ul><li> Only block blobs are supported. Page blobs are not supported.</li><li>There is no snapshot or copy API support.</li><li> Hadoop workload ingestion through `distcp` is not supported as it uses the copy operation heavily.</li></ul>||
+|**6.**|NFS share connection|If multiple processes are copying to the same share, and the `nolock` attribute isn't used, you may see errors during the copy.|The `nolock` attribute must be passed to the mount command to copy files to the NFS share. For example: `C:\Users\aseuser mount -o anon \\10.1.1.211\mnt\vms Z:`.|
+|**7.**|Kubernetes cluster|When applying an update on your device that is running a Kubernetes cluster, the Kubernetes virtual machines will restart and reboot. In this instance, only pods that are deployed with replicas specified are automatically restored after an update. |If you have created individual pods outside a replication controller without specifying a replica set, these pods won't be restored automatically after the device update. You will need to restore these pods.<br>A replica set replaces pods that are deleted or terminated for any reason, such as node failure or disruptive node upgrade. For this reason, we recommend that you use a replica set even if your application requires only a single pod.|
+|**8.**|Kubernetes cluster|Kubernetes on Azure Stack Edge Pro is supported only with Helm v3 or later. For more information, go to [Frequently asked questions: Removal of Tiller](https://v3.helm.sh/docs/faq/).|
+|**9.**|Azure Arc enabled Kubernetes |For the GA release, Azure Arc enabled Kubernetes is updated from version 0.1.18 to 0.2.9. As the Azure Arc enabled Kubernetes update is not supported on Azure Stack Edge device, you will need to redeploy Azure Arc enabled Kubernetes.|Follow these steps:<ol><li>[Apply device software and Kubernetes updates](azure-stack-edge-gpu-install-update.md).</li><li>Connect to the [PowerShell interface of the device](azure-stack-edge-gpu-connect-powershell-interface.md).</li><li>Remove the existing Azure Arc agent. Type: `Remove-HcsKubernetesAzureArcAgent`.</li><li>Deploy [Azure Arc to a new resource](azure-stack-edge-gpu-deploy-arc-kubernetes-cluster.md). Do not use an existing Azure Arc resource.</li></ol>|
+|**10.**|Azure Arc enabled Kubernetes|Azure Arc deployments are not supported if web proxy is configured on your Azure Stack Edge Pro device.||
+|**11.**|Kubernetes |Port 31000 is reserved for Kubernetes Dashboard. Port 31001 is reserved for Edge container registry. Similarly, in the default configuration, the IP addresses 172.28.0.1 and 172.28.0.10, are reserved for Kubernetes service and Core DNS service respectively.|Do not use reserved IPs.|
+|**12.**|Kubernetes |Kubernetes does not currently allow multi-protocol LoadBalancer services. For example, a DNS service that would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. <br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing).|
+|**13.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules may require modifications to run on IoT Edge on Azure Stack Edge device.|For more information, see Modify Azure IoT Edge modules from marketplace to run on Azure Stack Edge device.<!-- insert link-->|
+|**14.**|Kubernetes |File-based bind mounts aren't supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts cannot be bound to paths in IoT Edge containers. If possible, map the parent directory.|
+|**15.**|Kubernetes |If you bring your own certificates for IoT Edge and add those certificates on your Azure Stack Edge device after the compute is configured on the device, the new certificates are not picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.|
+|**16.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected.<ul><li>**Status** column in **Certificates** page.</li><li>**Security** tile in **Get started** page.</li><li>**Configuration** tile in **Overview** page.</li></ul> |
+|**17.**|IoT Edge |Modules deployed through IoT Edge can't use host network. | |
+|**18.**|Compute + Kubernetes |Compute/Kubernetes does not support NTLM web proxy. ||
+|**19.**|Kubernetes + update |Earlier software versions such as 2008 releases have a race condition update issue that causes the update to fail with ClusterConnectionException. |Using the newer builds should help avoid this issue. If you still see this issue, the workaround is to retry the upgrade, and it should work.|
+|**20**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.|
+|**21.**|Kubernetes Dashboard | *Https* endpoint for Kubernetes Dashboard with SSL certificate is not supported. | |
+|**22.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" with a double underscore. For more information, see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|
+|**23.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). |
+|**24.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| |
+|**25.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that do not exist on the network.| |
+|**26.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it is not possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create 1 VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available 1 GPU. |
+|**27.**|Custom script VM extension |There is a known issue in the Windows VMs that were created in an earlier release and the device was updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update causing the extension deployment to time out. | To work around this issue: <ul><li> Connect to the Windows VM using remote desktop protocol (RDP). </li><li> Make sure that the `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. </li><li> If the `waappagent.exe` is not running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.</li><li> While the `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. </li><li>After you kill the process, the process starts running again with the newer version.</li><li>Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.</li><li>[Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). </li><ul> |
+|**28.**|GPU VMs |Prior to this release, GPU VM lifecycle was not managed in the update flow. Hence, when updating to 2103 release, GPU VMs are not stopped automatically during the update. You will need to manually stop the GPU VMs using a `stop-stayProvisioned` flag before you update your device. For more information, see [Suspend or shut down the VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#suspend-or-shut-down-the-vm).<br> All the GPU VMs that are kept running before the update, are started after the update. In these instances, the workloads running on the VMs aren't terminated gracefully. And the VMs could potentially end up in an undesirable state after the update. <br>All the GPU VMs that are stopped via the `stop-stayProvisioned` before the update, are automatically started after the update. <br>If you stop the GPU VMs via the Azure portal, you'll need to manually start the VM after the device update.| If running GPU VMs with Kubernetes, stop the GPU VMs right before the update. <br>When the GPU VMs are stopped, Kubernetes will take over the GPUs that were used originally by VMs. <br>The longer the GPU VMs are in stopped state, higher the chances that Kubernetes will take over the GPUs. |
+|**29.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting is not retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
++
+## Next steps
+
+- [Update your device](azure-stack-edge-gpu-install-update.md)
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-install-update.md
Previously updated : 03/25/2021 Last updated : 05/13/2021 # Update your Azure Stack Edge Pro GPU
This article describes the steps required to install update on your Azure Stack
The procedure described in this article was performed using a different version of software, but the process remains the same for the current software version. > [!IMPORTANT]
-> - Update **2103** is the current update and corresponds to:
-> - Device software version - **2.2.1540.2890**
-> - Kubernetes server version - **v1.17.3**
-> - IoT Edge version: **0.1.0-beta13**
+> - Update **2105** is the current update and corresponds to:
+> - Device software version - **2.2.1592.3244**
+> - Kubernetes server version - **v1.20.2**
+> - IoT Edge version: **0.1.0-beta14**
> - GPU driver version: **460.32.03** > - CUDA version: **11.2** >
-> For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2103-release-notes.md).
-> - To apply 2103 update, your device must be running 2010. If you are not running the minimal supported version, you'll see this error: *Update package cannot be installed as its dependencies are not met*.
+> For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2105-release-notes.md).
+> - To apply 2105 update, your device must be running 2010. If you are not running the minimal supported version, you'll see this error: *Update package cannot be installed as its dependencies are not met*.
> - This update requires you to apply two updates sequentially. First you apply the device software updates and then the Kubernetes updates.
-> - Keep in mind that installing an update or hotfix restarts your device. This update contains the device software updates and the Kubernetes updates. Given that the Azure Stack Edge Pro is a single node device, any I/O in progress is disrupted and your device experiences a downtime of up to 1.5 hours for the update.
+> - Keep in mind that installing an update or hotfix restarts your device. This update contains the device software updates and the Kubernetes updates. Given that the Azure Stack Edge Pro GPU is a single node device, any I/O in progress is disrupted and your device experiences a downtime of up to 1.5 hours for the update.
To install updates on your device, you first need to configure the location of the update server. After the update server is configured, you can apply the updates via the Azure portal UI or the local web UI.
Do the following steps to download the update from the Microsoft Update Catalog.
![Search catalog](./media/azure-stack-edge-gpu-install-update/download-update-1.png)
-2. In the search box of the Microsoft Update Catalog, enter the Knowledge Base (KB) number of the hotfix or terms for the update you want to download. For example, enter **Azure Stack Edge Pro**, and then click **Search**.
+2. In the search box of the Microsoft Update Catalog, enter the Knowledge Base (KB) number of the hotfix or terms for the update you want to download. For example, enter **Azure Stack Edge**, and then click **Search**.
- The update listing appears as **Azure Stack Edge Update 2103**.
+ The update listing appears as **Azure Stack Edge Update 2105**.
<!--![Search catalog 2](./media/azure-stack-edge-gpu-install-update/download-update-2-b.png)-->
-4. Select **Download**. There are two packages to download: KB 4613486 and KB 46134867 that correspond to the device software updates (*SoftwareUpdatePackage.exe*) and Kubernetes updates (*Kubernetes_Package.exe*) respectively. Download the packages to a folder on the local system. You can also copy the folder to a network share that is reachable from the device.
+4. Select **Download**. There are two packages to download: KB 4616970 and KB 4616971 that correspond to the device software updates (*SoftwareUpdatePackage.exe*) and Kubernetes updates (*Kubernetes_Package.exe*) respectively. Download the packages to a folder on the local system. You can also copy the folder to a network share that is reachable from the device.
### Install the update or the hotfix
This procedure takes around 20 minutes to complete. Perform the following steps
5. The update starts. After the device is successfully updated, it restarts. The local UI is not accessible in this duration.
-6. After the restart is complete, you are taken to the **Sign in** page. To verify that the device software has been updated, in the local web UI, go to **Maintenance** > **Software update**. For the current release, the displayed software version should be **Azure Stack Edge 2103**.
+6. After the restart is complete, you are taken to the **Sign in** page. To verify that the device software has been updated, in the local web UI, go to **Maintenance** > **Software update**. For the current release, the displayed software version should be **Azure Stack Edge 2105**.
<!--![update device 6](./media/azure-stack-edge-gpu-install-update/local-ui-update-6.png)-->
databox-online Azure Stack Edge Gpu Manage Access Power Connectivity Mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-access-power-connectivity-mode.md
In this article, you learn how to:
The access to your Azure Stack Edge Pro device is controlled by the use of a device password. You can change the password via the local web UI. You can also reset the device password in the Azure portal.
-The access to data on the device disks is also controlled by encryption-at-rest keys.
+The access to data on the device disks is also controlled by encryption-at-rest keys.
+
+You can access the device by opening a remote PowerShell session over HTTP or HTTPS from the local web UI of the device.
### Change device password
Follow these steps to rotate the encryption-at-rest keys.
![Screenshot shows the Reset device password dialog box.](media/azure-stack-edge-manage-access-power-connectivity-mode/reset-password-2.png)
+## Enable device access via remote PowerShell over HTTP
+
+You can open a remote PowerShell session to your device over HTTP or HTTPS. By default, you access the device via a PowerShell session over HTTPS. However, in trusted networks, it is acceptable to enable remote PowerShell over HTTP.
+
+Follow these steps in the local UI to enable remote PowerShell over HTTP:
+
+1. In the local UI of your device, go to **Settings** from the top right corner of the page.
+1. Select **Enable** to allow you to open a remote PowerShell session for your device over HTTP. This setting should be enabled only in trusted networks.
+
+ ![Screenshot shows Enable remote PowerShell over HTTP setting.](media/azure-stack-edge-gpu-manage-access-power-connectivity-mode/enable-remote-powershell-http-1.png)
+
+1. Select **Apply**.
+
+You can now connect to the PowerShell interface of the device over HTTP. For details, see [Connect to the PowerShell interface of your device](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface).
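As a rough sketch only (the device IP is a placeholder and the `Minishell` configuration name is an assumption; follow the linked article for the supported steps), an HTTP session from a Windows client looks something like this:

```powershell
# Placeholder device IP; "Minishell" is assumed to be the device's session configuration name.
$deviceIp = "10.126.76.20"

# Trust the device for WinRM over HTTP (do this only on trusted networks).
Set-Item WSMan:\localhost\Client\TrustedHosts $deviceIp -Concatenate -Force

# Omit -UseSSL because this session runs over HTTP.
Enter-PSSession -ComputerName $deviceIp -ConfigurationName "Minishell" -Credential (Get-Credential)
```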
+ ## Manage resource access To create your Azure Stack Edge / Data Box Gateway, IoT Hub, and Azure Storage resource, you need permissions as a contributor or higher at a resource group level. You also need the corresponding resource providers to be registered. For any operations that involve activation key and credentials, permissions to the Microsoft Graph API are also required. These are described in the following sections.
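For example, registering the providers can be scripted. The namespaces below are the ones commonly associated with these resources and are an assumption, not taken from this article:

```powershell
# Register the resource providers typically needed for Azure Stack Edge / Data Box Gateway,
# IoT Hub, and Azure Storage resources.
"Microsoft.DataBoxEdge", "Microsoft.Devices", "Microsoft.Storage" | ForEach-Object {
    Register-AzResourceProvider -ProviderNamespace $_
}
```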
databox-online Azure Stack Edge Gpu System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-system-requirements.md
We recommend that you set your firewall rules for outbound traffic, based on Azu
| https:\//mcr.microsoft.com<br></br>https://\*.cdn.mscr.io | Microsoft container registry (required) | | https://\*.azurecr.io | Personal and third-party container registries (optional) | | https://\*.azure-devices.net | IoT Hub access (required) |
+| https://\*.docker.com | StorageClass (required) |
### URL patterns for monitoring
databox-online Azure Stack Edge Mini R System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-mini-r-system-requirements.md
We recommend that you set your firewall rules for outbound traffic, based on Azu
| https:\//mcr.microsoft.com<br></br>https://\*.cdn.mscr.io | Microsoft container registry (required) | | https://\*.azurecr.io | Personal and third-party container registries (optional) | | https://\*.azure-devices.net | IoT Hub access (required) |
+| https://\*.docker.com | StorageClass (required) |
### URL patterns for gateway for Azure Government
databox-online Azure Stack Edge Mini R Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-mini-r-technical-specifications-compliance.md
Previously updated : 05/04/2021 Last updated : 05/12/2021 # Azure Stack Edge Mini R technical specifications
The Azure Stack Edge Mini R device has 1 data disk and 1 boot disk (that serves
| Total capacity (data only) | 1 TB | | Total usable capacity* | ~ 750 GB |
-**Some space is reserved for internal use.*
+*Some space is reserved for internal use.*
## Network
The Azure Stack Edge Mini R device has the following specifications for the netw
|Specification |Value | |-|--|
-|Network interfaces |2 x 10 Gbps SFP+ <br> Shown as PORT 3 and PORT 4 in the local UI |
-|Network interfaces |2 x 1 Gbps RJ45 <br> Shown as PORT 1 and PORT 2 in the local UI |
+|Network interfaces |2 x 10 Gbps SFP+ <br> Shown as PORT 3 and PORT 4 in the local UI |
+|Network interfaces |2 x 1 Gbps RJ45 <br> Shown as PORT 1 and PORT 2 in the local UI |
|Wi-Fi |802.11ac |
-|Specification |Value |
-|||
-|Network interfaces |2 x 10 GbE SFP+ <br> Shown as PORT 3 and PORT 4 in the local UI |
-|Network interfaces |2 x 1 GbE RJ45 <br> Shown as PORT 1 and PORT 2 in the local UI |
-|Wi-Fi |802.11ac |
+## Routers and switches
The following routers and switches are compatible with the 10 Gbps SFP+ network interfaces (Port 3 and Port 4) on your Azure Stack Edge Mini R devices:
The following routers and switches are compatible with the 10 Gbps SFP+ network
## Transceivers, cables
-The following copper SFP+ (10 Gbps) transceivers and cables are strongly recommended for use with Azure Stack Edge Mini R devices. Compatible fiber-optic cables can be used with SFP+ network interfaces (Port 3 and Port 4) but have not been tested.
+The following copper SFP+ (10 Gbps) transceivers and cables are strongly recommended for use with Azure Stack Edge Mini R devices. Compatible fiber-optic cables can be used with SFP+ network interfaces (Port 3 and Port 4) but have not been tested.
|SFP+ transceiver type |Supported cables | Notes | |-|--|-|
The Azure Stack Edge Mini R device also includes an onboard battery that is char
An additional [Type 2590 battery](https://www.bren-tronics.com/bt-70791ck.html) can be used along with the onboard battery to extend the use of the device between the charges. This battery should be compliant with all the safety, transportation, and environmental regulations applicable in the country of use. - | Specification | Value | |--|| | Onboard battery capacity | 73 Wh |
databox-online Azure Stack Edge Pro R System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-pro-r-system-requirements.md
We recommend that you set your firewall rules for outbound traffic, based on Azu
| https:\//mcr.microsoft.com<br></br>https://\*.cdn.mscr.com | Microsoft container registry (required) | | https://\*.azure-devices.us | IoT Hub access (required) | | https://\*.azurecr.us | Personal and third-party container registries (optional) |
+| https://\*.docker.com | StorageClass (required) |
## Internet bandwidth
databox-online Azure Stack Edge Troubleshoot Virtual Machine Image Upload https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-troubleshoot-virtual-machine-image-upload.md
+
+ Title: Troubleshoot virtual machine image uploads in Azure Stack Edge Pro GPU | Microsoft Docs
+description: Describes how to troubleshoot issues that occur when uploading, downloading, or deleting virtual machine images in Azure Stack Edge Pro GPU.
++++++ Last updated : 05/13/2021++
+# Troubleshoot virtual machine image uploads in Azure Stack Edge Pro GPU
++
+This article describes how to troubleshoot issues that occur when downloading and managing virtual machine (VM) images on an Azure Stack Edge Pro GPU device.
++
+## Unable to add VM image to blob container
+
+**Error Description:** In the Azure portal, when you try to upload a VM image to a blob container, the **Add** button isn't available, so the image can't be uploaded. The **Add** button is unavailable when you don't have the required contributor role permissions to the resource group or subscription for the device.
+
+**Suggested solution:** Make sure you have the required contributor permissions to add files to the resource group or storage account. For more information, see [Prerequisites for the Azure Stack Edge resource](azure-stack-edge-deploy-prep.md#prerequisites).
++
+## Invalid blob type for the source blob URI
+
+**Error Description:** A VHD stored as a block blob cannot be downloaded. To be downloaded, a VHD must be stored as a page blob.
+
+**Suggested solution:** Upload the VHD to the Azure Storage account as a page blob. Then download the blob. For upload instructions, see [Use Storage Explorer for upload](azure-stack-edge-gpu-deploy-virtual-machine-templates.md#use-storage-explorer-for-upload).
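If you prefer to script the upload rather than use Storage Explorer, the following is a minimal Azure PowerShell sketch. It assumes the Az.Storage module; the account name, key, container, and file path are placeholders.

```powershell
# Build a context for the target storage account (placeholder name and key).
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"

# Upload the fixed-size VHD as a *page* blob so it can then be downloaded to the device.
Set-AzStorageBlobContent -File "C:\vhd\source-fixed.vhd" -Container "<container>" `
    -Blob "source-fixed.vhd" -BlobType Page -Context $ctx
```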
++
+## Only blobs formatted as VHDs can be imported
+
+**Error Description:** The VHD can't be imported because it doesn't meet formatting requirements. To be imported, a virtual hard disk must be a fixed-size, Generation 1 VHD.
+
+**Suggested solutions:**
+
+- Follow the steps in [Prepare generalized image from Windows VHD to deploy VMs on Azure Stack Edge Pro GPU](azure-stack-edge-gpu-prepare-windows-vhd-generalized-image.md) to create a fixed-size VHD for a Generation 1 virtual machine from your source VHD or VHDX.
+
+- If you prefer to use PowerShell (a short sketch follows this list):
+
+ - You can use [Convert-VHD](/powershell/module/hyper-v/convert-vhd?view=windowsserver2019-ps&preserve-view=true) in the Windows PowerShell module for Hyper-V. You can't use Convert-VHD to convert a VM image from a Generation 2 VM to Generation 1; instead, use the portal procedures in [Prepare generalized image from Windows VHD to deploy VMs on Azure Stack Edge Pro GPU](azure-stack-edge-gpu-prepare-windows-vhd-generalized-image.md).
+
+ - If you need to find out the current VHD type, use [Get-VHD](/powershell/module/hyper-v/get-vhd?view=windowsserver2019-ps&preserve-view=true).
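A minimal sketch of the PowerShell route, assuming the Hyper-V module is installed and using placeholder paths:

```powershell
# Inspect the source disk; VhdFormat and VhdType show whether a conversion is needed.
Get-VHD -Path "C:\vhd\source.vhdx"

# Produce a fixed-size VHD (the source must come from a Generation 1 VM).
Convert-VHD -Path "C:\vhd\source.vhdx" -DestinationPath "C:\vhd\source-fixed.vhd" -VHDType Fixed
```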
++
+## The condition specified using HTTP conditional header(s) is not met
+
+**Error Description:** If any changes are being made to a VHD when you try to download it from Azure, the download will fail because the VHD in Azure won't match the VHD being downloaded. This error also occurs when a download is attempted before the upload of the VHD to Azure has completed.
+
+**Suggested solution:** Wait until the upload of the VHD has completed and no changes are being made to the VHD. Then try downloading the VHD again.
++
+## Next steps
+
+* Learn how to [Deploy VMs via the Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md).
+* Learn how to [Deploy VMs via Azure PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md).
devtest-labs Devtest Lab Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-concepts.md
Title: DevTest Labs concepts | Microsoft Docs description: Learn the basic concepts of DevTest Labs, and how it can make it easy to create, manage, and monitor Azure virtual machines Previously updated : 06/26/2020 Last updated : 05/13/2021 # DevTest Labs concepts
An Azure Claimable VM is a virtual machine that is available for use by any lab
A VM that is claimable is not initially assigned to any particular user, but will show up in every user's list under "Claimable virtual machines". After a VM is claimed by a user, it is moved up to their "My virtual machines" area and is no longer claimable by any other user. ## Environment
-In DevTest Labs, an environment refers to a collection of Azure resources in a lab. [This blog post](./devtest-lab-faq.md#blog-post) discusses how to create multi-VM environments from your Azure Resource Manager templates.
+In DevTest Labs, an environment refers to a collection of Azure resources in a lab. [This article](./devtest-lab-create-environment-from-arm.md) discusses how to create multi-VM environments from your Azure Resource Manager templates.
## Base images Base images are VM images with all the tools and settings preinstalled and configured to quickly create a VM. You can provision a VM by picking an existing base and adding an artifact to install your test agent. You can then save the provisioned VM as a base so that the base can be used without having to reinstall the test agent for each provisioning of the VM.
dms Tutorial Postgresql Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-postgresql-azure-postgresql-online-portal.md
To complete all the database objects like table schemas, indexes and stored proc
psql -h mypgserver-20170401.postgres.database.azure.com -U postgres -d dvdrental citus < dvdrentalSchema.sql ```
-4. To extract the drop foreign key script and add it at the destination (Azure Database for PostgreSQL), in PgAdmin or in psql, run the following script.
-
- > [!IMPORTANT]
- > Foreign keys in your schema will cause the initial load and continuous sync of the migration to fail.
-
- ```
- SELECT Q.table_name
- ,CONCAT('ALTER TABLE ', table_schema, '.', table_name, STRING_AGG(DISTINCT CONCAT(' DROP CONSTRAINT ', foreignkey), ','), ';') as DropQuery
- ,CONCAT('ALTER TABLE ', table_schema, '.', table_name, STRING_AGG(DISTINCT CONCAT(' ADD CONSTRAINT ', foreignkey, ' FOREIGN KEY (', column_name, ')', ' REFERENCES ', foreign_table_schema, '.', foreign_table_name, '(', foreign_column_name, ')' ), ','), ';') as AddQuery
- FROM
- (SELECT
- S.table_schema,
- S.foreignkey,
- S.table_name,
- STRING_AGG(DISTINCT S.column_name, ',') AS column_name,
- S.foreign_table_schema,
- S.foreign_table_name,
- STRING_AGG(DISTINCT S.foreign_column_name, ',') AS foreign_column_name
- FROM
- (SELECT DISTINCT
- tc.table_schema,
- tc.constraint_name AS foreignkey,
- tc.table_name,
- kcu.column_name,
- ccu.table_schema AS foreign_table_schema,
- ccu.table_name AS foreign_table_name,
- ccu.column_name AS foreign_column_name
- FROM information_schema.table_constraints AS tc
- JOIN information_schema.key_column_usage AS kcu ON tc.constraint_name = kcu.constraint_name AND tc.table_schema = kcu.table_schema
- JOIN information_schema.constraint_column_usage AS ccu ON ccu.constraint_name = tc.constraint_name AND ccu.table_schema = tc.table_schema
- WHERE constraint_type = 'FOREIGN KEY'
- ) S
- GROUP BY S.table_schema, S.foreignkey, S.table_name, S.foreign_table_schema, S.foreign_table_name
- ) Q
- GROUP BY Q.table_schema, Q.table_name;
- ```
-
-5. Run the drop foreign key (which is the second column) in the query result.
-
-6. To disable triggers in target database, run the script below.
-
- > [!IMPORTANT]
- > Triggers (insert or update) in the data enforce data integrity in the target ahead of the data being replicated from the source. As a result, it's recommended that you disable triggers in all the tables **at the target** during migration, and then re-enable the triggers after migration is complete.
+ > [!NOTE]
+ > The migration service internally handles the enable/disable of foreign keys and triggers to ensure a reliable and robust data migration. As a result, you do not have to worry about making any modifications to the target database schema.
- ```
- SELECT DISTINCT CONCAT('ALTER TABLE ', event_object_schema, '.', event_object_table, ' DISABLE TRIGGER ', trigger_name, ';')
- FROM information_schema.triggers
- ```
## Register the Microsoft.DataMigration resource provider
dms Tutorial Postgresql Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-postgresql-azure-postgresql-online.md
To complete all the database objects like table schemas, indexes and stored proc
psql -h mypgserver-20170401.postgres.database.azure.com -U postgres -d dvdrental < dvdrentalSchema.sql ```
-4. If you have foreign keys in your schema, the initial load and continuous sync of the migration will fail. Execute the following script in PgAdmin or in psql to extract the drop foreign key script and add foreign key script at the destination (Azure Database for PostgreSQL).
-
- ```
- SELECT Queries.tablename
- ,concat('alter table ', Queries.tablename, ' ', STRING_AGG(concat('DROP CONSTRAINT ', Queries.foreignkey), ',')) as DropQuery
- ,concat('alter table ', Queries.tablename, ' ',
- STRING_AGG(concat('ADD CONSTRAINT ', Queries.foreignkey, ' FOREIGN KEY (', column_name, ')', 'REFERENCES ', foreign_table_name, '(', foreign_column_name, ')' ), ',')) as AddQuery
- FROM
- (SELECT
- tc.table_schema,
- tc.constraint_name as foreignkey,
- tc.table_name as tableName,
- kcu.column_name,
- ccu.table_schema AS foreign_table_schema,
- ccu.table_name AS foreign_table_name,
- ccu.column_name AS foreign_column_name
- FROM
- information_schema.table_constraints AS tc
- JOIN information_schema.key_column_usage AS kcu
- ON tc.constraint_name = kcu.constraint_name
- AND tc.table_schema = kcu.table_schema
- JOIN information_schema.constraint_column_usage AS ccu
- ON ccu.constraint_name = tc.constraint_name
- AND ccu.table_schema = tc.table_schema
- WHERE constraint_type = 'FOREIGN KEY') Queries
- GROUP BY Queries.tablename;
- ```
-
- Run the drop foreign key (which is the second column) in the query result.
-
-5. Triggers in the data (insert or update triggers) will enforce data integrity in the target ahead of the replicated data from the source. It's recommended that you disable triggers in all the tables **at the target** during migration and then re-enable the triggers after migration is complete.
-
- To disable triggers in target database, use the following command:
-
- ```
- select concat ('alter table ', event_object_table, ' disable trigger ', trigger_name)
- from information_schema.triggers;
- ```
-
-6. If there is an ENUM data type in any tables, it's recommended that you temporarily update it to a 'character varying' datatype in the target table. After data replication is done, revert the datatype to ENUM.
+ > [!NOTE]
+ > The migration service internally handles the enable/disable of foreign keys and triggers to ensure a reliable and robust data migration. As a result, you do not have to worry about making any modifications to the target database schema.
## Provisioning an instance of DMS using the Azure CLI
dms Tutorial Rds Postgresql Server Azure Db For Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-rds-postgresql-server-azure-db-for-postgresql-online.md
To complete this tutorial, you need to:
psql -h mypgserver-20170401.postgres.database.azure.com -U postgres -d dvdrental < dvdrentalSchema.sql ```
-4. If you have foreign keys in your schema, the initial load and continuous sync of the migration will fail. To extract the drop foreign key script and add foreign key script at the destination (Azure Database for PostgreSQL), run the following script in PgAdmin or in psql:
-
- ```
- SELECT Queries.tablename
- ,concat('alter table ', Queries.tablename, ' ', STRING_AGG(concat('DROP CONSTRAINT ', Queries.foreignkey), ',')) as DropQuery
- ,concat('alter table ', Queries.tablename, ' ',
- STRING_AGG(concat('ADD CONSTRAINT ', Queries.foreignkey, ' FOREIGN KEY (', column_name, ')', 'REFERENCES ', foreign_table_name, '(', foreign_column_name, ')' ), ',')) as AddQuery
- FROM
- (SELECT
- tc.table_schema,
- tc.constraint_name as foreignkey,
- tc.table_name as tableName,
- kcu.column_name,
- ccu.table_schema AS foreign_table_schema,
- ccu.table_name AS foreign_table_name,
- ccu.column_name AS foreign_column_name
- FROM
- information_schema.table_constraints AS tc
- JOIN information_schema.key_column_usage AS kcu
- ON tc.constraint_name = kcu.constraint_name
- AND tc.table_schema = kcu.table_schema
- JOIN information_schema.constraint_column_usage AS ccu
- ON ccu.constraint_name = tc.constraint_name
- AND ccu.table_schema = tc.table_schema
- WHERE constraint_type = 'FOREIGN KEY') Queries
- GROUP BY Queries.tablename;
- ```
-
-5. Run the drop foreign key (which is the second column) in the query result to drop the foreign key.
-
-6. If you have triggers (insert or update trigger) in the data, it will enforce data integrity in the target before replicating data from the source. The recommendation is to disable triggers in all the tables *at the target* during migration, and then enable the triggers after migration is complete.
-
- To disable triggers in target database:
-
- ```
- SELECT Concat('DROP TRIGGER ', Trigger_Name,' ON ', event_object_table, ';') FROM information_schema.TRIGGERS WHERE TRIGGER_SCHEMA = 'your_schema';
- ```
+ > [!NOTE]
+ > The migration service internally handles the enable/disable of foreign keys and triggers to ensure a reliable and robust data migration. As a result, you do not have to worry about making any modifications to the target database schema.
## Register the Microsoft.DataMigration resource provider
event-grid Event Schema Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-schema-blob-storage.md
Title: Azure Blob Storage as Event Grid source description: Describes the properties that are provided for blob storage events with Azure Event Grid Previously updated : 02/11/2021 Last updated : 05/12/2021 # Azure Blob Storage as an Event Grid source
These events are triggered when a client creates, replaces, or deletes a blob by
|Event name |Description| |-|--|
- |**Microsoft.Storage.BlobCreated** |Triggered when a blob is created or replaced. <br>Specifically, this event is triggered when clients use the `PutBlob`, `PutBlockList`, or `CopyBlob` operations that are available in the Blob REST API. |
+ |**Microsoft.Storage.BlobCreated** |Triggered when a blob is created or replaced. <br>Specifically, this event is triggered when clients use the `PutBlob`, `PutBlockList`, or `CopyBlob` operations that are available in the Blob REST API **and** when the Block Blob is completely committed. <br><br>If clients use the `CopyBlob` operation on accounts that have the **hierarchical namespace** feature enabled on them, the `CopyBlob` operation works a little differently. In that case, the **Microsoft.Storage.BlobCreated** event is triggered when the `CopyBlob` operation is **initiated** instead of when the Block Blob is completely committed. |
|**Microsoft.Storage.BlobDeleted** |Triggered when a blob is deleted. <br>Specifically, this event is triggered when clients call the `DeleteBlob` operation that is available in the Blob REST API. |
-> [!NOTE]
-> For **Azure Blob Storage**, if you want to ensure that the **Microsoft.Storage.BlobCreated** event is triggered only when a Block Blob is completely committed, filter the event for the `CopyBlob`, `PutBlob`, and `PutBlockList` REST API calls. These API calls trigger the **Microsoft.Storage.BlobCreated** event only after data is fully committed to a Block Blob. To learn how to create a filter, see [Filter events for Event Grid](./how-to-filter-events.md).
- ### List of the events for Azure Data Lake Storage Gen 2 REST APIs These events are triggered if you enable a hierarchical namespace on the storage account, and clients use Azure Data Lake Storage Gen2 REST APIs. For more information about Azure Data Lake Storage Gen2, see [Introduction to Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md)
firewall Quick Create Multiple Ip Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/quick-create-multiple-ip-template.md
For more information about Azure Firewall with multiple public IP addresses, see
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Ffw-docs-qs%2Fazuredeploy.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Ffw-docs-qs%2Fazuredeploy.json)
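If you'd rather deploy the same template from the command line, a minimal Azure PowerShell sketch follows. The resource group name and location are placeholders, and PowerShell prompts for any mandatory template parameters (such as the admin credentials).

```powershell
# Create a resource group and deploy the quickstart template into it (placeholder name and location).
New-AzResourceGroup -Name "rg-fw-quickstart" -Location "eastus"
New-AzResourceGroupDeployment -ResourceGroupName "rg-fw-quickstart" `
    -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/fw-docs-qs/azuredeploy.json"
```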
## Prerequisites
This template creates an Azure Firewall with two public IP addresses, along with
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/fw-docs-qs). Multiple Azure resources are defined in the template:
Deploy the ARM template to Azure:
1. Select **Deploy to Azure** to sign in to Azure and open the template. The template creates an Azure Firewall, the network infrastructure, and two virtual machines.
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Ffw-docs-qs%2Fazuredeploy.json)
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Ffw-docs-qs%2Fazuredeploy.json)
2. In the portal, on the **Create an Azure Firewall with multiple IP public addresses** page, type or select the following values:
- - Subscription: Select from existing subscriptions
+ - Subscription: Select from existing subscriptions
- Resource group: Select from existing resource groups or select **Create new**, and select **OK**. - Location: Select a location
- - Admin Username: Type username for the administrator user account
+ - Admin Username: Type username for the administrator user account
- Admin Password: Type an administrator password or key 3. Select **I agree to the terms and conditions stated above** and then select **Purchase**. The deployment can take 10 minutes or longer to complete. ## Validate the deployment
-In the Azure portal, review the deployed resources. Note the firewall public IP addresses.
+In the Azure portal, review the deployed resources. Note the firewall public IP addresses.
Use Remote Desktop Connection to connect to the firewall public IP addresses. Successful connections demonstrate that the firewall NAT rules allow the connection to the backend servers.
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/policy-for-kubernetes.md
Title: Learn Azure Policy for Kubernetes description: Learn how Azure Policy uses Rego and Open Policy Agent to manage clusters running Kubernetes in Azure or on-premises. Previously updated : 03/22/2021 Last updated : 05/13/2021
you want to manage.
1. In the main page, select the **Enable add-on** button.
- <a name="migrate-from-v1"></a>
- > [!NOTE]
- > If the **Disable add-on** button is enabled and a migration warning v2 message is displayed,
- > v1 add-on is installed and must be removed prior to assigning v2 policy definitions. The
- > _deprecated_ v1 add-on will automatically be replaced with the v2 add-on starting August 24,
- > 2020. New v2 versions of the policy definitions must then be assigned. To upgrade now, follow
- > these steps:
- >
- > 1. Validate your AKS cluster has the v1 add-on installed by visiting the **Policies** page on
- > your AKS cluster and has the "The current cluster uses Azure Policy add-on v1..." message.
- > 1. [Remove the add-on](#remove-the-add-on-from-aks).
- > 1. Select the **Enable add-on** button to install the v2 version of the add-on.
- > 1. [Assign v2 versions of your v1 built-in policy definitions](#assign-a-built-in-policy-definition)
- - Azure CLI ```azurecli-interactive
kubectl get pods -n gatekeeper-system
Lastly, verify that the latest add-on is installed by running this Azure CLI command, replacing `<rg>` with your resource group name and `<cluster-name>` with the name of your AKS cluster: `az aks show --query addonProfiles.azurepolicy -g <rg> -n <cluster-name>`. The result should look
-similar to the following output and **config.version** should be `v2`:
+similar to the following output:
```output "addonProfiles": { "azurepolicy": {
- "config": {
- "version": "v2"
- },
"enabled": true, "identity": null },
hdinsight Hdinsight Create Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-create-virtual-network.md
description: Learn how to create an Azure Virtual Network to connect HDInsight t
Previously updated : 04/16/2020 Last updated : 05/12/2021 # Create virtual networks for Azure HDInsight clusters
This example makes the following assumptions:
After completing these steps, you can connect to resources in the virtual network using fully qualified domain names (FQDN). You can now install HDInsight into the virtual network.
+## Test your settings before deploying an HDInsight cluster
+
+Before deploying your cluster, you can check that many of your network configuration settings are correct by running the [networkValidator tool](https://github.com/Azure-Samples/hdinsight-diagnostic-scripts/blob/main/HDInsightNetworkValidator) on a virtual machine in the same VNet and subnet as the planned cluster.
+
+**To deploy a Virtual Machine to run the networkValidator.py script**
+
+1. Open the [Azure portal Ubuntu Server 18.04 LTS page](https://portal.azure.com/?feature.customportal=false#create/Canonical.UbuntuServer1804LTS-ARM) and click **Create**.
+
+1. In the **Basics** tab, under **Project details**, select your subscription, and choose an existing Resource group or create a new one.
+
+ :::image type="content" source="./media/hdinsight-create-virtual-network/project-details.png" alt-text="Screenshot of the Project details section showing where you select the Azure subscription and the resource group for the virtual machine.":::
+
+1. Under **Instance details**, enter a unique **Virtual machine name**, select the same **Region** as your VNet, choose *No infrastructure redundancy required* for **Availability options**, choose *Ubuntu 18.04 LTS* for your **Image**, leave **Azure Spot instance** blank, and choose Standard_B1s (or larger) for the **Size**.
+
+ :::image type="content" source="./media/hdinsight-create-virtual-network/instance-details.png" alt-text="Screenshot of the Instance details section where you provide a name for the virtual machine and select its region, image and size.":::
+
+1. Under **Administrator account**, select **Password** and enter a username and password for the administrator account.
+
+ :::image type="content" source="./media/hdinsight-create-virtual-network/administrator-account.png" alt-text="Screenshot of the Administrator account section where you select an authentication type and provide the administrator credentials.":::
+
+1. Under **Inbound port rules** > **Public inbound ports**, choose **Allow selected ports** and then select **SSH (22)** from the drop-down, and then click **Next: Disks >**
+
+ :::image type="content" source="./media/hdinsight-create-virtual-network/inbound-port-rules.png" alt-text="Screenshot of the inbound port rules section where you select what ports inbound connections are allowed on.":::
+
+1. Under **Disk options**, choose *Standard SSD* for the **OS disk type**, and then click **Next: Networking >**.
+
+1. On the **Networking** page, under **Network interface**, select the **Virtual Network** and the **Subnet** to which you plan to add the HDInsight cluster, and then select the **Review + create** button at the bottom of the page.
+
+ :::image type="content" source="./media/hdinsight-create-virtual-network/vnet.png" alt-text="Screenshot of the network interface section where you select the VNet and subnet in which to add the virtual machine.":::
+
+1. On the **Create a virtual machine** page, you can see the details about the VM you are about to create. When you are ready, select **Create**.
+
+1. When the deployment is finished, select **Go to resource**.
+
+1. On the page for your new VM, select the public IP address and copy it to your clipboard.
+
+ :::image type="content" source="./media/hdinsight-create-virtual-network/ip-address.png" alt-text="Screenshot showing how to copy the IP address for the virtual machine.":::
+
+**Run the networkValidator.py script**
+
+1. SSH to the new virtual machine.
+1. Copy all the files from [GitHub](https://github.com/Azure-Samples/hdinsight-diagnostic-scripts/tree/main/HDInsightNetworkValidator) to the virtual machine with the following command:
+
+ `wget -i https://raw.githubusercontent.com/Azure-Samples/hdinsight-diagnostic-scripts/main/HDInsightNetworkValidator/all.txt`
+
+1. Open the params.txt file in a text editor, and add values to all the variables. Use an empty string ("") when you want to omit the related validation.
+1. Run `sudo chmod +x ./setup.sh` to make setup.sh executable and run it with `sudo ./setup.sh` to install pip for Python 2.x and install the required Python 2.x modules.
+1. Run the main script with `sudo python2 ./networkValidator.py`.
+1. Once the script completes, the Summary section indicates whether the checks were successful and you can create the cluster. If any issues were encountered, review the error output and the related documentation to fix them.
+
+ After you attempt a fix, you can run the script again to check your progress.
+1. After you have completed your checks and the summary says "SUCCESS: You can create your HDInsight cluster in this VNet/Subnet.", you can create your cluster.
+1. Delete the new virtual machine when you are done running the validation script.
+ ## Next steps * For a complete example of configuring HDInsight to connect to an on-premises network, see [Connect HDInsight to an on-premises network](./connect-on-premises-network.md).
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-release-notes.md
The following changes will happen in upcoming releases.
As customer scenarios grow more mature and diverse, we have identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The impact on performance can outweigh the cost benefits of Autoscale.
-Starting from May 15, 2021, the Interactive Query workload in HDInsight only supports schedule-based Autoscale. You can no longer enable Autoscale on new Interactive Query clusters. Existing running clusters can continue to run with the known limitations described above.
+Starting from July 2021, the Interactive Query workload in HDInsight only supports schedule-based Autoscale. You can no longer enable Autoscale on new Interactive Query clusters. Existing running clusters can continue to run with the known limitations described above.
Microsoft recommends that you move to a schedule-based Autoscale for LLAP. You can analyze your cluster's current usage pattern through the Grafana Hive dashboard. For more information, see [Automatically scale Azure HDInsight clusters](hdinsight-autoscale-clusters.md).
hdinsight Apache Spark Structured Streaming Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-structured-streaming-overview.md
These JSON files are stored in the `temps` subfolder underneath the HDInsight c
First configure a DataFrame that describes the source of the data and any settings required by that source. This example draws from the JSON files in Azure Storage and applies a schema to them at read time.
-```sql
+```scala
import org.apache.spark.sql.types._ import org.apache.spark.sql.functions._
val streamingInputDF = spark.readStream.schema(jsonSchema).json(inputPath)
Next, apply a query that contains the desired operations against the Streaming DataFrame. In this case, an aggregation groups all the rows into 1-hour windows, and then computes the minimum, average, and maximum temperatures in that 1-hour window.
-```sql
+```scala
val streamingAggDF = streamingInputDF.groupBy(window($"time", "1 hour")).agg(min($"temp"), avg($"temp"), max($"temp")) ```
val streamingAggDF = streamingInputDF.groupBy(window($"time", "1 hour")).agg(min
Next, define the destination for the rows that are added to the results table within each trigger interval. This example just outputs all rows to an in-memory table `temps` that you can later query with SparkSQL. Complete output mode ensures that all rows for all windows are output every time.
-```sql
+```scala
val streamingOutDF = streamingAggDF.writeStream.format("memory").queryName("temps").outputMode("complete") ```
val streamingOutDF = streamingAggDF.writeStream.format("memory").queryName("temp
Start the streaming query and run until a termination signal is received.
-```sql
+```scala
val query = streamingOutDF.start() ```
iot-central Howto Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-export-data.md
You can add properties to telemetry messages if you need to add custom metadata
The following code snippet shows how to add the `iothub-creation-time-utc` property to the message when you create it on the device:
+> [!IMPORTANT]
+> The format of this timestamp must be UTC with no timezone information. For example, `2021-04-21T11:30:16Z` is valid, `2021-04-21T11:30:16-07:00` is invalid.
+ # [JavaScript](#tab/javascript) ```javascript
iot-hub Iot Hub Csharp Csharp File Upload https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-csharp-csharp-file-upload.md
[!INCLUDE [iot-hub-file-upload-language-selector](../../includes/iot-hub-file-upload-language-selector.md)]
-This tutorial builds on the code in the [Send cloud-to-device messages with IoT Hub](iot-hub-csharp-csharp-c2d.md) tutorial to show you how to use the file upload capabilities of IoT Hub. It shows you how to:
-
-* Securely provide a device with an Azure blob URI for uploading a file.
-
-* Use the IoT Hub file upload notifications to trigger processing the file in your app back end.
+This tutorial shows you how to use the file upload capabilities of IoT Hub, using the .NET file upload sample.
The [Send telemetry from a device to an IoT hub](quickstart-send-telemetry-dotnet.md) quickstart and [Send cloud-to-device messages with IoT Hub](iot-hub-csharp-csharp-c2d.md) tutorial show the basic device-to-cloud and cloud-to-device messaging functionality of IoT Hub. The [Configure Message Routing with IoT Hub](tutorial-routing.md) tutorial describes a way to reliably store device-to-cloud messages in Microsoft Azure Blob storage. However, in some scenarios you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
The [Send telemetry from a device to an IoT hub](quickstart-send-telemetry-dotne
* Some form of preprocessed data
-These files are typically batch processed in the cloud using tools such as [Azure Data Factory](../data-factory/introduction.md) or the [Hadoop](../hdinsight/index.yml) stack. When you need to upload files from a device, you can still use the security and reliability of IoT Hub.
-
-At the end of this tutorial you run two .NET console apps:
-
-* **SimulatedDevice**. This app uploads a file to storage using a SAS URI provided by your IoT hub. It is a modified version of the app created in the [Send cloud-to-device messages with IoT Hub](iot-hub-csharp-csharp-c2d.md) tutorial.
-
-* **ReadFileUploadNotification**. This app receives file upload notifications from your IoT hub.
+These files are typically batch processed in the cloud using tools such as [Azure Data Factory](../data-factory/introduction.md) or the [Hadoop](../hdinsight/index.yml) stack. When you need to upload files from a device, however, you can still use the security and reliability of IoT Hub. This tutorial shows you how.
> [!NOTE]
-> IoT Hub supports many device platforms and languages, including C, Java, Python, and Javascript, through Azure IoT device SDKs. Refer to the [Azure IoT Developer Center](https://azure.microsoft.com/develop/iot) for step-by-step instructions on how to connect your device to Azure IoT Hub.
+> IoT Hub supports many device platforms and languages, including C, Java, Python, and JavaScript, through Azure IoT device SDKs. Refer to the [Azure IoT Developer Center](https://azure.microsoft.com/develop/iot) for step-by-step instructions on how to connect your device to Azure IoT Hub.
[!INCLUDE [iot-hub-include-x509-ca-signed-file-upload-support-note](../../includes/iot-hub-include-x509-ca-signed-file-upload-support-note.md)] ## Prerequisites
-* Visual Studio
+* Visual Studio Code
* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
--
-## Upload a file from a device app
-
-In this section, you modify the device app you created in [Send cloud-to-device messages with IoT Hub](iot-hub-csharp-csharp-c2d.md) to receive cloud-to-device messages from the IoT hub.
-
-1. In Visual Studio Solution Explorer, right-click the **SimulatedDevice** project, and select **Add** > **Existing Item**. Find an image file and include it in your project. This tutorial assumes the image is named `image.jpg`.
+* Download the Azure IoT C# samples from [https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/master.zip](https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/master.zip) and extract the ZIP archive.
-1. Right-click the image, and then select **Properties**. Make sure that **Copy to Output Directory** is set to **Copy always**.
+* Open the *FileUploadSample* folder in Visual Studio Code, and open the *FileUploadSample.cs* file.
- ![Show where to update the image property for Copy to Output Directory](./media/iot-hub-csharp-csharp-file-upload/image-properties.png)
+* Make sure that port 8883 is open in your firewall. The sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
-1. In the **Program.cs** file, add the following statements at the top of the file:
+## Create an IoT hub
- ```csharp
- using System.IO;
- ```
-1. Add the following method to the **Program** class:
+## Associate an Azure Storage Account to your IoT Hub
- ```csharp
- private static async Task SendToBlobAsync(string fileName)
- {
- Console.WriteLine("Uploading file: {0}", fileName);
- var watch = System.Diagnostics.Stopwatch.StartNew();
+You must have an Azure Storage account associated with your IoT hub. To learn how to create one, see [Create a storage account](../storage/common/storage-account-create.md). When you associate an Azure Storage account with an IoT hub, the IoT hub generates a SAS URI. A device can use this SAS URI to securely upload a file to a blob container.
- await deviceClient.GetFileUploadSasUriAsync(new FileUploadSasUriRequest { BlobName = fileName });
- var blob = new CloudBlockBlob(sas.GetBlobUri());
- await blob.UploadFromFileAsync(fileName);
- await deviceClient.CompleteFileUploadAsync(new FileUploadCompletionNotification { CorrelationId = sas.CorrelationId, IsSuccess = true });
+## Create a container
- watch.Stop();
- Console.WriteLine("Time to upload file: {0}ms\n", watch.ElapsedMilliseconds);
- }
- ```
+Follow these steps to create a blob container for your storage account:
- The `UploadToBlobAsync` method takes in the file name and stream source of the file to be uploaded and handles the upload to storage. The console app displays the time it takes to upload the file.
+1. In the left pane of your storage account, under **Data Storage**, select **Containers**.
+1. In the Container blade, select **+ Container**.
+1. In the **New container** pane that opens, give your container a name and select **Create**.
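If you prefer the command line to the preceding portal steps, this minimal Azure PowerShell sketch creates the container. It assumes the Az.Storage module; the account name, key, and container name are placeholders.

```powershell
# Build a context for the storage account associated with your IoT hub (placeholder values).
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"

# Create the blob container that devices will upload files into.
New-AzStorageContainer -Name "fileupload" -Context $ctx
```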
-1. Add the following line in the **Main** method, right before `Console.ReadLine()`:
-
- ```csharp
- await SendToBlobAsync("image.jpg");
- ```
-
-> [!NOTE]
-> For simplicity's sake, this tutorial does not implement any retry policy. In production code, you should implement retry policies, such as exponential backoff, as suggested in [Transient fault handling](/azure/architecture/best-practices/transient-faults).
+After creating a container, follow the instructions in [Configure file uploads using the Azure portal](iot-hub-configure-file-upload.md). Make sure that a blob container is associated with your IoT hub and that file notifications are enabled.
## Get the IoT hub connection string
-In this article, you create a back-end service to receive file upload notification messages from the IoT hub you created in [Send telemetry from a device to an IoT hub](quickstart-send-telemetry-dotnet.md). To receive file upload notification messages, your service needs the **service connect** permission. By default, every IoT Hub is created with a shared access policy named **service** that grants this permission.
- [!INCLUDE [iot-hub-include-find-service-connection-string](../../includes/iot-hub-include-find-service-connection-string.md)]
-## Receive a file upload notification
-
-In this section, you write a .NET console app that receives file upload notification messages from IoT Hub.
-
-1. In the current Visual Studio solution, select **File** > **New** > **Project**. In **Create a new project**, select **Console App (.NET Framework)**, and then select **Next**.
+## Examine the Application
-1. Name the project *ReadFileUploadNotification*. Under **Solution**, select **Add to solution**. Select **Create** to create the project.
+Navigate to the *FileUploadSample* folder in your .NET samples download. Open the folder in Visual Studio Code. The folder contains a file named *parameters.cs*. If you open that file, you'll see that the parameter *p* is required and contains the connection string. The parameter *t* can be specified if you want to change the transport protocol. The default protocol is mqtt. The file *program.cs* contains the *main* function. The *FileUploadSample.cs* file contains the primary sample logic. *TestPayload.txt* is the file to be uploaded to your blob container.
- ![Configure the ReadFileUploadNotification project in Visual Studio](./media/iot-hub-csharp-csharp-file-upload/read-file-upload-project-configure.png)
+## Run the application
-1. In Solution Explorer, right-click the **ReadFileUploadNotification** project, and select **Manage NuGet Packages**.
+Now you are ready to run the application.
-1. In **NuGet Package Manager**, select **Browse**. Search for and select **Microsoft.Azure.Devices**, and then select **Install**.
-
- This step downloads, installs, and adds a reference to the [Azure IoT service SDK NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Devices/) in the **ReadFileUploadNotification** project.
-
-1. In the **Program.cs** file for this project, add the following statement at the top of the file:
-
- ```csharp
- using Microsoft.Azure.Devices;
+1. Open a terminal window in Visual Studio Code.
+1. Type the following commands:
+ ```cmd/sh
+ dotnet restore
+ dotnet run --p "{Your connection string}"
```
-1. Add the following fields to the **Program** class. Replace the `{iot hub connection string}` placeholder value with the IoT hub connection string that you copied previously in [Get the IoT hub connection string](#get-the-iot-hub-connection-string):
-
- ```csharp
- static ServiceClient serviceClient;
- static string connectionString = "{iot hub connection string}";
- ```
-
-1. Add the following method to the **Program** class:
-
- ```csharp
- private async static void ReceiveFileUploadNotificationAsync()
- {
- var notificationReceiver = serviceClient.GetFileNotificationReceiver();
-
- Console.WriteLine("\nReceiving file upload notification from service");
- while (true)
- {
- var fileUploadNotification = await notificationReceiver.ReceiveAsync();
- if (fileUploadNotification == null) continue;
-
- Console.ForegroundColor = ConsoleColor.Yellow;
- Console.WriteLine("Received file upload notification: {0}",
- string.Join(", ", fileUploadNotification.BlobName));
- Console.ResetColor();
-
- await notificationReceiver.CompleteAsync(fileUploadNotification);
- }
- }
- ```
-
- Note this receive pattern is the same one used to receive cloud-to-device messages from the device app.
-
-1. Finally, add the following lines to the **Main** method:
-
- ```csharp
- Console.WriteLine("Receive file upload notifications\n");
- serviceClient = ServiceClient.CreateFromConnectionString(connectionString);
- ReceiveFileUploadNotificationAsync();
- Console.WriteLine("Press Enter to exit\n");
- Console.ReadLine();
- ```
-
-## Run the applications
-
-Now you are ready to run the applications.
+The output should resemble the following:
-1. In Solutions Explorer, right-click your solution, and select **Set StartUp Projects**.
+```cmd/sh
+ Uploading file TestPayload.txt
+ Getting SAS URI from IoT Hub to use when uploading the file...
+ Successfully got SAS URI (https://contosostorage.blob.core.windows.net/contosocontainer/MyDevice%2FTestPayload.txt?sv=2018-03-28&sr=b&sig=x0G1Baf%2BAjR%2BTg3nW34zDNKs07p6dLzkxvZ3ZSmjIhw%3D&se=2021-05-04T16%3A40%3A52Z&sp=rw) from IoT Hub
+ Uploading file TestPayload.txt using the Azure Storage SDK and the retrieved SAS URI for authentication
+ Successfully uploaded the file to Azure Storage
+ Notified IoT Hub that the file upload succeeded and that the SAS URI can be freed.
+ Time to upload file: 00:00:01.5077954.
+ Done.
+```
-1. In **Common Properties** > **Startup Project**, select **Multiple startup projects**, then select the **Start** action for **ReadFileUploadNotification** and **SimulatedDevice**. Select **OK** to save your changes.
+## Verify the file upload
-1. Press **F5**. Both applications should start. You should see the upload completed in one console app and the upload notification message received by the other console app. You can use the [Azure portal](https://portal.azure.com/) or Visual Studio Server Explorer to check for the presence of the uploaded file in your Azure Storage account.
+Perform the following steps to verify that *TestPayload.txt* was uploaded to your container:
- ![Screenshot showing the output screen](./media/iot-hub-csharp-csharp-file-upload/run-apps1.png)
+1. In the left pane of your storage account, select **Containers** under **Data Storage**.
+1. Select the container to which you uploaded *TestPayload.txt*.
+1. Select the folder named after your device.
+1. Select *TestPayload.txt*.
+1. Download the file to view its contents locally.
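Alternatively, you can check for the uploaded blob from the command line. The following is a minimal Azure PowerShell sketch that assumes the Az.Storage module; the account name, key, container, and device ID prefix are placeholders.

```powershell
# List blobs uploaded under the device's folder (placeholder values).
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"
Get-AzStorageBlob -Container "<container>" -Prefix "<device-id>/" -Context $ctx |
    Select-Object Name, Length, LastModified
```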
## Next steps
iot-hub Iot Hub Devguide Messages Construct https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-messages-construct.md
For more information about how to encode and decode messages sent using differen
| dt-dataschema | This value is set by IoT hub on device-to-cloud messages. It contains the device model ID set in the device connection. | No | $dt-dataschema | | dt-subject | The name of the component that is sending the device-to-cloud messages. | Yes | $dt-subject |
+## Application Properties of **D2C** IoT Hub messages
+
+A common use of application properties is to send a timestamp from the device using the `iothub-creation-time-utc` property to record when the message was sent by the device. The format of this timestamp must be UTC with no timezone information. For example, `2021-04-21T11:30:16Z` is valid, `2021-04-21T11:30:16-07:00` is invalid:
+
+```json
+{
+ "applicationId":"5782ed70-b703-4f13-bda3-1f5f0f5c678e",
+ "messageSource":"telemetry",
+ "deviceId":"sample-device-01",
+ "schema":"default@v1",
+ "templateId":"urn:modelDefinition:mkuyqxzgea:e14m1ukpn",
+ "enqueuedTime":"2021-01-29T16:45:39.143Z",
+ "telemetry":{
+ "temperature":8.341033560421833
+ },
+ "messageProperties":{
+ "iothub-creation-time-utc":"2021-01-29T16:45:39.021Z"
+ },
+ "enrichments":{}
+}
+```
+ ## System Properties of **C2D** IoT Hub messages | Property | Description |User Settable?|
iot-hub Iot Hub Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-managed-identity.md
In IoT Hub, managed identities can be used for egress connectivity from IoT Hub
### Enable managed identity at hub creation time using ARM template
-To enable the system-assigned managed identity in your IoT hub at resource provisioning time, use the Azure Resource Manager (ARM) template below. This ARM template has two required resources, and they both need to be deployed before creating other resources like `Microsoft.Devices/IotHubs/eventHubEndpoints/ConsumerGroups`.
+To enable the system-assigned managed identity in your IoT hub at resource provisioning time, use the Azure Resource Manager (ARM) template below.
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
- "resources": [
+ "parameters":
{
- "type": "Microsoft.Devices/IotHubs",
- "apiVersion": "2020-03-01",
- "name": "<provide-a-valid-resource-name>",
- "location": "<any-of-supported-regions>",
- "identity": {
- "type": "SystemAssigned"
+ "iotHubName": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of iothub resource"
+ }
+ },
+ "skuName": {
+ "type": "string",
+ "defaultValue": "S1",
+ "metadata": {
+ "description": "SKU name of iothub resource, by default is Standard S1"
+ }
},
- "sku": {
- "name": "<your-hubs-SKU-name>",
- "tier": "<your-hubs-SKU-tier>",
- "capacity": 1
+ "skuTier": {
+ "type": "string",
+ "defaultValue": "Standard",
+ "metadata": {
+ "description": "SKU tier of iothub resource, by default is Standard"
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location of iothub resource. Please provide any of supported-regions of iothub"
+ }
} },
+ "resources": [
{ "type": "Microsoft.Resources/deployments",
- "apiVersion": "2018-02-01",
+ "apiVersion": "2020-10-01",
"name": "createIotHub",
- "dependsOn": [
- "[resourceId('Microsoft.Devices/IotHubs', '<provide-a-valid-resource-name>')]"
- ],
"properties": { "mode": "Incremental", "template": { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "0.9.0.0",
+ "contentVersion": "1.0.0.0",
"resources": [ { "type": "Microsoft.Devices/IotHubs",
- "apiVersion": "2020-03-01",
- "name": "<provide-a-valid-resource-name>",
- "location": "<any-of-supported-regions>",
+ "apiVersion": "2021-03-31",
+ "name": "[parameters('iotHubName')]",
+ "location": "[parameters('location')]",
"identity": { "type": "SystemAssigned" }, "sku": {
- "name": "<your-hubs-SKU-name>",
- "tier": "<your-hubs-SKU-tier>",
- "capacity": 1
+ "name": "[parameters('skuName')]",
+ "tier": "[parameters('skuTier')]",
+ "capacity": 1
} }
- ]
+ ]
} } }
iot-hub Iot Hub Mqtt 5 Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-mqtt-5-reference.md
Post message to telemetry channel - EventHubs by default or other endpoint via r
| correlation-id | string | no | translates into `correlation-id` system property on posted message | | creation-time | time | no | translates into `iothub-creation-time-utc` property on posted message |
+> [!TIP]
+> The format of `creation-time` must be UTC with no timezone information. For example, `2021-04-21T11:30:16Z` is valid, `2021-04-21T11:30:16-07:00` is invalid.
+ **Payload**: any byte sequence #### Success Acknowledgment
iot-hub Quickstart Send Telemetry C https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-send-telemetry-c.md
Title: Quickstart - Send telemetry to Azure IoT Hub quickstart (C) | Microsoft Docs
-description: In this quickstart, you run two sample C applications to send simulated telemetry to an IoT hub and to read telemetry from the IoT hub for processing in the cloud.
+description: In this quickstart, you run two sample C applications in Windows to send simulated telemetry to an IoT hub and to read telemetry from the IoT hub for processing in the cloud.
IoT Hub is an Azure service that enables you to ingest high volumes of telemetry
The quickstart uses a C sample application from the [Azure IoT device SDK for C](iot-hub-device-sdk-c-intro.md) to send telemetry to an IoT hub. The Azure IoT device SDKs are written in [ANSI C (C99)](https://wikipedia.org/wiki/C99) for portability and broad platform compatibility. Before running the sample code, you will create an IoT hub and register the simulated device with that hub.
-This article is written for Windows, but you can complete this quickstart on Linux as well.
- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] ## Prerequisites
+# [Windows](#tab/windows)
+ * Install [Visual Studio 2019](https://www.visualstudio.com/vs/) with the ['Desktop development with C++'](https://www.visualstudio.com/vs/support/selecting-workloads-visual-studio-2017/) workload enabled. * Install the latest version of [Git](https://git-scm.com/download/).
This article is written for Windows, but you can complete this quickstart on Lin
[!INCLUDE [iot-hub-cli-version-info](../../includes/iot-hub-cli-version-info.md)]
+# [Linux](#tab/linux)
+
+* Install all dependencies before building the SDK.
+
+ ```bash
+ sudo apt-get update
+ sudo apt-get install -y git cmake build-essential curl libcurl4-openssl-dev libssl-dev uuid-dev ca-certificates
+ ```
+
+* Verify that CMake is at least version 2.8.12:
+
+ ```bash
+ cmake --version
+ ```
+
+* Verify that gcc is at least version 4.4.7:
+
+ ```bash
+ gcc --version
+ ```
+
+* Make sure that port 8883 is open in your firewall. The device sample in this quickstart uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+++ ## Prepare the development environment For this quickstart, you'll be using the [Azure IoT device SDK for C](iot-hub-device-sdk-c-intro.md). For the following environments, you can use the SDK by installing these packages and libraries:
-* **Linux**: apt-get packages are available for Ubuntu 16.04 and 18.04 using the following CPU architectures: amd64, arm64, armhf, and i386. For more information, see [Using apt-get to create a C device client project on Ubuntu](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md#set-up-a-linux-development-environment).
- * **mbed**: For developers creating device applications on the mbed platform, we've published a library and samples that will get you started in minutes witH Azure IoT Hub. For more information, see [Use the mbed library](https://github.com/Azure/azure-iot-sdk-c/blob/master/iothub_client/readme.md#mbed). * **Arduino**: If you're developing on Arduino, you can leverage the Azure IoT library available in the Arduino IDE library manager. For more information, see [The Azure IoT Hub library for Arduino](https://github.com/azure/azure-iot-arduino).
For the following environments, you can use the SDK by installing these packages
However, in this quickstart, you'll prepare a development environment used to clone and build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) from GitHub. The SDK on GitHub includes the sample code used in this quickstart.
+# [Windows](#tab/windows)
+ 1. Download the [CMake build system](https://cmake.org/download/). It is important that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system.
However, in this quickstart, you'll prepare a development environment used to cl
-- Build files have been written to: E:/IoT Testing/azure-iot-sdk-c/cmake ```
+# [Linux](#tab/linux)
+
+1. Find the tag name for the [latest release](https://github.com/Azure/azure-iot-sdk-c/releases/latest) of the SDK.
+
+1. Open a shell command prompt. Run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. Use the tag you found in the previous step as the value for the `-b` parameter:
+
+ ```bash
+ git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git
+ cd azure-iot-sdk-c
+ git submodule update --init
+ ```
+
+1. Create a `cmake` subdirectory in the root directory of the git repository, and navigate to that folder. Run the following commands from the `azure-iot-sdk-c` directory:
+
+ ```bash
+ mkdir cmake
+ cd cmake
+ ```
+
+1. Run the following command to build a version of the SDK specific to your development client platform.
+
+ ```bash
+ cmake ..
+ cmake --build .
+ ```
+++

## Create an IoT hub

[!INCLUDE [iot-hub-include-create-hub](../../includes/iot-hub-include-create-hub.md)]
A device must be registered with your IoT hub before it can connect. In this sec
The simulated device application connects to a device-specific endpoint on your IoT hub and sends a string as simulated telemetry.
+# [Windows](#tab/windows)
+ 1. Using a text editor, open the iothub_convenience_sample.c source file and review the sample code for sending telemetry. The file is located in the following location under the working directory where you cloned the Azure IoT C SDK: ```
The simulated device application connects to a device-specific endpoint on your
![Run the simulated device](media/quickstart-send-telemetry-c/simulated-device-app.png)
+# [Linux](#tab/linux)
+
+1. Using a text editor, open the iothub_convenience_sample.c source file and review the sample code for sending telemetry. The file is located in the following location under the working directory where you cloned the Azure IoT C SDK:
+
+ ```
+ azure-iot-sdk-c/iothub_client/samples/iothub_convenience_sample/iothub_convenience_sample.c
+ ```
+
+1. Find the declaration of the `connectionString` constant:
+
+ ```c
+ /* Paste in your device connection string */
+ static const char* connectionString = "[device connection string]";
+ ```
+
+ Replace the value of the `connectionString` constant with the device connection string you made a note of earlier. Then save your changes to **iothub_convenience_sample.c**.
+
+1. In a local terminal window, navigate to the *iothub_convenience_sample* project directory in the CMake directory that you created in the Azure IoT C SDK. Enter the following command from your working directory:
+
+ ```bash
+ cd azure-iot-sdk-c/cmake/iothub_client/samples/iothub_convenience_sample
+ ```
+
+1. Run CMake in your local terminal window to build the sample with your updated `connectionString` value:
+
+ ```bash
+ cmake --build . --target iothub_convenience_sample
+ ```
+
+1. In your local terminal window, run the following command to run the simulated device application:
+
+ ```bash
+ ./iothub_convenience_sample
+ ```
+
+ The following screenshot shows the output as the simulated device application sends telemetry to the IoT hub:
+
+ ![Run the simulated device](media/quickstart-send-telemetry-c/simulated-device-app.png)
+++

## Read the telemetry from your hub

In this section, you'll use the Azure Cloud Shell with the [IoT extension](/cli/azure/iot) to monitor the device messages that are sent by the simulated device.
iot-hub Quickstart Send Telemetry Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-send-telemetry-python.md
Last updated 06/16/2020
[!INCLUDE [iot-hub-quickstarts-1-selector](../../includes/iot-hub-quickstarts-1-selector.md)]
-In this quickstart, you send telemetry from a simulated device application through Azure IoT Hub to a back-end application for processing. IoT Hub is an Azure service that enables you to ingest high volumes of telemetry from your IoT devices into the cloud for storage or processing. This quickstart uses two pre-written Python applications: one to send the telemetry and one to read the telemetry from the hub. Before you run these two applications, you create an IoT hub and register a device with the hub.
+In this quickstart, you send telemetry from a simulated device application through Azure IoT Hub to a back-end application for processing. IoT Hub is an Azure service that enables you to ingest high volumes of telemetry from your IoT devices into the cloud for storage or processing. This quickstart uses two pre-written Python applications: one to send the telemetry and one to read the telemetry from the hub. Note that there are synchronous and asynchronous versions of the application to send telemetry. Before you run any of these applications, you create an IoT hub and register a device with the hub.
## Prerequisites
A device must be registered with your IoT hub before it can connect. In this qui
The simulated device application connects to a device-specific endpoint on your IoT hub and sends simulated temperature and humidity telemetry.
+> [!NOTE]
+> The following steps use the synchronous sample, **SimulatedDeviceSync.py**. You can perform the same steps with the asynchronous sample, **SimulatedDeviceAsync.py**.
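For orientation, here's a minimal sketch of the asynchronous pattern, assuming the `azure-iot-device` package is installed. It's illustrative only and isn't the exact contents of **SimulatedDeviceAsync.py**:

```python
import asyncio
import os

from azure.iot.device.aio import IoTHubDeviceClient


async def main():
    # The samples read the device connection string from an environment
    # variable named "ConnectionString" (see the steps that follow).
    conn_str = os.environ["ConnectionString"]
    client = IoTHubDeviceClient.create_from_connection_string(conn_str)

    await client.connect()
    # Send one simulated telemetry message as a JSON string.
    await client.send_message('{"temperature": 22.5, "humidity": 60.0}')
    await client.disconnect()


if __name__ == "__main__":
    asyncio.run(main())
```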
+ 1. Download or clone the azure-iot-samples-python repository using the **Code** button on the [azure-iot-samples-python repository page](https://github.com/Azure-Samples/azure-iot-samples-python/).
-1. In a local terminal window, navigate to the root folder of the sample Python project. Then navigate to the **iot-hub\Quickstarts\simulated-device** folder.
+1. In a local terminal window, navigate to the root folder of the sample Python project. Then navigate to the **iot-hub\Quickstarts\simulated-device** folder. Both the synchronous and asynchronous samples are located in the same folder.
-1. Open the **SimulatedDevice.py** file in a text editor of your choice.
+1. Open the **SimulatedDeviceSync.py** file in a text editor of your choice.
- Replace the value of the `CONNECTION_STRING` variable with the device connection string you made a note of earlier. Then save your changes to **SimulatedDevice.py**.
+1. Create an environment variable that contains your connection string and restart the editor to pick up the new variable. The environment variable should be named *ConnectionString* to match the sample code.
-1. In the local terminal window, run the following commands to install the required libraries for the simulated device application:
+1. In the local terminal window, run the following command to install the required libraries for the simulated device application:
```cmd/sh pip install azure-iot-device ```
-1. In the local terminal window, run the following commands to run the simulated device application:
+1. In the local terminal window, run the following command to run the simulated device application:
```cmd/sh
- python SimulatedDevice.py
+ python SimulatedDeviceSync.py
``` The following screenshot shows the output as the simulated device application sends telemetry to your IoT hub:
The back-end application connects to the service-side **Events** endpoint on you
| `EVENTHUB_COMPATIBLE_PATH` | Replace the value of the variable with the Event Hubs-compatible path you made a note of earlier. | | `IOTHUB_SAS_KEY` | Replace the value of the variable with the service primary key you made a note of earlier. |
-3. In the local terminal window, run the following commands to install the required libraries for the back-end application:
+3. In the local terminal window, run the following command to install the required libraries for the back-end application:
```cmd/sh pip install azure-eventhub ```
-4. In the local terminal window, run the following commands to build and run the back-end application:
+4. In the local terminal window, run the following command to build and run the back-end application:
```cmd/sh python read_device_to_cloud_messages_sync.py
key-vault Basic Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/basic-concepts.md
To do any operations with Key Vault, you first need to authenticate to it. There
- **Service principal and certificate**: You can use a service principal and an associated certificate that has access to Key Vault. We don't recommend this approach because the application owner or developer must rotate the certificate. - **Service principal and secret**: Although you can use a service principal and a secret to authenticate to Key Vault, we don't recommend it. It's hard to automatically rotate the bootstrap secret that's used to authenticate to Key Vault.
+## Encryption of data in transit
+
+Azure Key Vault enforces the [Transport Layer Security](https://en.wikipedia.org/wiki/Transport_Layer_Security) (TLS) protocol to protect data when it's traveling between Azure Key Vault and clients. Clients negotiate a TLS connection with Azure Key Vault. TLS provides strong authentication, message privacy, and integrity (enabling detection of message tampering, interception, and forgery), interoperability, algorithm flexibility, and ease of deployment and use.
+
+[Perfect Forward Secrecy](https://en.wikipedia.org/wiki/Forward_secrecy) (PFS) protects connections between customers' client systems and Microsoft cloud services by using unique keys. Connections also use RSA-based 2,048-bit encryption key lengths. This combination makes it difficult for someone to intercept and access data that is in transit.
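To see the negotiated protocol for yourself, the following Python sketch (standard library only; the vault hostname is a placeholder) opens a TLS connection to a Key Vault endpoint and prints the protocol version and cipher suite:

```python
import socket
import ssl

# Placeholder - replace with your vault's hostname,
# for example "my-vault.vault.azure.net".
host = "<your-key-vault-name>.vault.azure.net"

context = ssl.create_default_context()  # validates the server certificate
with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version())  # negotiated TLS version, for example "TLSv1.2"
        print(tls.cipher())   # negotiated cipher suite
```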
## Key Vault roles
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/quick-create-net.md
Last updated 09/23/2020
-+ # Quickstart: Azure Key Vault key client library for .NET (SDK v4)
For more information about Key Vault and keys, see:
* [Azure CLI](/cli/azure/install-azure-cli) * A Key Vault - you can create one using [Azure portal](../general/quick-create-portal.md), [Azure CLI](../general/quick-create-cli.md), or [Azure PowerShell](../general/quick-create-powershell.md).
+This quickstart uses `dotnet` and the Azure CLI.
+ ## Setup
-This quickstart is using Azure Identity library to authenticate user to Azure Services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls, for more information, see [Authenticate the client with Azure Identity client library](/dotnet/api/overview/azure/identity-readme?#authenticate-the-client&preserve-view=true).
+This quickstart uses the Azure Identity library with the Azure CLI to authenticate the user to Azure services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls. For more information, see [Authenticate the client with Azure Identity client library](/dotnet/api/overview/azure/identity-readme?#authenticate-the-client&preserve-view=true).
### Sign in to Azure 1. Run the `login` command.
- # [Azure CLI](#tab/azure-cli)
```azurecli-interactive az login ```
- # [Azure PowerShell](#tab/azurepowershell)
-
- ```azurepowershell-interactive
- Connect-AzAccount
- ```
-
- If Azure CLI or Azure PowerShell can open your default browser, it will do so and load an Azure sign-in page.
+ If the CLI can open your default browser, it will do so and load an Azure sign-in page.
Otherwise, open a browser page at [https://aka.ms/devicelogin](https://aka.ms/devicelogin) and enter the authorization code displayed in your terminal.
This quickstart is using Azure Identity library to authenticate user to Azure Se
Create an access policy for your key vault that grants key permissions to your user account
-# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```console
az keyvault set-policy --name <your-key-vault-name> --upn user@domain.com --key-permissions delete get list create purge ```
-# [Azure PowerShell](#tab/azurepowershell)
-
-```azurepowershell-interactive
-Set-AzKeyVaultAccessPolicy -VaultName <your-key-vault-name> -UserPrincipalName user@domain.com -PermissionsToSecrets delete,get,list,set,purge
-```
- ### Create new .NET console app
Windows
set KEY_VAULT_NAME=<your-key-vault-name> ```` Windows PowerShell
-```azurepowershell
+```powershell
$Env:KEY_VAULT_NAME="<your-key-vault-name>" ```
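If you want to sanity-check the variable and your vault access from a script before writing the .NET code, here's a hedged Python sketch. It assumes the `azure-identity` and `azure-keyvault-keys` packages are installed and isn't part of this quickstart's .NET sample:

```python
import os

from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

# Build the vault URI from the same KEY_VAULT_NAME variable set above.
vault_uri = f"https://{os.environ['KEY_VAULT_NAME']}.vault.azure.net"

# DefaultAzureCredential picks up your Azure CLI sign-in, among other sources.
client = KeyClient(vault_url=vault_uri, credential=DefaultAzureCredential())

# Listing key names confirms that the access policy grants at least "list".
for key in client.list_properties_of_keys():
    print(key.name)
```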
To learn more about Key Vault and how to integrate it with your apps, see the fo
- See an [Access Key Vault from App Service Application Tutorial](../general/tutorial-net-create-vault-azure-web-app.md) - See an [Access Key Vault from Virtual Machine Tutorial](../general/tutorial-net-virtual-machine.md) - See the [Azure Key Vault developer's guide](../general/developers-guide.md)-- Review the [Key Vault security overview](../general/security-features.md)
+- Review the [Key Vault security overview](../general/security-overview.md)
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
+
+ Title: Create workflows using single-tenant Azure Logic Apps (portal)
+description: Create automated workflows that integrate apps, data, services, and systems using single-tenant Azure Logic Apps and the Azure portal.
+
+ms.suite: integration
++ Last updated : 05/10/2021++
+# Create an integration workflow using single-tenant Azure Logic Apps and the Azure portal (preview)
+
+> [!IMPORTANT]
+> This capability is in preview and is subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+This article shows how to create an example automated integration workflow that runs in the *single-tenant Logic Apps environment* by using the new **Logic App (Preview)** resource type. While this example workflow is cloud-based and has only two steps, you can create workflows from hundreds of operations that can connect a wide range of apps, data, services, and systems across cloud, on premises, and hybrid environments. If you're new to single-tenant Logic Apps and the **Logic App (Preview)** resource type, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
+
+The example workflow starts with the built-in Request trigger and follows with an Office 365 Outlook action. The trigger creates a callable endpoint for the workflow and waits for an inbound HTTPS request from any caller. When the trigger receives a request and fires, the next action runs by sending email to the specified email address along with selected outputs from the trigger.
+
+> [!TIP]
+> If you don't have an Office 365 account, you can use any other available action that can send
+> messages from your email account, for example, Outlook.com.
+>
+> To create this example workflow using Visual Studio Code instead, follow the steps in
+> [Create integration workflows using single tenant Azure Logic Apps and Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md). Both options provide the capability
+> to develop, run, and deploy logic app workflows in the same kinds of environments. However, with
+> Visual Studio Code, you can *locally* develop, test, and run workflows in your development environment.
+
+![Screenshot that shows the Azure portal with the workflow designer for the "Logic App (Preview)" resource.](./media/create-single-tenant-workflows-azure-portal/azure-portal-logic-apps-overview.png)
+
+As you progress, you'll complete these high-level tasks:
+
+* Create the logic app resource and add a blank [*stateful*](single-tenant-overview-compare.md#stateful-stateless) workflow.
+* Add a trigger and action.
+* Trigger a workflow run.
+* View the workflow's run and trigger history.
+* Enable or open the Application Insights after deployment.
+* Enable run history for stateless workflows.
+
+For more information, review the following documentation:
+
+* [What is Azure Logic Apps?](logic-apps-overview.md)
+* [What is the single-tenant Logic Apps environment?](single-tenant-overview-compare.md)
+
+## Prerequisites
+
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* An [Azure Storage account](../storage/common/storage-account-overview.md). If you don't have one, you can either create a storage account in advance or during logic app creation.
+
+ > [!NOTE]
+ > The **Logic App (Preview)** resource type is powered by Azure Functions and has [storage requirements similar to function apps](../azure-functions/storage-considerations.md).
+ > [Stateful workflows](single-tenant-overview-compare.md#stateful-stateless) perform storage transactions, such as
+ > using queues for scheduling and storing workflow states in tables and blobs. These transactions incur
+ > [storage charges](https://azure.microsoft.com/pricing/details/storage/). For more information about
+ > how stateful workflows store data in external storage, review [Stateful and stateless workflows](single-tenant-overview-compare.md#stateful-stateless).
+
+* To deploy to a Docker container, you need an existing Docker container image.
+
+ For example, you can create this image through [Azure Container Registry](../container-registry/container-registry-intro.md), [App Service](../app-service/overview.md), or [Azure Container Instance](../container-instances/container-instances-overview.md).
+
+* To create the same example workflow in this article, you need an Office 365 Outlook email account that uses a Microsoft work or school account to sign in.
+
+ If you choose a [different email connector](/connectors/connector-reference/connector-reference-logicapps-connectors), such as Outlook.com, you can still follow the example, and the general overall steps are the same. However, your options might differ in some ways. For example, if you use the Outlook.com connector, use your personal Microsoft account instead to sign in.
+
+* To test the example workflow in this article, you need a tool that can send calls to the endpoint created by the Request trigger. If you don't have such a tool, you can download, install, and use [Postman](https://www.postman.com/downloads/).
+
+* If you create your logic app resources with settings that support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app. You can do so either when you create your logic app or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you create your logic app, or after deployment.
+
+## Create the logic app resource
+
+1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account credentials.
+
+1. In the Azure portal search box, enter `logic apps`, and select **Logic apps**.
+
+ ![Screenshot that shows the Azure portal search box with the "logic app preview" search term and the "Logic App (Preview)" resource selected.](./media/create-single-tenant-workflows-azure-portal/find-logic-app-resource-template.png)
+
+1. On the **Logic apps** page, select **Add** > **Preview**.
+
+ This step creates a logic app resource that runs in the single-tenant Logic Apps environment and uses the [preview (single-tenant) pricing model](logic-apps-pricing.md#preview-pricing).
+
+1. On the **Create Logic App** page, on the **Basics** tab, provide the following information about your logic app resource:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Subscription** | Yes | <*Azure-subscription-name*> | The Azure subscription to use for your logic app. |
+ | **Resource Group** | Yes | <*Azure-resource-group-name*> | The Azure resource group where you create your logic app and related resources. This resource name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <p><p>This example creates a resource group named `Fabrikam-Workflows-RG`. |
+ | **Logic App name** | Yes | <*logic-app-name*> | The name to use for your logic app. This resource name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <p><p>This example creates a logic app named `Fabrikam-Workflows`. <p><p>**Note**: Your logic app's name automatically gets the suffix, `.azurewebsites.net`, because the **Logic App (Preview)** resource is powered by Azure Functions, which uses the same app naming convention. |
+ | **Publish** | Yes | <*deployment-environment*> | The deployment destination for your logic app. <p><p>- **Workflow**: Deploy to single-tenant Azure Logic Apps in the portal. <p><p>- **Docker Container**: Deploy to a container. If you don't have a container, first create your Docker container image. That way, after you select **Docker Container**, you can [specify the container that you want to use when creating your logic app](#set-docker-container). For example, you can create this image through [Azure Container Registry](../container-registry/container-registry-intro.md), [App Service](../app-service/overview.md), or [Azure Container Instance](../container-instances/container-instances-overview.md). <p><p>This example continues with the **Workflow** option. |
+ | **Region** | Yes | <*Azure-region*> | The Azure region to use when creating your resource group and resources. <p><p>This example uses **West US**. |
+ |||||
+
+ Here's an example:
+
+ ![Screenshot that shows the Azure portal and "Create Logic App" page.](./media/create-single-tenant-workflows-azure-portal/create-logic-app-resource-portal.png)
+
+1. On the **Hosting** tab, provide the following information about the storage solution and hosting plan to use for your logic app.
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Storage account** | Yes | <*Azure-storage-account-name*> | The [Azure Storage account](../storage/common/storage-account-overview.md) to use for storage transactions. This resource name must be unique across regions and have 3-24 characters with only numbers and lowercase letters. Either select an existing account or create a new account. <p><p>This example creates a storage account named `fabrikamstorageacct`. |
+ | **Plan type** | Yes | <*Azure-hosting-plan*> | The [hosting plan](../app-service/overview-hosting-plans.md) to use for deploying your logic app, which is either [**Functions Premium**](../azure-functions/functions-premium-plan.md) or [**App service plan**](../azure-functions/dedicated-plan.md). Your choice affects the capabilities and pricing tiers that are later available to you. <p><p>This example uses the **App service plan**. <p><p>**Note**: Similar to Azure Functions, the **Logic App (Preview)** resource type requires a hosting plan and pricing tier. Consumption plans aren't supported or available for this resource type. For more information, review the following documentation: <p><p>- [Azure Functions scale and hosting](../azure-functions/functions-scale.md) <br>- [App Service pricing details](https://azure.microsoft.com/pricing/details/app-service/) <p><p>For example, the Functions Premium plan provides access to networking capabilities, such as connect and integrate privately with Azure virtual networks, similar to Azure Functions when you create and deploy your logic apps. For more information, review the following documentation: <p><p>- [Azure Functions networking options](../azure-functions/functions-networking-options.md) <br>- [Azure Logic Apps Running Anywhere - Networking possibilities with Azure Logic Apps Preview](https://techcommunity.microsoft.com/t5/integrations-on-azure/logic-apps-anywhere-networking-possibilities-with-logic-app/ba-p/2105047) |
+ | **Windows Plan** | Yes | <*plan-name*> | The plan name to use. Either select an existing plan or provide the name for a new plan. <p><p>This example uses the name `Fabrikam-Service-Plan`. |
+ | **SKU and size** | Yes | <*pricing-tier*> | The [pricing tier](../app-service/overview-hosting-plans.md) to use for hosting your logic app. Your choices are affected by the plan type that you previously chose. To change the default tier, select **Change size**. You can then select other pricing tiers, based on the workload that you need. <p><p>This example uses the free **F1 pricing tier** for **Dev / Test** workloads. For more information, review [App Service pricing details](https://azure.microsoft.com/pricing/details/app-service/). |
+ |||||
+
+1. Next, if your creation and deployment settings support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app.
+
+ 1. On the **Monitoring** tab, under **Application Insights**, set **Enable Application Insights** to **Yes** if not already selected.
+
+ 1. For the **Application Insights** setting, either select an existing Application Insights instance, or if you want to create a new instance, select **Create new** and provide the name that you want to use.
+
+1. After Azure validates your logic app's settings, on the **Review + create** tab, select **Create**.
+
+ For example:
+
+ ![Screenshot that shows the Azure portal and new logic app resource settings.](./media/create-single-tenant-workflows-azure-portal/check-logic-app-resource-settings.png)
+
+ > [!TIP]
+ > If you get a validation error after you select **Create**, open and review the error details.
+ > For example, if your selected region reaches a quota for resources that you're trying to create,
+ > you might have to try a different region.
+
+ After Azure finishes deployment, your logic app is automatically live and running but doesn't do anything yet because the resource is empty, and no workflows exist yet.
+
+1. On the deployment completion page, select **Go to resource** so that you can add a blank workflow. If you selected **Docker Container** for deploying your logic app, continue with the [steps to provide information about that Docker container](#set-docker-container).
+
+ ![Screenshot that shows the Azure portal and the finished deployment.](./media/create-single-tenant-workflows-azure-portal/logic-app-completed-deployment.png)
+
+<a name="set-docker-container"></a>
+
+## Specify Docker container for deployment
+
+Before you start these steps, you need a Docker container image. For example, you can create this image through [Azure Container Registry](../container-registry/container-registry-intro.md), [App Service](../app-service/overview.md), or [Azure Container Instance](../container-instances/container-instances-overview.md). You can then provide information about your Docker container after you create your logic app.
+
+1. In the Azure portal, go to your logic app resource.
+
+1. On the logic app menu, under **Settings**, select **Deployment Center**.
+
+1. On the **Deployment Center** pane, follow the instructions for providing and managing the details for your Docker container.
+
+<a name="add-workflow"></a>
+
+## Add a blank workflow
+
+1. After Azure opens the resource, on your logic app's menu, select **Workflows**. On the **Workflows** toolbar, select **Add**.
+
+ ![Screenshot that shows the logic app resource menu with "Workflows" selected, and then on the toolbar, "Add" is selected.](./media/create-single-tenant-workflows-azure-portal/logic-app-add-blank-workflow.png)
+
+1. After the **New workflow** pane opens, provide a name for your workflow, and choose the state type, either [**Stateful** or **Stateless**](single-tenant-overview-compare.md#stateful-stateless). When you're done, select **Create**.
+
+ This example adds a blank stateful workflow named `Fabrikam-Stateful-Workflow`. By default, the workflow is enabled but doesn't do anything until you add a trigger and actions.
+
+ ![Screenshot that shows the newly added blank stateful workflow "Fabrikam-Stateful-Workflow".](./media/create-single-tenant-workflows-azure-portal/logic-app-blank-workflow-created.png)
+
+1. Next, open the blank workflow in the designer so that you can add a trigger and an action.
+
+ 1. From the workflow list, select the blank workflow.
+
+ 1. On the workflow menu, under **Developer**, select **Designer**.
+
+ On the designer surface, the **Choose an operation** prompt already appears and is selected by default so that the **Add a trigger** pane also appears open.
+
+ ![Screenshot that shows the opened workflow designer with "Choose an operation" selected on the designer surface.](./media/create-single-tenant-workflows-azure-portal/opened-logic-app-designer-blank-workflow.png)
+
+<a name="add-trigger-actions"></a>
+
+## Add a trigger and an action
+
+This example builds a workflow that has these steps:
+
+* The built-in [Request trigger](../connectors/connectors-native-reqres.md), **When a HTTP request is received**, which receives inbound calls or requests and creates an endpoint that other services or logic apps can call.
+
+* The [Office 365 Outlook action](../connectors/connectors-create-api-office365-outlook.md), **Send an email**.
+
+### Add the Request trigger
+
+Before you can add a trigger to a blank workflow, make sure that the workflow designer is open and that the **Choose an operation** prompt is selected on the designer surface.
+
+1. Next to the designer surface, in the **Add a trigger** pane, under the **Choose an operation** search box, check that the **Built-in** tab is selected. This tab shows triggers that run natively in Azure Logic Apps.
+
+1. In the **Choose an operation** search box, enter `when a http request`, and select the built-in Request trigger that's named **When a HTTP request is received**.
+
+ ![Screenshot that shows the designer and **Add a trigger** pane with "When a HTTP request is received" trigger selected.](./media/create-single-tenant-workflows-azure-portal/find-request-trigger.png)
+
+ When the trigger appears on the designer, the trigger's details pane opens to show the trigger's properties, settings, and other actions.
+
+ ![Screenshot that shows the designer with the "When a HTTP request is received" trigger selected and trigger details pane open.](./media/create-single-tenant-workflows-azure-portal/request-trigger-added-to-designer.png)
+
+ > [!TIP]
+ > If the details pane doesn't appear, make sure that the trigger is selected on the designer.
+
+1. If you need to delete an item from the designer, [follow these steps for deleting items from the designer](#delete-from-designer).
+
+1. To save your work, on the designer toolbar, select **Save**.
+
+ When you save a workflow for the first time, and that workflow starts with a Request trigger, the Logic Apps service automatically generates a URL for an endpoint that's created by the Request trigger. Later, when you test your workflow, you send a request to this URL, which fires the trigger and starts the workflow run.
+
+### Add the Office 365 Outlook action
+
+1. On the designer, under the trigger that you added, select the plus sign (**+**) > **Add an action**.
+
+ The **Choose an operation** prompt appears on the designer, and the **Add an action** pane reopens so that you can select the next action.
+
+ > [!NOTE]
+ > If the **Add an action** pane shows the error message, `Cannot read property 'filter' of undefined`,
+ > save your workflow, reload the page, reopen your workflow, and try again.
+
+1. In the **Add an action** pane, under the **Choose an operation** search box, select **Azure**. This tab shows the managed connectors that are available and hosted in Azure.
+
+ > [!NOTE]
+ > If the **Add an action** pane shows the error message, `The access token expiry UTC time '{token-expiration-date-time}' is earlier than current UTC time '{current-date-time}'`,
+ > save your workflow, reload the page, reopen your workflow, and try adding the action again.
+
+ This example uses the Office 365 Outlook action named **Send an email (V2)**.
+
+ ![Screenshot that shows the designer and the **Add an action** pane with the Office 365 Outlook "Send an email" action selected.](./media/create-single-tenant-workflows-azure-portal/find-send-email-action.png)
+
+1. In the action's details pane, on the **Create Connection** tab, select **Sign in** so that you can create a connection to your email account.
+
+ ![Screenshot that shows the designer and the "Send an email (V2)" details pane with "Sign in" selected.](./media/create-single-tenant-workflows-azure-portal/send-email-action-sign-in.png)
+
+1. When you're prompted for access to your email account, sign in with your account credentials.
+
+ > [!NOTE]
+ > If you get the error message, `Failed with error: 'The browser is closed.'. Please sign in again`,
+ > check whether your browser blocks third-party cookies. If these cookies are blocked,
+ > try adding `https://portal.azure.com` to the list of sites that can use cookies.
+ > If you're using incognito mode, make sure that third-party cookies aren't blocked while working in that mode.
+ >
+ > If necessary, reload the page, open your workflow, add the email action again, and try creating the connection.
+
+ After Azure creates the connection, the **Send an email** action appears on the designer and is selected by default. If the action isn't selected, select the action so that its details pane is also open.
+
+1. In the action details pane, on the **Parameters** tab, provide the required information for the action, for example:
+
+ ![Screenshot that shows the designer and the "Send an email" details pane with the "Parameters" tab selected.](./media/create-single-tenant-workflows-azure-portal/send-email-action-details.png)
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **To** | Yes | <*your-email-address*> | The email recipient, which can be your email address for test purposes. This example uses the fictitious email, `sophiaowen@fabrikam.com`. |
+ | **Subject** | Yes | `An email from your example workflow` | The email subject |
+ | **Body** | Yes | `Hello from your example workflow!` | The email body content |
+ ||||
+
+ > [!NOTE]
+ > When making any changes in the details pane on the **Settings**, **Static Result**, or **Run After** tabs,
+ > make sure that you select **Done** to commit those changes before you switch tabs or change focus to the designer.
+ > Otherwise, the designer won't keep your changes.
+
+1. Save your work. On the designer toolbar, select **Save**.
+
+1. If your environment has strict network requirements or firewalls that limit traffic, you have to set up permissions for any trigger or action connections that exist in your workflow. To find the fully qualified domain names, review [Find domain names for firewall access](#firewall-setup).
+
+ Otherwise, to test your workflow, [manually trigger a run](#trigger-workflow).
+
+<a name="firewall-setup"></a>
+
+## Find domain names for firewall access
+
+Before you deploy your logic app and run your workflow in the Azure portal, if your environment has strict network requirements or firewalls that limit traffic, you have to set up network or firewall permissions for any trigger or action connections in the workflows that exist in your logic app.
+
+To find the fully qualified domain names (FQDNs) for these connections, follow these steps:
+
+1. On your logic app menu, under **Workflows**, select **Connections**. On the **API Connections** tab, select the connection's resource name, for example:
+
+ ![Screenshot that shows the Azure portal and logic app menu with the "Connections" and "office365" connection resource name selected.](./media/create-single-tenant-workflows-azure-portal/logic-app-connections.png)
+
+1. Expand your browser window wide enough so that **JSON View** appears in the browser's upper right corner, and then select **JSON View**.
+
+ ![Screenshot that shows the Azure portal and API Connection pane with "JSON View" selected.](./media/create-single-tenant-workflows-azure-portal/logic-app-connection-view-json.png)
+
+1. Find, copy, and save the `connectionRuntimeUrl` property value somewhere safe so that you can set up your firewall with this information.
+
+ ![Screenshot that shows the "connectionRuntimeUrl" property value selected.](./media/create-single-tenant-workflows-azure-portal/logic-app-connection-runtime-url.png)
+
+1. For each connection, repeat the relevant steps.
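If you have several connections, a short script can extract the FQDNs from the `connectionRuntimeUrl` values you saved. The following Python sketch is illustrative; the URLs are placeholders:

```python
from urllib.parse import urlparse

# Placeholders - paste the connectionRuntimeUrl value that you copied
# from the JSON view for each API connection.
connection_runtime_urls = [
    "https://<connection-runtime-host-1>/<path>",
    "https://<connection-runtime-host-2>/<path>",
]

# The hostname portion is the fully qualified domain name (FQDN)
# to allow through your firewall.
for url in connection_runtime_urls:
    print(urlparse(url).hostname)
```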
+
+<a name="trigger-workflow"></a>
+
+## Trigger the workflow
+
+In this example, the workflow runs when the Request trigger receives an inbound request, which is sent to the URL for the endpoint that's created by the trigger. When you saved the workflow for the first time, the Logic Apps service automatically generated this URL. So, before you can send this request to trigger the workflow, you need to find this URL.
+
+1. On the workflow designer, select the Request trigger that's named **When a HTTP request is received**.
+
+1. After the details pane opens, on the **Parameters** tab, find the **HTTP POST URL** property. To copy the generated URL, select the **Copy Url** (copy file icon), and save the URL somewhere else for now. The URL follows this format:
+
+ `https://<logic-app-name>.azurewebsites.net:443/api/<workflow-name>/triggers/manual/invoke?api-version=2020-05-01-preview&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<shared-access-signature>`
+
+ ![Screenshot that shows the designer with the Request trigger and endpoint URL in the "HTTP POST URL" property.](./media/create-single-tenant-workflows-azure-portal/find-request-trigger-url.png)
+
+ For this example, the URL looks like this:
+
+ `https://fabrikam-workflows.azurewebsites.net:443/api/Fabrikam-Stateful-Workflow/triggers/manual/invoke?api-version=2020-05-01-preview&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=xxxxxXXXXxxxxxXXXXxxxXXXXxxxxXXXX`
+
+ > [!TIP]
+ > You can also find the endpoint URL on your logic app's **Overview** pane in the **Workflow URL** property.
+ >
+ > 1. On the resource menu, select **Overview**.
+ > 1. On the **Overview** pane, find the **Workflow URL** property.
+ > 1. To copy the endpoint URL, move your pointer over the end of the endpoint URL text,
+ > and select **Copy to clipboard** (copy file icon).
+
+1. To test the URL by sending a request, open [Postman](https://www.postman.com/downloads/) or your preferred tool for creating and sending requests.
+
+ This example continues by using Postman. For more information, see [Postman Getting Started](https://learning.postman.com/docs/getting-started/introduction/).
+
+ 1. On the Postman toolbar, select **New**.
+
+ ![Screenshot that shows Postman with New button selected](./media/create-single-tenant-workflows-azure-portal/postman-create-request.png)
+
+ 1. On the **Create New** pane, under **Building Blocks**, select **Request**.
+
+ 1. In the **Save Request** window, under **Request name**, provide a name for the request, for example, `Test workflow trigger`.
+
+ 1. Under **Select a collection or folder to save to**, select **Create Collection**.
+
+ 1. Under **All Collections**, provide a name for the collection to create for organizing your requests, press Enter, and select **Save to <*collection-name*>**. This example uses `Logic Apps requests` as the collection name.
+
+ Postman's request pane opens so that you can send a request to the endpoint URL for the Request trigger.
+
+ ![Screenshot that shows Postman with the opened request pane](./media/create-single-tenant-workflows-azure-portal/postman-request-pane.png)
+
+ 1. On the request pane, in the address box that's next to the method list, which currently shows **GET** as the default request method, paste the URL that you previously copied, and select **Send**.
+
+ ![Screenshot that shows Postman and endpoint URL in the address box with Send button selected](./media/create-single-tenant-workflows-azure-portal/postman-test-endpoint-url.png)
+
+ When the trigger fires, the example workflow runs and sends an email that appears similar to this example:
+
+ ![Screenshot that shows Outlook email as described in the example](./media/create-single-tenant-workflows-azure-portal/workflow-app-result-email.png)
+
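If you prefer to trigger the workflow from a script instead of Postman, here's a minimal sketch that uses Python's `requests` package. The URL is a placeholder for the endpoint URL that you copied from the Request trigger:

```python
import requests

# Placeholder - paste the HTTP POST URL that you copied from the Request trigger.
callback_url = (
    "https://<logic-app-name>.azurewebsites.net:443/api/<workflow-name>"
    "/triggers/manual/invoke?api-version=2020-05-01-preview"
    "&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<shared-access-signature>"
)

# The example workflow doesn't require a request body, so an empty POST
# is enough to fire the trigger.
response = requests.post(callback_url)
print(response.status_code)
```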
+<a name="view-run-history"></a>
+
+## Review run history
+
+For a stateful workflow, after each workflow run, you can view the run history, including the status for the overall run, for the trigger, and for each action along with their inputs and outputs. In the Azure portal, run history and trigger histories appear at the workflow level, not the logic app level. To review the trigger histories outside the run history context, see [Review trigger histories](#view-trigger-histories).
+
+1. In the Azure portal, on the workflow menu, select **Overview**.
+
+1. On the **Overview** pane, select **Run History**, which shows the run history for that workflow.
+
+ ![Screenshot that shows the workflow's "Overview" pane with "Run History" selected.](./media/create-single-tenant-workflows-azure-portal/find-run-history.png)
+
+ > [!TIP]
+ > If the most recent run status doesn't appear, on the **Overview** pane toolbar, select **Refresh**.
+ > No run happens for a trigger that's skipped due to unmet criteria or finding no data.
+
+ | Run status | Description |
+ ||-|
+ | **Aborted** | The run stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. |
+ | **Cancelled** | The run was triggered and started but received a cancel request. |
+ | **Failed** | At least one action in the run failed. No subsequent actions in the workflow were set up to handle the failure. |
+ | **Running** | The run was triggered and is in progress, but this status can also appear for a run that is throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Tip**: If you set up [diagnostics logging](monitor-logic-apps-log-analytics.md), you can get information about any throttle events that happen. |
+ | **Succeeded** | The run succeeded. If any action failed, a subsequent action in the workflow handled that failure. |
+ | **Timed out** | The run timed out because the current duration exceeded the run duration limit, which is controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits). A run's duration is calculated by using the run's start time and run duration limit at that start time. <p><p>**Note**: If the run's duration also exceeds the current *run history retention limit*, which is also controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits), the run is cleared from the runs history by a daily cleanup job. Whether the run times out or completes, the retention period is always calculated by using the run's start time and *current* retention limit. So, if you reduce the duration limit for an in-flight run, the run times out. However, the run either stays or is cleared from the runs history based on whether the run's duration exceeded the retention limit. |
+ | **Waiting** | The run hasn't started or is paused, for example, due to an earlier workflow instance that's still running. |
+ |||
+
+1. To review the status for each step in a run, select the run that you want to review.
+
+ The run details view opens and shows the status for each step in the run.
+
+ ![Screenshot that shows the run details view with the status for each step in the workflow.](./media/create-single-tenant-workflows-azure-portal/review-run-details.png)
+
+ Here are the possible statuses that each step in the workflow can have:
+
+ | Action status | Description |
+ ||-|
+ | **Aborted** | The action stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. |
+ | **Cancelled** | The action was running but received a cancel request. |
+ | **Failed** | The action failed. |
+ | **Running** | The action is currently running. |
+ | **Skipped** | The action was skipped because its `runAfter` conditions weren't met, for example, a preceding action failed. Each action has a `runAfter` object where you can set up conditions that must be met before the current action can run. |
+ | **Succeeded** | The action succeeded. |
+ | **Succeeded with retries** | The action succeeded but only after a single or multiple retries. To review the retry history, in the run history details view, select that action so that you can view the inputs and outputs. |
+ | **Timed out** | The action stopped due to the timeout limit specified by that action's settings. |
+ | **Waiting** | Applies to a webhook action that's waiting for an inbound request from a caller. |
+ |||
+
+ [aborted-icon]: ./media/create-single-tenant-workflows-azure-portal/aborted.png
+ [cancelled-icon]: ./media/create-single-tenant-workflows-azure-portal/cancelled.png
+ [failed-icon]: ./media/create-single-tenant-workflows-azure-portal/failed.png
+ [running-icon]: ./media/create-single-tenant-workflows-azure-portal/running.png
+ [skipped-icon]: ./media/create-single-tenant-workflows-azure-portal/skipped.png
+ [succeeded-icon]: ./media/create-single-tenant-workflows-azure-portal/succeeded.png
+ [succeeded-with-retries-icon]: ./media/create-single-tenant-workflows-azure-portal/succeeded-with-retries.png
+ [timed-out-icon]: ./media/create-single-tenant-workflows-azure-portal/timed-out.png
+ [waiting-icon]: ./media/create-single-tenant-workflows-azure-portal/waiting.png
+
+1. To review the inputs and outputs for a specific step, select that step.
+
+ ![Screenshot that shows the inputs and outputs in the selected "Send an email" action.](./media/create-single-tenant-workflows-azure-portal/review-step-inputs-outputs.png)
+
+1. To further review the raw inputs and outputs for that step, select **Show raw inputs** or **Show raw outputs**.
+
+<a name="view-trigger-histories"></a>
+
+## Review trigger histories
+
+For a stateful workflow, you can review the trigger history for each run, including the trigger status along with inputs and outputs, separately from the [run history context](#view-run-history). In the Azure portal, trigger history and run history appear at the workflow level, not the logic app level. To find this historical data, follow these steps:
+
+1. In the Azure portal, on the workflow menu, select **Overview**.
+
+1. On the **Overview** page, select **Trigger Histories**.
+
+ The **Trigger Histories** pane shows the trigger histories for your workflow's runs.
+
+1. To review a specific trigger history, select the ID for that run.
+
+<a name="enable-open-application-insights"></a>
+
+## Enable or open Application Insights after deployment
+
+During a workflow run, your logic app emits telemetry along with other events. You can use this telemetry to get better visibility into how well your workflow runs and how the Logic Apps runtime works in various ways. You can monitor your workflow by using [Application Insights](../azure-monitor/app/app-insights-overview.md), which provides near real-time telemetry (live metrics). This capability can help you investigate failures and performance problems more easily when you use this data to diagnose issues, set up alerts, and build charts.
+
+If your logic app's creation and deployment settings support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app. You can do so either when you create your logic app in the Azure portal or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you create your logic app, or after deployment.
+
+To enable Application Insights on a deployed logic app or open the Application Insights dashboard if already enabled, follow these steps:
+
+1. In the Azure portal, find your deployed logic app.
+
+1. On the logic app menu, under **Settings**, select **Application Insights**.
+
+1. If Application Insights isn't enabled, on the **Application Insights** pane, select **Turn on Application Insights**. After the pane updates, at the bottom, select **Apply** > **Yes**.
+
+ If Application Insights is enabled, on the **Application Insights** pane, select **View Application Insights data**.
+
+After Application Insights opens, you can review various metrics for your logic app. For more information, review these topics:
+
+* [Azure Logic Apps Running Anywhere - Monitor with Application Insights - part 1](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-monitor-with-application/ba-p/1877849)
+* [Azure Logic Apps Running Anywhere - Monitor with Application Insights - part 2](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-monitor-with-application/ba-p/2003332)
+
+<a name="enable-run-history-stateless"></a>
+
+## Enable run history for stateless workflows
+
+To debug a stateless workflow more easily, you can enable the run history for that workflow, and then disable the run history when you're done. Follow these steps for the Azure portal, or if you're working in Visual Studio Code, see [Create stateful and stateless workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#enable-run-history-stateless).
+
+1. In the [Azure portal](https://portal.azure.com), find and open your **Logic App (Preview)** resource.
+
+1. On the logic app's menu, under **Settings**, select **Configuration**.
+
+1. On the **Application settings** tab, select **New application setting**.
+
+1. On the **Add/Edit application setting** pane, in the **Name** box, enter this operation option name:
+
+ `Workflows.{yourWorkflowName}.OperationOptions`
+
+1. In the **Value** box, enter the following value: `WithStatelessRunHistory`
+
+ For example:
+
+ ![Screenshot that shows the Azure portal and Logic App (Preview) resource with the "Configuration" > "New application setting" > "Add/Edit application setting" pane open and the "Workflows.{yourWorkflowName}.OperationOptions" option set to "WithStatelessRunHistory".](./media/create-single-tenant-workflows-azure-portal/stateless-operation-options-run-history.png)
+
+1. To finish this task, select **OK**. On the **Configuration** pane toolbar, select **Save**.
+
+1. To disable the run history when you're done, either set the `Workflows.{yourWorkflowName}.OperationOptions` property to `None`, or delete the property and its value.
+
+<a name="delete-from-designer"></a>
+
+## Delete items from the designer
+
+To delete an item in your workflow from the designer, follow any of these steps:
+
+* Select the item, open the item's shortcut menu (Shift+F10), and select **Delete**. To confirm, select **OK**.
+
+* Select the item, and press the delete key. To confirm, select **OK**.
+
+* Select the item so that the details pane opens for that item. In the pane's upper right corner, open the ellipses (**...**) menu, and select **Delete**. To confirm, select **OK**.
+
+ ![Screenshot that shows a selected item on designer with the opened details pane plus the selected ellipses button and "Delete" command.](./media/create-single-tenant-workflows-azure-portal/delete-item-from-designer.png)
+
+ > [!TIP]
+ > If the ellipses menu isn't visible, expand your browser window wide enough so that
+ > the details pane shows the ellipses (**...**) button in the upper right corner.
+
+<a name="restart-stop-start"></a>
+
+## Restart, stop, or start logic apps
+
+You can stop or start a [single logic app](#restart-stop-start-single-logic-app) or [multiple logic apps at the same time](#stop-start-multiple-logic-apps). You can also restart a single logic app without first stopping. Your single-tenant logic app can include multiple workflows, so you can either stop the entire logic app or [disable only workflows](#disable-enable-workflows).
+
+> [!NOTE]
+> The stop logic app and disable workflow operations have different effects. For more information, review
+> [Considerations for stopping logic apps](#considerations-stop-logic-apps) and [considerations for disabling workflows](#disable-enable-workflows).
+
+<a name="considerations-stop-logic-apps"></a>
+
+### Considerations for stopping logic apps
+
+Stopping a logic app affects workflow instances in the following ways:
+
+* The Logic Apps service cancels all in-progress and pending runs immediately.
+
+* The Logic Apps service doesn't create or run new workflow instances.
+
+* Triggers won't fire the next time that their conditions are met. However, trigger states remember the points where the logic app was stopped. So, if you restart the logic app, the triggers fire for all unprocessed items since the last run.
+
+ To stop each workflow from triggering on unprocessed items since the last run, clear the trigger state before you restart the logic app by following these steps:
+
+ 1. In the Azure portal, find and open your logic app.
+ 1. On the logic app menu, under **Workflows**, select **Workflows**.
+ 1. Open a workflow, and edit any part of that workflow's trigger.
+ 1. Save your changes. This step resets the trigger's current state.
+ 1. Repeat for each workflow.
+ 1. When you're done, [restart your logic app](#restart-stop-start-single-logic-app).
+
+<a name="restart-stop-start-single-logic-app"></a>
+
+### Restart, stop, or start a single logic app
+
+1. In the Azure portal, find and open your logic app.
+
+1. On the logic app menu, select **Overview**.
+
+ * To restart a logic app without stopping, on the Overview pane toolbar, select **Restart**.
+ * To stop a running logic app, on the Overview pane toolbar, select **Stop**. Confirm your selection.
+ * To start a stopped logic app, on the Overview pane toolbar, select **Start**.
+
+ > [!NOTE]
+ > If your logic app is already stopped, you only see the **Start** option.
+ > If your logic app is already running, you only see the **Stop** option.
+ > You can restart your logic app anytime.
+
+1. To confirm whether your operation succeeded or failed, on the main Azure toolbar, open the **Notifications** list (bell icon).
+
+<a name="stop-start-multiple-logic-apps"></a>
+
+### Stop or start multiple logic apps
+
+You can stop or start multiple logic apps at the same time, but you can't restart multiple logic apps without stopping them first.
+
+1. In the Azure portal's main search box, enter `logic apps`, and select **Logic apps**.
+
+1. On the **Logic apps** page, review the logic app's **Status** column.
+
+1. In the checkbox column, select the logic apps that you want to stop or start.
+
+ * To stop the selected running logic apps, on the **Logic apps** page toolbar, select **Disable/Stop**. Confirm your selection.
+ * To start the selected stopped logic apps, on the **Logic apps** page toolbar, select **Enable/Start**.
+
+1. To confirm whether your operation succeeded or failed, on the main Azure toolbar, open the **Notifications** list (bell icon).
+
+<a name="disable-enable-workflows"></a>
+
+## Disable or enable workflows
+
+To stop the trigger from firing the next time that the trigger condition is met, disable your workflow. You can disable or enable a single workflow, but you can't disable or enable multiple workflows at the same time. Disabling a workflow affects workflow instances in the following ways:
+
+* The Logic Apps service continues all in-progress and pending runs until they finish. Based on the volume or backlog, this process might take time to complete.
+
+* The Logic Apps service doesn't create or run new workflow instances.
+
+* The trigger won't fire the next time that its conditions are met. However, the trigger state remembers the point at which the workflow was disabled. So, if you re-enable the workflow, the trigger fires for all the unprocessed items since the last run.
+
+ To stop the trigger from firing on unprocessed items since the last run, clear the trigger's state before you reactivate the workflow:
+
+ 1. In the workflow, edit any part of the workflow's trigger.
+ 1. Save your changes. This step resets your trigger's current state.
+ 1. [Reactivate your workflow](#disable-enable-workflows).
+
+> [!NOTE]
+> The disable workflow and stop logic app operations have different effects. For more information, review
+> [Considerations for stopping logic apps](#considerations-stop-logic-apps).
+
+<a name="disable-workflow"></a>
+
+### Disable workflow
+
+1. On the logic app menu, under **Workflows**, select **Workflows**. In the checkbox column, select the workflow to disable.
+
+1. On the **Workflows** pane toolbar, select **Disable**.
+
+1. To confirm whether your operation succeeded or failed, on the main Azure toolbar, open the **Notifications** list (bell icon).
+
+<a name="enable-workflow"></a>
+
+### Enable workflow
+
+1. On the logic app menu, under **Workflows**, select **Workflows**. In the checkbox column, select the workflow to enable.
+
+1. On the **Workflows** pane toolbar, select **Enable**.
+
+1. To confirm whether your operation succeeded or failed, on the main Azure toolbar, open the **Notifications** list (bell icon).
+
+<a name="delete"></a>
+
+## Delete logic apps or workflows
+
+You can [delete one or more logic apps at the same time](#delete-logic-apps). Your single-tenant logic app can include multiple workflows, so you can either delete the entire logic app or [delete only specific workflows](#delete-workflows).
+
+<a name="delete-logic-apps"></a>
+
+### Delete logic apps
+
+Deleting a logic app cancels in-progress and pending runs immediately, but doesn't run cleanup tasks on the storage used by the app.
+
+1. In the Azure portal's main search box, enter `logic apps`, and select **Logic apps**.
+
+1. From the **Logic apps** list, in the checkbox column, select one or more logic apps to delete. On the toolbar, select **Delete**.
+
+1. When the confirmation box appears, enter `yes`, and select **Delete**.
+
+1. To confirm whether your operation succeeded or failed, on the main Azure toolbar, open the **Notifications** list (bell icon).
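+
+If you prefer the command line, you can also delete a logic app with the Azure CLI. This sketch assumes your Azure CLI version includes the `az logicapp` command group; the names are placeholders:
+
+```azurecli
+# Delete the logic app resource, which cancels in-progress and pending runs.
+az logicapp delete --name <logic-app-name> --resource-group <resource-group-name>
+```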
+
+<a name="delete-workflows"></a>
+
+### Delete workflows
+
+Deleting a workflow affects workflow instances in the following ways:
+
+* The Logic Apps service cancels in-progress and pending runs immediately, but runs cleanup tasks on the storage used by the workflow.
+
+* The Logic Apps service doesn't create or run new workflow instances.
+
+* If you delete a workflow and then recreate the same workflow, the recreated workflow won't have the same metadata as the deleted workflow. You have to resave any workflow that called the deleted workflow. That way, the caller gets the correct information for the recreated workflow. Otherwise, calls to the recreated workflow fail with an `Unauthorized` error. This behavior also applies to workflows that use artifacts in integration accounts and workflows that call Azure functions.
+
+1. In the Azure portal, find and open your logic app.
+
+1. On the logic app menu, under **Workflows**, select **Workflows**. In the checkbox column, select one or more workflows to delete.
+
+1. On the toolbar, select **Delete**.
+
+1. To confirm whether your operation succeeded or failed, on the main Azure toolbar, open the **Notifications** list (bell icon).
+
+<a name="troubleshoot"></a>
+
+## Troubleshoot problems and errors
+
+<a name="missing-triggers-actions"></a>
+
+### New triggers and actions are missing from the designer picker for previously created workflows
+
+Azure Logic Apps Preview supports built-in actions for Azure Function Operations, Liquid Operations, and XML Operations, such as **XML Validation** and **Transform XML**. However, for previously created logic apps, these actions might not appear in the designer for you to select if your logic app uses an outdated version of the extension bundle, `Microsoft.Azure.Functions.ExtensionBundle.Workflows`.
+
+To fix this problem, follow these steps to delete the outdated version so that the extension bundle can automatically update to the latest version.
+
+> [!NOTE]
+> This specific solution applies only to **Logic App (Preview)** resources that you create using
+> the Azure portal, not the logic apps that you create and deploy using Visual Studio Code and the
+> Azure Logic Apps (Preview) extension. See [Supported triggers and actions are missing from the designer in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#missing-triggers-actions).
+
+1. In the Azure portal, stop your logic app.
+
+ 1. On your logic app menu, select **Overview**.
+
+ 1. On the **Overview** pane's toolbar, select **Stop**.
+
+1. On your logic app menu, under **Development Tools**, select **Advanced Tools**.
+
+1. On the **Advanced Tools** pane, select **Go**, which opens the Kudu environment for your logic app.
+
+1. On the Kudu toolbar, open the **Debug console** menu, and select **CMD**.
+
+ A console window opens so that you can browse to the bundle folder using the command prompt. Or, you can browse the directory structure that appears in the console window.
+
+1. Browse to the following folder, which contains versioned folders for the existing bundle:
+
+ `...\home\data\Functions\ExtensionBundles\Microsoft.Azure.Functions.ExtensionBundle.Workflows`
+
+1. Delete the version folder for the existing bundle. In the console window, you can run this command where you replace `{bundle-version}` with the existing version:
+
+ `rm -rf {bundle-version}`
+
+ For example: `rm -rf 1.1.3`
+
+ > [!TIP]
+ > If you get an error such as "permission denied" or "file in use", refresh the page in your browser,
+ > and try the previous steps again until the folder is deleted.
+
+1. In the Azure portal, return to your logic app's **Overview** page, and select **Restart**.
+
+ The portal automatically gets and uses the latest bundle.
+
+## Next steps
+
+We'd like to hear from you about your experiences with this scenario!
+
+* For bugs or problems, [create your issues in GitHub](https://github.com/Azure/logicapps/issues).
+* For questions, requests, comments, and other feedback, [use this feedback form](https://aka.ms/lafeedback).
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
+
+ Title: Create Logic Apps Preview workflows in Visual Studio Code
+description: Build and run workflows for automation and integration scenarios in Visual Studio Code with the Azure Logic Apps (Preview) extension.
+
+ms.suite: integration
+ Last updated : 04/23/2021
+# Create stateful and stateless workflows in Visual Studio Code with the Azure Logic Apps (Preview) extension
+
+> [!IMPORTANT]
+> This capability is in public preview, is provided without a service level agreement, and is not recommended for production workloads.
+> Certain features might not be supported or might have constrained capabilities. For more information, see
+> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+With [Azure Logic Apps Preview](single-tenant-overview-compare.md), you can build automation and integration solutions across apps, data, cloud services, and systems by creating and running logic apps that include [*stateful* and *stateless* workflows](single-tenant-overview-compare.md#stateful-stateless) in Visual Studio Code by using the Azure Logic Apps (Preview) extension. By using this new logic app type, you can build multiple workflows that are powered by the redesigned Azure Logic Apps Preview runtime, which provides portability, better performance, and flexibility for deploying and running in various hosting environments, not only Azure, but also Docker containers. To learn more about the new logic app type, see [Overview for Azure Logic Apps Preview](single-tenant-overview-compare.md).
+
+![Screenshot that shows Visual Studio Code, logic app project, and workflow.](./media/create-single-tenant-workflows-visual-studio-code/visual-studio-code-logic-apps-overview.png)
+
+In Visual Studio Code, you can start by creating a project where you can *locally* build and run your logic app's workflows in your development environment by using the Azure Logic Apps (Preview) extension. While you can also start by [creating a new **Logic App (Preview)** resource in the Azure portal](create-single-tenant-workflows-azure-portal.md), both approaches provide the capability for you to deploy and run your logic app in the same kinds of hosting environments.
+
+Meanwhile, you can still create the original logic app type. Although the development experiences in Visual Studio Code differ between the original and new logic app types, your Azure subscription can include both types. You can view and access all the deployed logic apps in your Azure subscription, but the apps are organized into their own categories and sections.
+
+This article shows how to create your logic app and a workflow in Visual Studio Code by using the Azure Logic Apps (Preview) extension and performing these high-level tasks:
+
+* Create a project for your logic app and workflow.
+
+* Add a trigger and an action.
+
+* Run, test, debug, and review run history locally.
+
+* Find domain name details for firewall access.
+
+* Deploy to Azure, which includes optionally enabling Application Insights.
+
+* Manage your deployed logic app in Visual Studio Code and the Azure portal.
+
+* Enable run history for stateless workflows.
+
+* Enable or open Application Insights after deployment.
+
+* Deploy to a Docker container that you can run anywhere.
+
+> [!NOTE]
+> For information about current known issues, review the [Logic Apps Public Preview Known Issues page in GitHub](https://github.com/Azure/logicapps/blob/master/articles/logic-apps-public-preview-known-issues.md).
+
+## Prerequisites
+
+### Access and connectivity
+
+* Access to the internet so that you can download the requirements, connect from Visual Studio Code to your Azure account, and publish from Visual Studio Code to Azure, a Docker container, or other environment.
+
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* To build the same example logic app in this article, you need an Office 365 Outlook email account that uses a Microsoft work or school account to sign in.
+
+ If you choose to use a different [email connector that's supported by Azure Logic Apps](/connectors/), such as Outlook.com or [Gmail](../connectors/connectors-google-data-security-privacy-policy.md), you can still follow the example, and the general overall steps are the same, but your user interface and options might differ in some ways. For example, if you use the Outlook.com connector, use your personal Microsoft account instead to sign in.
+
+<a name="storage-requirements"></a>
+
+### Storage requirements
+
+#### Windows
+
+To locally build and run your logic app project in Visual Studio Code when using Windows, follow these steps to set up the Azure Storage Emulator:
+
+1. Download and install [Azure Storage Emulator 5.10](https://go.microsoft.com/fwlink/p/?linkid=717179).
+
+1. If you don't have one already, you need a local SQL database installation, such as the free [SQL Server 2019 Express Edition](https://go.microsoft.com/fwlink/p/?linkid=866658), so that the emulator can run.
+
+ For more information, see [Use the Azure Storage emulator for development and testing](../storage/common/storage-use-emulator.md).
+
+1. Before you can run your project, make sure that you start the emulator.
+
+ ![Screenshot that shows the Azure Storage Emulator running.](./media/create-single-tenant-workflows-visual-studio-code/start-storage-emulator.png)
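+
+You can also start the emulator from a command prompt rather than the Start menu. This sketch assumes the emulator's default installation path:
+
+```cmd
+rem Start the emulator, then confirm that it's running.
+"C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator\AzureStorageEmulator.exe" start
+"C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator\AzureStorageEmulator.exe" status
+```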
+
+#### macOS and Linux
+
+To locally build and run your logic app project in Visual Studio Code when using macOS or Linux, follow these steps to create and set up an Azure Storage account.
+
+> [!NOTE]
+> Currently, the designer in Visual Studio Code doesn't work on Linux OS, but you can still build, run, and deploy
+> logic apps that use the Logic Apps Preview runtime to Linux-based virtual machines. For now, you can build your logic
+> apps in Visual Studio Code on Windows or macOS and then deploy to a Linux-based virtual machine.
+
+1. Sign in to the [Azure portal](https://portal.azure.com), and [create an Azure Storage account](../storage/common/storage-account-create.md?tabs=azure-portal), which is a [prerequisite for Azure Functions](../azure-functions/storage-considerations.md).
+
+1. On the storage account menu, under **Settings**, select **Access keys**.
+
+1. On the **Access keys** pane, find and copy the storage account's connection string, which looks similar to this example:
+
+ `DefaultEndpointsProtocol=https;AccountName=fabrikamstorageacct;AccountKey=<access-key>;EndpointSuffix=core.windows.net`
+
+ ![Screenshot that shows the Azure portal with storage account access keys and connection string copied.](./media/create-single-tenant-workflows-visual-studio-code/find-storage-account-connection-string.png)
+
+ For more information, review [Manage storage account keys](../storage/common/storage-account-keys-manage.md?tabs=azure-portal#view-account-access-keys).
+
+1. Save the connection string somewhere safe. After you create your logic app project in Visual Studio Code, you have to add the string to the **local.settings.json** file in your project's root level folder.
+
+ > [!IMPORTANT]
+ > If you plan to deploy to a Docker container, you also need to use this connection string with the Docker file that you use for deployment.
+ > For production scenarios, make sure that you protect and secure such secrets and sensitive information, for example, by using a key vault.
+
+### Tools
+
+* [Visual Studio Code 1.30.1 (January 2019) or higher](https://code.visualstudio.com/), which is free. Also, download and install these tools for Visual Studio Code, if you don't have them already:
+
+ * [Azure Account extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account), which provides a single common Azure sign-in and subscription filtering experience for all other Azure extensions in Visual Studio Code.
+
+ * [C# for Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.csharp), which enables F5 functionality to run your logic app.
+
+ * [Azure Functions Core Tools 3.0.3245 or later](https://github.com/Azure/azure-functions-core-tools/releases/tag/3.0.3245) by using the Microsoft Installer (MSI) version, which is `func-cli-3.0.3245-x*.msi`.
+
+ These tools include a version of the same runtime that powers the Azure Functions runtime, which the Preview extension uses in Visual Studio Code.
+
+ > [!IMPORTANT]
+ > If you have an installation that's earlier than these versions, uninstall that version first,
+ > or make sure that the PATH environment variable points at the version that you download and install.
+
+ * [Azure Logic Apps (Preview) extension for Visual Studio Code](https://go.microsoft.com/fwlink/p/?linkid=2143167). This extension provides the capability for you to create logic apps where you can build stateful and stateless workflows that locally run in Visual Studio Code and then deploy those logic apps directly to Azure or to Docker containers.
+
+ Currently, you can have both the original Azure Logic Apps extension and the Public Preview extension installed in Visual Studio Code. Although the development experiences differ in some ways between the extensions, your Azure subscription can include both logic app types that you create with the extensions. Visual Studio Code shows all the deployed logic apps in your Azure subscription, but organizes them into different sections by extension names, **Logic Apps** and **Azure Logic Apps (Preview)**.
+
+ > [!IMPORTANT]
+ > If you created logic app projects with the earlier private preview extension, these projects won't work with the Public
+ > Preview extension. However, you can migrate these projects after you uninstall the private preview extension, delete the
+ > associated files, and install the public preview extension. You then create a new project in Visual Studio Code, and copy
+ > your previously created logic app's **workflow.definition** file into your new project. For more information, see
+ > [Migrate from the private preview extension](#migrate-private-preview).
+ >
+ > If you created logic app projects with the earlier public preview extension, you can continue using those projects
+ > without any migration steps.
+
+ To install the **Azure Logic Apps (Preview)** extension, follow these steps:
+
+ 1. In Visual Studio Code, on the left toolbar, select **Extensions**.
+
+ 1. In the extensions search box, enter `azure logic apps preview`. From the results list, select **Azure Logic Apps (Preview)** **>** **Install**.
+
+ After the installation completes, the Preview extension appears in the **Extensions: Installed** list.
+
+ ![Screenshot that shows Visual Studio Code's installed extensions list with the "Azure Logic Apps (Preview)" extension underlined.](./media/create-single-tenant-workflows-visual-studio-code/azure-logic-apps-extension-installed.png)
+
+ > [!TIP]
+ > If the extension doesn't appear in the installed list, try restarting Visual Studio Code.
+
+* To use the [Inline Code Operations action](../logic-apps/logic-apps-add-run-inline-code.md) that runs JavaScript, install [Node.js versions 10.x.x, 11.x.x, or 12.x.x](https://nodejs.org/en/download/releases/).
+
+ > [!TIP]
+ > For Windows, download the MSI version. If you use the ZIP version instead, you have to
+ > manually make Node.js available by using a PATH environment variable for your operating system.
+
+* To locally run webhook-based triggers and actions, such as the [built-in HTTP Webhook trigger](../connectors/connectors-native-webhook.md), in Visual Studio Code, you need to [set up forwarding for the callback URL](#webhook-setup).
+
+* To test the example logic app that you create in this article, you need a tool that can send calls to the Request trigger, which is the first step in the example logic app. If you don't have such a tool, you can download, install, and use [Postman](https://www.postman.com/downloads/).
+
+* If you create your logic app and deploy with settings that support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app. You can do so either when you deploy your logic app from Visual Studio Code or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you deploy your logic app, or after deployment.
+
+<a name="migrate-private-preview"></a>
+
+## Migrate from private preview extension
+
+Any logic app projects that you created with the **Azure Logic Apps (Private Preview)** extension won't work with the Public Preview extension. However, you can migrate these projects to new projects by following these steps:
+
+1. Uninstall the private preview extension.
+
+1. Delete any associated extension bundle and NuGet package folders in these locations:
+
+ * The **Microsoft.Azure.Functions.ExtensionBundle.Workflows** folder, which contains previous extension bundles and is located along either path here:
+
+ * `C:\Users\{userName}\AppData\Local\Temp\Functions\ExtensionBundles`
+
+ * `C:\Users\{userName}\.azure-functions-core-tools\Functions\ExtensionBundles`
+
+ * The **microsoft.azure.workflows.webjobs.extension** folder, which is the [NuGet](/nuget/what-is-nuget) cache for the private preview extension and is located along this path:
+
+ `C:\Users\{userName}\.nuget\packages`
+
+1. Install the **Azure Logic Apps (Preview)** extension.
+
+1. Create a new project in Visual Studio Code.
+
+1. Copy your previously created logic app's **workflow.definition** file to your new project.
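+
+If you prefer to remove the folders listed in step 2 at a command prompt rather than in File Explorer, this sketch shows the equivalent commands, assuming the default paths shown earlier. Replace `{userName}` with your Windows user name:
+
+```cmd
+rem Remove the previous extension bundle folders and the private preview NuGet cache.
+rd /s /q "C:\Users\{userName}\AppData\Local\Temp\Functions\ExtensionBundles\Microsoft.Azure.Functions.ExtensionBundle.Workflows"
+rd /s /q "C:\Users\{userName}\.azure-functions-core-tools\Functions\ExtensionBundles\Microsoft.Azure.Functions.ExtensionBundle.Workflows"
+rd /s /q "C:\Users\{userName}\.nuget\packages\microsoft.azure.workflows.webjobs.extension"
+```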
+
+<a name="set-up"></a>
+
+## Set up Visual Studio Code
+
+1. To make sure that all the extensions are correctly installed, reload or restart Visual Studio Code.
+
+1. Confirm that Visual Studio Code automatically finds and installs extension updates so that your Preview extension gets the latest updates. Otherwise, you have to manually uninstall the outdated version and install the latest version.
+
+ 1. On the **File** menu, go to **Preferences** **>** **Settings**.
+
+ 1. On the **User** tab, go to **Features** **>** **Extensions**.
+
+ 1. Confirm that **Auto Check Updates** and **Auto Update** are selected.
+
+Also, by default, the following settings are enabled and set for the Logic Apps preview extension:
+
+* **Azure Logic Apps V2: Project Runtime**, which is set to version **~3**
+
+ > [!NOTE]
+ > This version is required to use the [Inline Code Operations actions](../logic-apps/logic-apps-add-run-inline-code.md).
+
+* **Azure Logic Apps V2: Experimental View Manager**, which enables the latest designer in Visual Studio Code. If you experience problems on the designer, such as dragging and dropping items, turn off this setting.
+
+To find and confirm these settings, follow these steps:
+
+1. On the **File** menu, go to **Preferences** **>** **Settings**.
+
+1. On the **User** tab, go to **Extensions** **>** **Azure Logic Apps (Preview)**.
+
+ For example, you can find the **Azure Logic Apps V2: Project Runtime** setting here or use the search box to find other settings:
+
+ ![Screenshot that shows Visual Studio Code settings for "Azure Logic Apps (Preview)" extension.](./media/create-single-tenant-workflows-visual-studio-code/azure-logic-apps-preview-settings.png)
+
+<a name="connect-azure-account"></a>
+
+## Connect to your Azure account
+
+1. On the Visual Studio Code Activity Bar, select the Azure icon.
+
+ ![Screenshot that shows Visual Studio Code Activity Bar and selected Azure icon.](./media/create-single-tenant-workflows-visual-studio-code/visual-studio-code-azure-icon.png)
+
+1. In the Azure pane, under **Azure: Logic Apps (Preview)**, select **Sign in to Azure**. When the Visual Studio Code authentication page appears, sign in with your Azure account.
+
+ ![Screenshot that shows Azure pane and selected link for Azure sign in.](./media/create-single-tenant-workflows-visual-studio-code/sign-in-azure-subscription.png)
+
+ After you sign in, the Azure pane shows the subscriptions in your Azure account. If you also have the publicly released extension, you can find any logic apps that you created with that extension in the **Logic Apps** section, not the **Logic Apps (Preview)** section.
+
+ If the expected subscriptions don't appear, or you want the pane to show only specific subscriptions, follow these steps:
+
+ 1. In the subscriptions list, move your pointer next to the first subscription until the **Select Subscriptions** button (filter icon) appears. Select the filter icon.
+
+ ![Screenshot that shows Azure pane and selected filter icon.](./media/create-single-tenant-workflows-visual-studio-code/filter-subscription-list.png)
+
+ Or, in the Visual Studio Code status bar, select your Azure account.
+
+ 1. When another subscriptions list appears, select the subscriptions that you want, and then make sure that you select **OK**.
+
+<a name="create-project"></a>
+
+## Create a local project
+
+Before you can create your logic app, create a local project so that you can manage, run, and deploy your logic app from Visual Studio Code. The underlying project is similar to an Azure Functions project, also known as a function app project. However, these project types are separate from each other, so logic apps and function apps can't exist in the same project.
+
+1. On your computer, create an *empty* local folder to use for the project that you'll later create in Visual Studio Code.
+
+1. In Visual Studio Code, close any and all open folders.
+
+1. In the Azure pane, next to **Azure: Logic Apps (Preview)**, select **Create New Project** (icon that shows a folder and lightning bolt).
+
+ ![Screenshot that shows Azure pane toolbar with "Create New Project" selected.](./media/create-single-tenant-workflows-visual-studio-code/create-new-project-folder.png)
+
+1. If Windows Defender Firewall prompts you to grant network access for `Code.exe`, which is Visual Studio Code, and for `func.exe`, which is the Azure Functions Core Tools, select **Private networks, such as my home or work network** **>** **Allow access**.
+
+1. Browse to the location where you created your project folder, select that folder and continue.
+
+ ![Screenshot that shows "Select Folder" dialog box with a newly created project folder and the "Select" button selected.](./media/create-single-tenant-workflows-visual-studio-code/select-project-folder.png)
+
+1. From the templates list that appears, select either **Stateful Workflow** or **Stateless Workflow**. This example selects **Stateful Workflow**.
+
+ ![Screenshot that shows the workflow templates list with "Stateful Workflow" selected.](./media/create-single-tenant-workflows-visual-studio-code/select-stateful-stateless-workflow.png)
+
+1. Provide a name for your workflow and press Enter. This example uses `Fabrikam-Stateful-Workflow` as the name.
+
+ ![Screenshot that shows the "Create new Stateful Workflow (3/4)" box and "Fabrikam-Stateful-Workflow" as the workflow name.](./media/create-single-tenant-workflows-visual-studio-code/name-your-workflow.png)
+
+ Visual Studio Code finishes creating your project, and opens the **workflow.json** file for your workflow in the code editor.
+
+ > [!NOTE]
+ > If you're prompted to select how to open your project, select **Open in current window**
+ > if you want to open your project in the current Visual Studio Code window. To open a new
+ > instance for Visual Studio Code, select **Open in new window**.
+
+1. On the Visual Studio Code Activity Bar, open the Explorer pane, if not already open.
+
+ The Explorer pane shows your project, which now includes automatically generated project files. For example, the project has a folder that shows your workflow's name. Inside this folder, the **workflow.json** file contains your workflow's underlying JSON definition.
+
+ ![Screenshot that shows the Explorer pane with project folder, workflow folder, and "workflow.json" file.](./media/create-single-tenant-workflows-visual-studio-code/local-project-created.png)
+
+1. If you're using macOS or Linux, set up access to your storage account by following these steps, which are required for locally running your project:
+
+ 1. In your project's root folder, open the **local.settings.json** file.
+
+ ![Screenshot that shows Explorer pane and 'local.settings.json' file in your project.](./media/create-single-tenant-workflows-visual-studio-code/local-settings-json-files.png)
+
+ 1. Replace the `AzureWebJobsStorage` property value with the storage account's connection string that you saved earlier, for example:
+
+ Before:
+
+ ```json
+ {
+ "IsEncrypted": false,
+ "Values": {
+ "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+ "FUNCTIONS_WORKER_RUNTIME": "dotnet"
+ }
+ }
+ ```
+
+ After:
+
+ ```json
+ {
+ "IsEncrypted": false,
+ "Values": {
+ "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=fabrikamstorageacct;AccountKey=<access-key>;EndpointSuffix=core.windows.net",
+ "FUNCTIONS_WORKER_RUNTIME": "dotnet"
+ }
+ }
+ ```
+
+ > [!IMPORTANT]
+ > For production scenarios, make sure that you protect and secure such secrets and sensitive information, for example, by using a key vault.
+
+ 1. When you're done, make sure that you save your changes.
+
+<a name="enable-built-in-connector-authoring"></a>
+
+## Enable built-in connector authoring
+
+You can create your own built-in connectors for any service you need by using the [preview release's extensibility framework](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). Similar to built-in connectors such as Azure Service Bus and SQL Server, these connectors provide higher throughput, low latency, and local connectivity, and they run natively in the same process as the preview runtime.
+
+The authoring capability is currently available only in Visual Studio Code, but isn't enabled by default. To create these connectors, you need to first convert your project from extension bundle-based (Node.js) to NuGet package-based (.NET).
+
+> [!IMPORTANT]
+> This action is a one-way operation that you can't undo.
+
+1. In the Explorer pane, at your project's root, move your mouse pointer over any blank area below all the other files and folders, open the shortcut menu, and select **Convert to Nuget-based Logic App project**.
+
+ ![Screenshot that shows the Explorer pane with the project's shortcut menu opened from a blank area in the project window.](./media/create-single-tenant-workflows-visual-studio-code/convert-logic-app-project.png)
+
+1. When the prompt appears, confirm the project conversion.
+
+1. To continue, review and follow the steps in the article, [Azure Logic Apps Running Anywhere - Built-in connector extensibility](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272).
+
+<a name="open-workflow-definition-designer"></a>
+
+## Open the workflow definition file in the designer
+
+1. Check the versions that are installed on your computer by running this command:
+
+ `..\Users\{yourUserName}\dotnet --list-sdks`
+
+ If you have .NET Core SDK 5.x, this version might prevent you from opening the logic app's underlying workflow definition in the designer. Rather than uninstall this version, at your project's root folder, create a **global.json** file that references a .NET Core runtime 3.x version that's later than 3.1.201 and that you have installed, for example:
+
+ ```json
+ {
+ "sdk": {
+ "version": "3.1.8",
+ "rollForward": "disable"
+ }
+ }
+ ```
+
+ > [!IMPORTANT]
+ > Make sure that you explicitly add the **global.json** file in your project's
+ > root folder from inside Visual Studio Code. Otherwise, the designer won't open.
+
+1. Expand the project folder for your workflow. Open the **workflow.json** file's shortcut menu, and select **Open in Designer**.
+
+ ![Screenshot that shows Explorer pane and shortcut window for the workflow.json file with "Open in Designer" selected.](./media/create-single-tenant-workflows-visual-studio-code/open-definition-file-in-designer.png)
+
+1. From the **Enable connectors in Azure** list, select **Use connectors from Azure**, which applies to all managed connectors that are available and deployed in Azure, not just connectors for Azure services.
+
+ ![Screenshot that shows Explorer pane with "Enable connectors in Azure" list open and "Use connectors from Azure" selected.](./media/create-single-tenant-workflows-visual-studio-code/use-connectors-from-azure.png)
+
+ > [!NOTE]
+ > Stateless workflows currently support only *actions* for [managed connectors](../connectors/managed.md),
+ > which are deployed in Azure, and not triggers. Although you have the option to enable connectors in Azure for your stateless workflow,
+ > the designer doesn't show any managed connector triggers for you to select.
+
+1. From the **Select subscription** list, select the Azure subscription to use for your logic app project.
+
+ ![Screenshot that shows Explorer pane with the "Select subscription" box and your subscription selected.](./media/create-single-tenant-workflows-visual-studio-code/select-azure-subscription.png)
+
+1. From the resource groups list, select **Create new resource group**.
+
+ ![Screenshot that shows Explorer pane with resource groups list and "Create new resource group" selected.](./media/create-single-tenant-workflows-visual-studio-code/create-select-resource-group.png)
+
+1. Provide a name for the resource group, and press Enter. This example uses `Fabrikam-Workflows-RG`.
+
+ ![Screenshot that shows Explorer pane and resource group name box.](./media/create-single-tenant-workflows-visual-studio-code/enter-name-for-resource-group.png)
+
+1. From the locations list, find and select the Azure region to use when creating your resource group and resources. This example uses **West Central US**.
+
+ ![Screenshot that shows Explorer pane with locations list and "West Central US" selected.](./media/create-single-tenant-workflows-visual-studio-code/select-azure-region.png)
+
+ After you perform this step, Visual Studio Code opens the workflow designer.
+
+ > [!NOTE]
+ > When Visual Studio Code starts the workflow design-time API, you might get a message
+ > that startup might take a few seconds. You can ignore this message or select **OK**.
+ >
+ > If the designer won't open, review the troubleshooting section, [Designer fails to open](#designer-fails-to-open).
+
+ After the designer appears, the **Choose an operation** prompt appears on the designer and is selected by default, which shows the **Add an action** pane.
+
+ ![Screenshot that shows the workflow designer.](./media/create-single-tenant-workflows-visual-studio-code/workflow-app-designer.png)
+
+1. Next, [add a trigger and actions](#add-trigger-actions) to your workflow.
+
+<a name="add-trigger-actions"></a>
+
+## Add a trigger and actions
+
+After you open the designer, the **Choose an operation** prompt appears on the designer and is selected by default. You can now start creating your workflow by adding a trigger and actions.
+
+The workflow in this example uses this trigger and these actions:
+
+* The built-in [Request trigger](../connectors/connectors-native-reqres.md), **When a HTTP request is received**, which receives inbound calls or requests and creates an endpoint that other services or logic apps can call.
+
+* The [Office 365 Outlook action](../connectors/connectors-create-api-office365-outlook.md), **Send an email**.
+
+* The built-in [Response action](../connectors/connectors-native-reqres.md), which you use to send a reply and return data back to the caller.
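+
+For reference, here's a simplified sketch of how the Request trigger and the Response action might appear in your **workflow.json** file after you finish this section. The sketch is illustrative only; the designer generates the actual definition, and the Office 365 Outlook action is omitted because the designer creates its connection details when you sign in:
+
+```json
+{
+   "definition": {
+      "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
+      "contentVersion": "1.0.0.0",
+      "triggers": {
+         "manual": {
+            "type": "Request",
+            "kind": "Http",
+            "inputs": {}
+         }
+      },
+      "actions": {
+         "Response": {
+            "type": "Response",
+            "kind": "Http",
+            "inputs": {
+               "statusCode": 200
+            },
+            "runAfter": {}
+         }
+      },
+      "outputs": {}
+   },
+   "kind": "Stateful"
+}
+```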
+
+### Add the Request trigger
+
+1. Next to the designer, in the **Add a trigger** pane, under the **Choose an operation** search box, make sure that **Built-in** is selected so that you can select a trigger that runs natively.
+
+1. In the **Choose an operation** search box, enter `when a http request`, and select the built-in Request trigger that's named **When a HTTP request is received**.
+
+ ![Screenshot that shows the workflow designer and **Add a trigger** pane with "When a HTTP request is received" trigger selected.](./media/create-single-tenant-workflows-visual-studio-code/add-request-trigger.png)
+
+ When the trigger appears on the designer, the trigger's details pane opens to show the trigger's properties, settings, and other actions.
+
+ ![Screenshot that shows the workflow designer with the "When a HTTP request is received" trigger selected and trigger details pane open.](./media/create-single-tenant-workflows-visual-studio-code/request-trigger-added-to-designer.png)
+
+ > [!TIP]
+ > If the details pane doesn't appear, make sure that the trigger is selected on the designer.
+
+1. If you need to delete an item from the designer, [follow these steps for deleting items from the designer](#delete-from-designer).
+
+### Add the Office 365 Outlook action
+
+1. On the designer, under the trigger that you added, select **New step**.
+
+ The **Choose an operation** prompt appears on the designer, and the **Add an action** pane reopens so that you can select the next action.
+
+1. On the **Add an action** pane, under the **Choose an operation** search box, select **Azure** so that you can find and select an action for a managed connector that's deployed in Azure.
+
+ This example selects and uses the Office 365 Outlook action, **Send an email (V2)**.
+
+ ![Screenshot that shows the workflow designer and **Add an action** pane with Office 365 Outlook "Send an email" action selected.](./media/create-single-tenant-workflows-visual-studio-code/add-send-email-action.png)
+
+1. In the action's details pane, select **Sign in** so that you can create a connection to your email account.
+
+ ![Screenshot that shows the workflow designer and **Send an email (V2)** pane with "Sign in" selected.](./media/create-single-tenant-workflows-visual-studio-code/send-email-action-sign-in.png)
+
+1. When Visual Studio Code prompts you for consent to access your email account, select **Open**.
+
+ ![Screenshot that shows the Visual Studio Code prompt to permit access.](./media/create-single-tenant-workflows-visual-studio-code/visual-studio-code-open-external-website.png)
+
+ > [!TIP]
+ > To prevent future prompts, select **Configure Trusted Domains**
+ > so that you can add the authentication page as a trusted domain.
+
+1. Follow the subsequent prompts to sign in, allow access, and allow returning to Visual Studio Code.
+
+ > [!NOTE]
+ > If too much time passes before you complete the prompts, the authentication process times out and fails.
+ > In this case, return to the designer and retry signing in to create the connection.
+
+1. When the Azure Logic Apps (Preview) extension prompts you for consent to access your email account, select **Open**. Follow the subsequent prompt to allow access.
+
+ ![Screenshot that shows the Preview extension prompt to permit access.](./media/create-single-tenant-workflows-visual-studio-code/allow-preview-extension-open-uri.png)
+
+ > [!TIP]
+ > To prevent future prompts, select **Don't ask again for this extension**.
+
+ After Visual Studio Code creates your connection, some connectors show the message that `The connection will be valid for {n} days only`. This time limit applies only to the duration while you author your logic app in Visual Studio Code. After deployment, this limit no longer applies because your logic app can authenticate at runtime by using its automatically enabled [system-assigned managed identity](../logic-apps/create-managed-service-identity.md). This managed identity differs from the authentication credentials or connection string that you use when you create a connection. If you disable this system-assigned managed identity, connections won't work at runtime.
+
+1. On the designer, if the **Send an email** action doesn't appear selected, select that action.
+
+1. On the action's details pane, on the **Parameters** tab, provide the required information for the action, for example:
+
+ ![Screenshot that shows the workflow designer with details for Office 365 Outlook "Send an email" action.](./media/create-single-tenant-workflows-visual-studio-code/send-email-action-details.png)
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **To** | Yes | <*your-email-address*> | The email recipient, which can be your email address for test purposes. This example uses the fictitious email, `sophiaowen@fabrikam.com`. |
+ | **Subject** | Yes | `An email from your example workflow` | The email subject |
+ | **Body** | Yes | `Hello from your example workflow!` | The email body content |
+ ||||
+
+ > [!NOTE]
+ > If you want to make any changes in the details pane on the **Settings**, **Static Result**, or **Run After** tab,
+ > make sure that you select **Done** to commit those changes before you switch tabs or change focus to the designer.
+ > Otherwise, Visual Studio Code won't keep your changes. For more information, review the
+ > [Logic Apps Public Preview Known Issues page in GitHub](https://github.com/Azure/logicapps/blob/master/articles/logic-apps-public-preview-known-issues.md).
+
+1. On the designer, select **Save**.
+
+> [!IMPORTANT]
+> To locally run a workflow that uses a webhook-based trigger or actions, such as the
+> [built-in HTTP Webhook trigger or action](../connectors/connectors-native-webhook.md),
+> you must enable this capability by [setting up forwarding for the webhook's callback URL](#webhook-setup).
+
+<a name="webhook-setup"></a>
+
+## Enable locally running webhooks
+
+When you use a webhook-based trigger or action, such as **HTTP Webhook**, with a logic app running in Azure, the Logic Apps runtime subscribes to the service endpoint by generating and registering a callback URL with that endpoint. The trigger or action then waits for the service endpoint to call the URL. However, when you're working in Visual Studio Code, the generated callback URL starts with `http://localhost:7071/...`. This URL is for your localhost server, which is private, so the service endpoint can't call this URL.
+
+To locally run webhook-based triggers and actions in Visual Studio Code, you need to set up a public URL that exposes your localhost server and securely forwards calls from the service endpoint to the webhook callback URL. You can use a forwarding service and tool such as [**ngrok**](https://ngrok.com/), which opens an HTTP tunnel to your localhost port, or you can use your own tool.
+
+#### Set up call forwarding using **ngrok**
+
+1. [Sign up for an **ngrok** account](https://dashboard.ngrok.com/signup) if you don't have one. Otherwise, [sign in to your account](https://dashboard.ngrok.com/login).
+
+1. Get your personal authentication token, which your **ngrok** client needs to connect and authenticate access to your account.
+
+ 1. To find your [authentication token page](https://dashboard.ngrok.com/auth/your-authtoken), on your account dashboard menu, expand **Authentication**, and select **Your Authtoken**.
+
+ 1. From the **Your Authtoken** box, copy the token to a safe location.
+
+1. From the [**ngrok** download page](https://ngrok.com/download) or [your account dashboard](https://dashboard.ngrok.com/get-started/setup), download the **ngrok** version that you want, and extract the .zip file. For more information, see [Step 1: Unzip to install](https://ngrok.com/download).
+
+1. On your computer, open your command prompt tool. Browse to the location where you have the **ngrok.exe** file.
+
+1. Connect the **ngrok** client to your **ngrok** account by running the following command. For more information, see [Step 2: Connect your account](https://ngrok.com/download).
+
+ `ngrok authtoken <your_auth_token>`
+
+1. Open the HTTP tunnel to localhost port 7071 by running the following command. For more information, see [Step 3: Fire it up](https://ngrok.com/download).
+
+ `ngrok http 7071`
+
+1. From the output, find the following line:
+
+ `http://<domain>.ngrok.io -> http://localhost:7071`
+
+1. Copy and save the URL that has this format: `http://<domain>.ngrok.io`
+
+#### Set up the forwarding URL in your app settings
+
+1. In Visual Studio Code, on the designer, add the **HTTP + Webhook** trigger or action.
+
+1. When the prompt appears for the host endpoint location, enter the forwarding (redirection) URL that you previously created.
+
+ > [!NOTE]
+ > Ignoring the prompt causes a warning to appear that you must provide the forwarding URL,
+ > so select **Configure**, and enter the URL. After you finish this step, the prompt won't
+ > reappear for subsequent webhook triggers or actions that you might add.
+ >
+ > To make the prompt reappear, at your project's root level, open the **local.settings.json**
+ > file's shortcut menu, and select **Configure Webhook Redirect Endpoint**. The prompt now
+ > appears so you can provide the forwarding URL.
+
+ Visual Studio Code adds the forwarding URL to the **local.settings.json** file in your project's root folder. In the `Values` object, the property named `Workflows.WebhookRedirectHostUri` now appears and is set to the forwarding URL, for example:
+
+ ```json
+ {
+ "IsEncrypted": false,
+ "Values": {
+ "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+ "FUNCTIONS_WORKER_RUNTIME": "node",
+ "FUNCTIONS_V2_COMPATIBILITY_MODE": "true",
+ <...>
+ "Workflows.WebhookRedirectHostUri": "http://xxxXXXXxxxXXX.ngrok.io",
+ <...>
+ }
+ }
+ ```
+
+The first time that you start a local debugging session or run the workflow without debugging, the Logic Apps runtime registers the workflow with the service endpoint and subscribes to that endpoint for notifications about webhook operations. The next time that your workflow runs, the runtime won't register or resubscribe because the subscription registration already exists in local storage.
+
+When you stop the debugging session for a workflow run that uses locally run webhook-based triggers or actions, the existing subscription registrations aren't deleted. To unregister, you have to manually remove or delete the subscription registrations.
+
+> [!NOTE]
+> After your workflow starts running, the terminal window might show errors like this example:
+>
+> `message='Http request failed with unhandled exception of type 'InvalidOperationException' and message: 'System.InvalidOperationException: Synchronous operations are disallowed. Call ReadAsync or set AllowSynchronousIO to true instead.`
+>
+> In this case, open the **local.settings.json** file in your project's root folder, and make sure that the property is set to `true`:
+>
+> `"FUNCTIONS_V2_COMPATIBILITY_MODE": "true"`
+
+<a name="manage-breakpoints"></a>
+
+## Manage breakpoints for debugging
+
+Before you run and test your logic app workflow by starting a debugging session, you can set [breakpoints](https://code.visualstudio.com/docs/editor/debugging#_breakpoints) inside the **workflow.json** file for each workflow. No other setup is required.
+
+At this time, breakpoints are supported only for actions, not triggers. Each action definition has these breakpoint locations:
+
+* Set the starting breakpoint on the line that shows the action's name. When this breakpoint hits during the debugging session, you can review the action's inputs before they're evaluated.
+
+* Set the ending breakpoint on the line that shows the action's closing curly brace (**}**). When this breakpoint hits during the debugging session, you can review the action's results before the action finishes running.
+
+To add a breakpoint, follow these steps:
+
+1. Open the **workflow.json** file for the workflow that you want to debug.
+
+1. On the line where you want to set the breakpoint, in the left column, select inside that column. To remove the breakpoint, select that breakpoint.
+
+ When you start your debugging session, the Run view appears on the left side of the code window, while the Debug toolbar appears near the top.
+
+ > [!NOTE]
+ > If the Run view doesn't automatically appear, press Ctrl+Shift+D.
+
+1. To review the available information when a breakpoint hits, in the Run view, examine the **Variables** pane.
+
+1. To continue workflow execution, on the Debug toolbar, select **Continue** (play button).
+
+You can add and remove breakpoints at any time during the workflow run. However, if you update the **workflow.json** file after the run starts, breakpoints don't automatically update. To update the breakpoints, restart the logic app.
+
+For general information, see [Breakpoints - Visual Studio Code](https://code.visualstudio.com/docs/editor/debugging#_breakpoints).
+
+<a name="run-test-debug-locally"></a>
+
+## Run, test, and debug locally
+
+To test your logic app, follow these steps to start a debugging session, and find the URL for the endpoint that's created by the Request trigger. You need this URL so that you can later send a request to that endpoint.
+
+1. To debug a stateless workflow more easily, you can [enable the run history for that workflow](#enable-run-history-stateless).
+
+1. On the Visual Studio Code Activity Bar, open the **Run** menu, and select **Start Debugging** (F5).
+
+ The **Terminal** window opens so that you can review the debugging session.
+
+ > [!NOTE]
+ > If you get the error, **"Error exists after running preLaunchTask 'generateDebugSymbols'"**,
+ > see the troubleshooting section, [Debugging session fails to start](#debugging-fails-to-start).
+
+1. Now, find the callback URL for the endpoint on the Request trigger.
+
+ 1. Reopen the Explorer pane so that you can view your project.
+
+ 1. From the **workflow.json** file's shortcut menu, select **Overview**.
+
+ ![Screenshot that shows the Explorer pane and shortcut window for the workflow.json file with "Overview" selected.](./media/create-single-tenant-workflows-visual-studio-code/open-workflow-overview.png)
+
+ 1. Find the **Callback URL** value, which looks similar to this URL for the example Request trigger:
+
+ `http://localhost:7071/api/<workflow-name>/triggers/manual/invoke?api-version=2020-05-01-preview&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<shared-access-signature>`
+
+ ![Screenshot that shows your workflow's overview page with callback URL](./media/create-single-tenant-workflows-visual-studio-code/find-callback-url.png)
+
+1. To test the callback URL by triggering the logic app workflow, open [Postman](https://www.postman.com/downloads/) or your preferred tool for creating and sending requests.
+
+ This example continues by using Postman. For more information, see [Postman Getting Started](https://learning.postman.com/docs/getting-started/introduction/).
+
+ 1. On the Postman toolbar, select **New**.
+
+ ![Screenshot that shows Postman with New button selected](./media/create-single-tenant-workflows-visual-studio-code/postman-create-request.png)
+
+ 1. On the **Create New** pane, under **Building Blocks**, select **Request**.
+
+ 1. In the **Save Request** window, under **Request name**, provide a name for the request, for example, `Test workflow trigger`.
+
+ 1. Under **Select a collection or folder to save to**, select **Create Collection**.
+
+ 1. Under **All Collections**, provide a name for the collection to create for organizing your requests, press Enter, and select **Save to <*collection-name*>**. This example uses `Logic Apps requests` as the collection name.
+
+ Postman's request pane opens so that you can send a request to the callback URL for the Request trigger.
+
+ ![Screenshot that shows Postman with the opened request pane](./media/create-single-tenant-workflows-visual-studio-code/postman-request-pane.png)
+
+ 1. Return to Visual Studio Code. From the workflow's overview page, copy the **Callback URL** property value.
+
+ 1. Return to Postman. On the request pane, in the address box next to the method list, which currently shows **GET** as the default request method, paste the callback URL that you previously copied, and select **Send**.
+
+ ![Screenshot that shows Postman and callback URL in the address box with Send button selected](./media/create-single-tenant-workflows-visual-studio-code/postman-test-call-back-url.png)
+
+ The example logic app workflow sends an email that appears similar to this example:
+
+ ![Screenshot that shows Outlook email as described in the example](./media/create-single-tenant-workflows-visual-studio-code/workflow-app-result-email.png)
+
+1. In Visual Studio Code, return to your workflow's overview page.
+
+ If you created a stateful workflow, after the request that you sent triggers the workflow, the overview page shows the workflow's run status and history.
+
+ > [!TIP]
+ > If the run status doesn't appear, try refreshing the overview page by selecting **Refresh**.
+ > No run happens for a trigger that's skipped due to unmet criteria or finding no data.
+
+ ![Screenshot that shows the workflow's overview page with run status and history](./media/create-single-tenant-workflows-visual-studio-code/post-trigger-call.png)
+
+ | Run status | Description |
+ ||-|
+ | **Aborted** | The run stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. |
+ | **Cancelled** | The run was triggered and started but received a cancellation request. |
+ | **Failed** | At least one action in the run failed. No subsequent actions in the workflow were set up to handle the failure. |
+ | **Running** | The run was triggered and is in progress, but this status can also appear for a run that is throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Tip**: If you set up [diagnostics logging](monitor-logic-apps-log-analytics.md), you can get information about any throttle events that happen. |
+ | **Succeeded** | The run succeeded. If any action failed, a subsequent action in the workflow handled that failure. |
+ | **Timed out** | The run timed out because the current duration exceeded the run duration limit, which is controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits). A run's duration is calculated by using the run's start time and run duration limit at that start time. <p><p>**Note**: If the run's duration also exceeds the current *run history retention limit*, which is also controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits), the run is cleared from the runs history by a daily cleanup job. Whether the run times out or completes, the retention period is always calculated by using the run's start time and *current* retention limit. So, if you reduce the duration limit for an in-flight run, the run times out. However, the run either stays or is cleared from the runs history based on whether the run's duration exceeded the retention limit. |
+ | **Waiting** | The run hasn't started or is paused, for example, due to an earlier workflow instance that's still running. |
+ |||
+
+1. To review the statuses for each step in a specific run and the step's inputs and outputs, select the ellipses (**...**) button for that run, and select **Show Run**.
+
+ ![Screenshot that shows your workflow's run history row with ellipses button and "Show Run" selected](./media/create-single-tenant-workflows-visual-studio-code/show-run-history.png)
+
+ Visual Studio Code opens the monitoring view and shows the status for each step in the run.
+
+ ![Screenshot that shows each step in the workflow run and their status](./media/create-single-tenant-workflows-visual-studio-code/run-history-action-status.png)
+
+ > [!NOTE]
+ > If a run failed and a step in monitoring view shows the `400 Bad Request` error, this problem might result
+ > from a longer trigger name or action name that causes the underlying Uniform Resource Identifier (URI) to exceed
+ > the default character limit. For more information, see ["400 Bad Request"](#400-bad-request).
+
+ Here are the possible statuses that each step in the workflow can have:
+
+ | Action status | Icon | Description |
+ |||-|
+ | **Aborted** | ![Icon for "Aborted" action status][aborted-icon] | The action stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. |
+ | **Cancelled** | ![Icon for "Cancelled" action status][cancelled-icon] | The action was running but received a request to cancel. |
+ | **Failed** | ![Icon for "Failed" action status][failed-icon] | The action failed. |
+ | **Running** | ![Icon for "Running" action status][running-icon] | The action is currently running. |
+ | **Skipped** | ![Icon for "Skipped" action status][skipped-icon] | The action was skipped because the immediately preceding action failed. An action has a `runAfter` condition that requires that the preceding action finishes successfully before the current action can run. |
+ | **Succeeded** | ![Icon for "Succeeded" action status][succeeded-icon] | The action succeeded. |
+ | **Succeeded with retries** | ![Icon for "Succeeded with retries" action status][succeeded-with-retries-icon] | The action succeeded but only after one or more retries. To review the retry history, in the run history details view, select that action so that you can view the inputs and outputs. |
+ | **Timed out** | ![Icon for "Timed out" action status][timed-out-icon] | The action stopped due to the timeout limit specified by that action's settings. |
+ | **Waiting** | ![Icon for "Waiting" action status][waiting-icon] | Applies to a webhook action that's waiting for an inbound request from a caller. |
+ ||||
+
+ [aborted-icon]: ./media/create-single-tenant-workflows-visual-studio-code/aborted.png
+ [cancelled-icon]: ./media/create-single-tenant-workflows-visual-studio-code/cancelled.png
+ [failed-icon]: ./media/create-single-tenant-workflows-visual-studio-code/failed.png
+ [running-icon]: ./media/create-single-tenant-workflows-visual-studio-code/running.png
+ [skipped-icon]: ./media/create-single-tenant-workflows-visual-studio-code/skipped.png
+ [succeeded-icon]: ./media/create-single-tenant-workflows-visual-studio-code/succeeded.png
+ [succeeded-with-retries-icon]: ./media/create-single-tenant-workflows-visual-studio-code/succeeded-with-retries.png
+ [timed-out-icon]: ./media/create-single-tenant-workflows-visual-studio-code/timed-out.png
+ [waiting-icon]: ./media/create-single-tenant-workflows-visual-studio-code/waiting.png
+
+1. To review the inputs and outputs for each step, select the step that you want to inspect.
+
+ ![Screenshot that shows the status for each step in the workflow plus the inputs and outputs in the expanded "Send an email" action](./media/create-single-tenant-workflows-visual-studio-code/run-history-details.png)
+
+1. To further review the raw inputs and outputs for that step, select **Show raw inputs** or **Show raw outputs**.
+
+1. To stop the debugging session, on the **Run** menu, select **Stop Debugging** (Shift + F5).
+
+<a name="return-response"></a>
+
+## Return a response
+
+To return a response to the caller that sent a request to your logic app, you can use the built-in [Response action](../connectors/connectors-native-reqres.md) for a workflow that starts with the Request trigger.
+
+1. On the workflow designer, under the **Send an email** action, select **New step**.
+
+   The **Choose an operation** prompt appears on the designer, and the **Add an action** pane reopens so that you can select the next action.
+
+1. On the **Add an action** pane, under the **Choose an action** search box, make sure that **Built-in** is selected. In the search box, enter `response`, and select the **Response** action.
+
+ ![Screenshot that shows the workflow designer with the Response action selected.](./media/create-single-tenant-workflows-visual-studio-code/add-response-action.png)
+
+ When the **Response** action appears on the designer, the action's details pane automatically opens.
+
+ ![Screenshot that shows the workflow designer with the "Response" action's details pane open and the "Body" property set to the "Send an email" action's "Body" property value.](./media/create-single-tenant-workflows-visual-studio-code/response-action-details.png)
+
+1. On the **Parameters** tab, provide the information that you want to return in the response to the caller.
+
+ This example returns the **Body** property value that's output from the **Send an email** action.
+
+ 1. Click inside the **Body** property box so that the dynamic content list appears and shows the available output values from the preceding trigger and actions in the workflow.
+
+ ![Screenshot that shows the "Response" action's details pane with the mouse pointer inside the "Body" property so that the dynamic content list appears.](./media/create-single-tenant-workflows-visual-studio-code/open-dynamic-content-list.png)
+
+ 1. In the dynamic content list, under **Send an email**, select **Body**.
+
+ ![Screenshot that shows the open dynamic content list. In the list, under the "Send an email" header, the "Body" output value is selected.](./media/create-single-tenant-workflows-visual-studio-code/select-send-email-action-body-output-value.png)
+
+ When you're done, the Response action's **Body** property is now set to the **Send an email** action's **Body** output value.
+
+ ![Screenshot that shows the status for each step in the workflow plus the inputs and outputs in the expanded "Response" action.](./media/create-single-tenant-workflows-visual-studio-code/response-action-details-body-property.png)
+
+1. On the designer, select **Save**.
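+
+If you switch to code view or open the project's **workflow.json** file, the Response action appears in the workflow definition roughly like the following sketch. This fragment is only an illustration: the action names (`Response`, `Send_an_email`) depend on how your actions are named, and the designer might generate a more specific expression for the **Body** token than the whole-body reference shown here.
+
+```json
+"Response": {
+    "type": "Response",
+    "inputs": {
+        "statusCode": 200,
+        "body": "@body('Send_an_email')"
+    },
+    "runAfter": {
+        "Send_an_email": [
+            "Succeeded"
+        ]
+    }
+}
+```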
+
+<a name="retest-workflow"></a>
+
+## Retest your logic app
+
+After you make updates to your logic app, you can run another test by rerunning the debugger in Visual Studio Code and sending another request to trigger your updated logic app, similar to the steps in [Run, test, and debug locally](#run-test-debug-locally).
+
+1. On the Visual Studio Code Activity Bar, open the **Run** menu, and select **Start Debugging** (F5).
+
+1. In Postman or your tool for creating and sending requests, send another request to trigger your workflow.
+
+1. If you created a stateful workflow, on the workflow's overview page, check the status for the most recent run. To view the status, inputs, and outputs for each step in that run, select the ellipses (**...**) button for that run, and select **Show Run**.
+
+ For example, here's the step-by-step status for a run after the sample workflow was updated with the Response action.
+
+ ![Screenshot that shows the status for each step in the updated workflow plus the inputs and outputs in the expanded "Response" action.](./media/create-single-tenant-workflows-visual-studio-code/run-history-details-rerun.png)
+
+1. To stop the debugging session, on the **Run** menu, select **Stop Debugging** (Shift + F5).
+
+<a name="firewall-setup"></a>
+
+## Find domain names for firewall access
+
+Before you deploy and run your logic app workflow in the Azure portal, if your environment has strict network requirements or firewalls that limit traffic, you have to set up permissions for any trigger or action connections that exist in your workflow.
+
+To find the fully qualified domain names (FQDNs) for these connections, follow these steps:
+
+1. In your logic app project, open the **connections.json** file, which is created after you add the first connection-based trigger or action to your workflow, and find the `managedApiConnections` object.
+
+1. For each connection that you created, find, copy, and save the `connectionRuntimeUrl` property value somewhere safe so that you can set up your firewall with this information.
+
+ This example **connections.json** file contains two connections, an AS2 connection and an Office 365 connection with these `connectionRuntimeUrl` values:
+
+   * AS2: `"connectionRuntimeUrl": "https://9d51d1ffc9f77572.00.common.logic-{Azure-region}.azure-apihub.net/apim/as2/11d3fec26c87435a80737460c85f42ba"`
+
+   * Office 365: `"connectionRuntimeUrl": "https://9d51d1ffc9f77572.00.common.logic-{Azure-region}.azure-apihub.net/apim/office365/668073340efe481192096ac27e7d467f"`
+
+ ```json
+ {
+ "managedApiConnections": {
+ "as2": {
+ "api": {
+ "id": "/subscriptions/{Azure-subscription-ID}/providers/Microsoft.Web/locations/{Azure-region}/managedApis/as2"
+ },
+ "connection": {
+ "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Web/connections/{connection-resource-name}"
+ },
+          "connectionRuntimeUrl": "https://9d51d1ffc9f77572.00.common.logic-{Azure-region}.azure-apihub.net/apim/as2/11d3fec26c87435a80737460c85f42ba",
+ "authentication": {
+ "type":"ManagedServiceIdentity"
+ }
+ },
+ "office365": {
+ "api": {
+ "id": "/subscriptions/{Azure-subscription-ID}/providers/Microsoft.Web/locations/{Azure-region}/managedApis/office365"
+ },
+ "connection": {
+ "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Web/connections/{connection-resource-name}"
+ },
+          "connectionRuntimeUrl": "https://9d51d1ffc9f77572.00.common.logic-{Azure-region}.azure-apihub.net/apim/office365/668073340efe481192096ac27e7d467f",
+ "authentication": {
+ "type":"ManagedServiceIdentity"
+ }
+ }
+ }
+ }
+ ```
+
+<a name="deploy-azure"></a>
+
+## Deploy to Azure
+
+From Visual Studio Code, you can directly publish your project to Azure, which deploys your logic app using the new **Logic App (Preview)** resource type. Similar to the function app resource in Azure Functions, deployment for this new resource type requires that you select a [hosting plan and pricing tier](../app-service/overview-hosting-plans.md), which you can set up during deployment. For more information about hosting plans and pricing, review these topics:
+
+* [Scale up an app in Azure App Service](../app-service/manage-scale-up.md)
+* [Azure Functions scale and hosting](../azure-functions/functions-scale.md)
+
+You can publish your logic app as a new resource, which automatically creates any necessary resources, such as an [Azure Storage account, similar to function app requirements](../azure-functions/storage-considerations.md). Or, you can publish your logic app to a previously deployed **Logic App (Preview)** resource, which overwrites that logic app.
+
+### Publish to a new Logic App (Preview) resource
+
+1. On the Visual Studio Code Activity Bar, select the Azure icon.
+
+1. On the **Azure: Logic Apps (Preview)** pane toolbar, select **Deploy to Logic App**.
+
+ ![Screenshot that shows the "Azure: Logic Apps (Preview)" pane and pane's toolbar with "Deploy to Logic App" selected.](./media/create-single-tenant-workflows-visual-studio-code/deploy-to-logic-app.png)
+
+1. If prompted, select the Azure subscription to use for your logic app deployment.
+
+1. From the list that Visual Studio Code opens, select from these options:
+
+ * **Create new Logic App (Preview) in Azure** (quick)
+ * **Create new Logic App (Preview) in Azure Advanced**
+ * A previously deployed **Logic App (Preview)** resource, if any exist
+
+ This example continues with **Create new Logic App (Preview) in Azure Advanced**.
+
+ ![Screenshot that shows the "Azure: Logic Apps (Preview)" pane with a list with "Create new Logic App (Preview) in Azure" selected.](./media/create-single-tenant-workflows-visual-studio-code/select-create-logic-app-options.png)
+
+1. To create your new **Logic App (Preview)** resource, follow these steps:
+
+ 1. Provide a globally unique name for your new logic app, which is the name to use for the **Logic App (Preview)** resource. This example uses `Fabrikam-Workflows-App`.
+
+ ![Screenshot that shows the "Azure: Logic Apps (Preview)" pane and a prompt to provide a name for the new logic app to create.](./media/create-single-tenant-workflows-visual-studio-code/enter-logic-app-name.png)
+
+ 1. Select a [hosting plan](../app-service/overview-hosting-plans.md) for your new logic app, either [**App Service Plan** (Dedicated)](../azure-functions/dedicated-plan.md) or [**Premium**](../azure-functions/functions-premium-plan.md).
+
+ > [!IMPORTANT]
+      > Consumption plans aren't supported or available for this resource type. Your selected plan affects the
+ > capabilities and pricing tiers that are later available to you. For more information, review these topics:
+ >
+ > * [Azure Functions scale and hosting](../azure-functions/functions-scale.md)
+ > * [App Service pricing details](https://azure.microsoft.com/pricing/details/app-service/)
+ >
+      > For example, the Premium plan provides access to networking capabilities, such as connecting and integrating
+      > privately with Azure virtual networks, similar to Azure Functions, when you create and deploy your logic apps.
+ > For more information, review these topics:
+ >
+ > * [Azure Functions networking options](../azure-functions/functions-networking-options.md)
+ > * [Azure Logic Apps Running Anywhere - Networking possibilities with Azure Logic Apps Preview](https://techcommunity.microsoft.com/t5/integrations-on-azure/logic-apps-anywhere-networking-possibilities-with-logic-app/ba-p/2105047)
+
+ This example uses the **App Service Plan**.
+
+ ![Screenshot that shows the "Azure: Logic Apps (Preview)" pane and a prompt to select "App Service Plan" or "Premium".](./media/create-single-tenant-workflows-visual-studio-code/select-hosting-plan.png)
+
+ 1. Create a new App Service plan or select an existing plan. This example selects **Create new App Service Plan**.
+
+ ![Screenshot that shows the "Azure: Logic Apps (Preview)" pane and a prompt to "Create new App Service Plan" or select an existing App Service plan.](./media/create-single-tenant-workflows-visual-studio-code/create-app-service-plan.png)
+
+ 1. Provide a name for your App Service plan, and then select a [pricing tier](../app-service/overview-hosting-plans.md) for the plan. This example selects the **F1 Free** plan.
+
+ ![Screenshot that shows the "Azure: Logic Apps (Preview)" pane and a prompt to select a pricing tier.](./media/create-single-tenant-workflows-visual-studio-code/select-pricing-tier.png)
+
+ 1. For optimal performance, find and select the same resource group as your project for the deployment.
+
+ > [!NOTE]
+ > Although you can create or use a different resource group, doing so might affect performance.
+ > If you create or choose a different resource group, but cancel after the confirmation prompt appears,
+ > your deployment is also canceled.
+
+ 1. For stateful workflows, select **Create new storage account** or an existing storage account.
+
+ ![Screenshot that shows the "Azure: Logic Apps (Preview)" pane and a prompt to create or select a storage account.](./media/create-single-tenant-workflows-visual-studio-code/create-storage-account.png)
+
+ 1. If your logic app's creation and deployment settings support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app. You can do so either when you deploy your logic app from Visual Studio Code or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you deploy your logic app, or after deployment.
+
+ To enable logging and tracing now, follow these steps:
+
+ 1. Select either an existing Application Insights resource or **Create new Application Insights resource**.
+
+ 1. In the [Azure portal](https://portal.azure.com), go to your Application Insights resource.
+
+ 1. On the resource menu, select **Overview**. Find and copy the **Instrumentation Key** value.
+
+ 1. In Visual Studio Code, in your project's root folder, open the **local.settings.json** file.
+
+ 1. In the `Values` object, add the `APPINSIGHTS_INSTRUMENTATIONKEY` property, and set the value to the instrumentation key, for example:
+
+ ```json
+ {
+ "IsEncrypted": false,
+ "Values": {
+ "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+ "FUNCTIONS_WORKER_RUNTIME": "dotnet",
+          "APPINSIGHTS_INSTRUMENTATIONKEY": "<instrumentation-key>"
+ }
+ }
+ ```
+
+ > [!TIP]
+ > You can check whether the trigger and action names correctly appear in your Application Insights instance.
+ >
+ > 1. In the Azure portal, go to your Application Insights resource.
+ >
+      > 2. On the resource menu, under **Investigate**, select **Application map**.
+ >
+ > 3. Review the operation names that appear in the map.
+ >
+ > Some inbound requests from built-in triggers might appear as duplicates in the Application Map.
+ > Rather than use the `WorkflowName.ActionName` format, these duplicates use the workflow name as
+ > the operation name and originate from the Azure Functions host.
+
+ 1. Next, you can optionally adjust the severity level for the tracing data that your logic app collects and sends to your Application Insights instance.
+
+ Each time that a workflow-related event happens, such as when a workflow is triggered or when an action runs, the runtime emits various traces. These traces cover the workflow's lifetime and include, but aren't limited to, the following event types:
+
+ * Service activity, such as start, stop, and errors.
+ * Jobs and dispatcher activity.
+ * Workflow activity, such as trigger, action, and run.
+ * Storage request activity, such as success or failure.
+ * HTTP request activity, such as inbound, outbound, success, and failure.
+ * Any development traces, such as debug messages.
+
+ Each event type is assigned to a severity level. For example, the `Trace` level captures the most detailed messages, while the `Information` level captures general activity in your workflow, such as when your logic app, workflow, trigger, and actions start and stop. This table describes the severity levels and their trace types:
+
+ | Severity level | Trace type |
+ |-||
+ | Critical | Logs that describe an unrecoverable failure in your logic app. |
+ | Debug | Logs that you can use for investigation during development, for example, inbound and outbound HTTP calls. |
+ | Error | Logs that indicate a failure in workflow execution, but not a general failure in your logic app. |
+ | Information | Logs that track the general activity in your logic app or workflow, for example: <p><p>- When a trigger, action, or run starts and ends. <br>- When your logic app starts or ends. |
+ | Trace | Logs that contain the most detailed messages, for example, storage requests or dispatcher activity, plus all the messages that are related to workflow execution activity. |
+        | Warning | Logs that highlight an abnormal state in your logic app but don't prevent it from running. |
+ |||
+
+ To set the severity level, at your project's root level, open the **host.json** file, and find the `logging` object. This object controls the log filtering for all the workflows in your logic app and follows the [ASP.NET Core layout for log type filtering](/aspnet/core/fundamentals/logging/?view=aspnetcore-2.1&preserve-view=true#log-filtering).
+
+ ```json
+ {
+ "version": "2.0",
+ "logging": {
+ "applicationInsights": {
+ "samplingExcludedTypes": "Request",
+ "samplingSettings": {
+ "isEnabled": true
+ }
+ }
+ }
+ }
+ ```
+
+ If the `logging` object doesn't contain a `logLevel` object that includes the `Host.Triggers.Workflow` property, add those items. Set the property to the severity level for the trace type that you want, for example:
+
+ ```json
+ {
+ "version": "2.0",
+ "logging": {
+ "applicationInsights": {
+ "samplingExcludedTypes": "Request",
+ "samplingSettings": {
+ "isEnabled": true
+ }
+ },
+ "logLevel": {
+ "Host.Triggers.Workflow": "Information"
+ }
+ }
+ }
+ ```
+
+ When you're done with the deployment steps, Visual Studio Code starts creating and deploying the resources necessary for publishing your logic app.
+
+1. To review and monitor the deployment process, on the **View** menu, select **Output**. From the Output window toolbar list, select **Azure Logic Apps**.
+
+ ![Screenshot that shows the Output window with the "Azure Logic Apps" selected in the toolbar list along with the deployment progress and statuses.](./media/create-single-tenant-workflows-visual-studio-code/logic-app-deployment-output-window.png)
+
+ When Visual Studio Code finishes deploying your logic app to Azure, the following message appears:
+
+ ![Screenshot that shows a message that deployment to Azure successfully completed.](./media/create-single-tenant-workflows-visual-studio-code/deployment-to-azure-completed.png)
+
+ Congratulations, your logic app is now live in Azure and enabled by default.
+
+Next, you can learn how to perform these tasks:
+
+* [Add a blank workflow to your project](#add-workflow-existing-project).
+
+* [Manage deployed logic apps in Visual Studio Code](#manage-deployed-apps-vs-code) or by using the [Azure portal](#manage-deployed-apps-portal).
+
+* [Enable run history on stateless workflows](#enable-run-history-stateless).
+
+* [Enable monitoring view in the Azure portal for a deployed logic app](#enable-monitoring).
+
+<a name="add-workflow-existing-project"></a>
+
+## Add blank workflow to project
+
+You can have multiple workflows in your logic app project. To add a blank workflow to your project, follow these steps:
+
+1. On the Visual Studio Code Activity Bar, select the Azure icon.
+
+1. In the Azure pane, next to **Azure: Logic Apps (Preview)**, select **Create Workflow** (icon for Azure Logic Apps).
+
+1. Select the workflow type that you want to add: **Stateful** or **Stateless**.
+
+1. Provide a name for your workflow.
+
+When you're done, a new workflow folder appears in your project along with a **workflow.json** file for the workflow definition.
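+
+For reference, a newly added stateful workflow's **workflow.json** file typically starts out as a minimal skeleton similar to the following sketch. This is only an illustration; the exact schema version and the `kind` value (**Stateful** or **Stateless**) depend on your selections.
+
+```json
+{
+    "definition": {
+        "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
+        "actions": {},
+        "contentVersion": "1.0.0.0",
+        "outputs": {},
+        "triggers": {}
+    },
+    "kind": "Stateful"
+}
+```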
+
+<a name="manage-deployed-apps-vs-code"></a>
+
+## Manage deployed logic apps in Visual Studio Code
+
+In Visual Studio Code, you can view all the deployed logic apps in your Azure subscription, whether they are the original **Logic Apps** or the **Logic App (Preview)** resource type, and select tasks that help you manage those logic apps. However, to access both resource types, you need both the **Azure Logic Apps** and the **Azure Logic Apps (Preview)** extensions for Visual Studio Code.
+
+1. On the left toolbar, select the Azure icon. In the **Azure: Logic Apps (Preview)** pane, expand your subscription, which shows all the deployed logic apps for that subscription.
+
+1. Open the logic app that you want to manage. From the logic app's shortcut menu, select the task that you want to perform.
+
+ For example, you can select tasks such as stopping, starting, restarting, or deleting your deployed logic app. You can [disable or enable a workflow by using the Azure portal](create-single-tenant-workflows-azure-portal.md#disable-enable-workflows).
+
+ > [!NOTE]
+ > The stop logic app and delete logic app operations affect workflow instances in different ways.
+ > For more information, review [Considerations for stopping logic apps](#considerations-stop-logic-apps) and
+ > [Considerations for deleting logic apps](#considerations-delete-logic-apps).
+
+ ![Screenshot that shows Visual Studio Code with the opened "Azure Logic Apps (Preview)" extension pane and the deployed workflow.](./media/create-single-tenant-workflows-visual-studio-code/find-deployed-workflow-visual-studio-code.png)
+
+1. To view all the workflows in the logic app, expand your logic app, and then expand the **Workflows** node.
+
+1. To view a specific workflow, open the workflow's shortcut menu, and select **Open in Designer**, which opens the workflow in read-only mode.
+
+ To edit the workflow, you have these options:
+
+ * In Visual Studio Code, open your project's **workflow.json** file in the workflow designer, make your edits, and redeploy your logic app to Azure.
+
+ * In the Azure portal, [find and open your logic app](#manage-deployed-apps-portal). Find, edit, and save the workflow.
+
+1. To open the deployed logic app in the Azure portal, open the logic app's shortcut menu, and select **Open in Portal**.
+
+ The Azure portal opens in your browser, signs you in to the portal automatically if you're signed in to Visual Studio Code, and shows your logic app.
+
+ ![Screenshot that shows the Azure portal page for your logic app in Visual Studio Code.](./media/create-single-tenant-workflows-visual-studio-code/deployed-workflow-azure-portal.png)
+
+ You can also sign in separately to the Azure portal, use the portal search box to find your logic app, and then select your logic app from the results list.
+
+ ![Screenshot that shows the Azure portal and the search bar with search results for deployed logic app, which appears selected.](./media/create-single-tenant-workflows-visual-studio-code/find-deployed-workflow-azure-portal.png)
+
+<a name="considerations-stop-logic-apps"></a>
+
+### Considerations for stopping logic apps
+
+Stopping a logic app affects workflow instances in the following ways:
+
+* The Logic Apps service cancels all in-progress and pending runs immediately.
+
+* The Logic Apps service doesn't create or run new workflow instances.
+
+* Triggers won't fire the next time that their conditions are met. However, trigger states remember the points where the logic app was stopped. So, if you restart the logic app, the triggers fire for all unprocessed items since the last run.
+
+ To stop a trigger from firing on unprocessed items since the last run, clear the trigger state before you restart the logic app:
+
+ 1. In Visual Studio Code, on the left toolbar, select the Azure icon.
+ 1. In the **Azure: Logic Apps (Preview)** pane, expand your subscription, which shows all the deployed logic apps for that subscription.
+ 1. Expand your logic app, and then expand the **Workflows** node.
+ 1. Open a workflow, and edit any part of that workflow's trigger.
+ 1. Save your changes. This step resets the trigger's current state.
+ 1. Repeat for each workflow.
+ 1. When you're done, restart your logic app.
+
+<a name="considerations-delete-logic-apps"></a>
+
+### Considerations for deleting logic apps
+
+Deleting a logic app affects workflow instances in the following ways:
+
+* The Logic Apps service cancels in-progress and pending runs immediately, but doesn't run cleanup tasks on the storage used by the app.
+
+* The Logic Apps service doesn't create or run new workflow instances.
+
+* If you delete a workflow and then recreate the same workflow, the recreated workflow won't have the same metadata as the deleted workflow. You have to resave any workflow that called the deleted workflow. That way, the caller gets the correct information for the recreated workflow. Otherwise, calls to the recreated workflow fail with an `Unauthorized` error. This behavior also applies to workflows that use artifacts in integration accounts and workflows that call Azure functions.
+
+<a name="manage-deployed-apps-portal"></a>
+
+## Manage deployed logic apps in the portal
+
+After you deploy a logic app to the Azure portal from Visual Studio Code, you can view all the deployed logic apps that are in your Azure subscription, whether they are the original **Logic Apps** resource type or the **Logic App (Preview)** resource type. Currently, each resource type is organized and managed as a separate category in Azure. To find logic apps that have the **Logic App (Preview)** resource type, follow these steps:
+
+1. In the Azure portal search box, enter `logic app preview`. When the results list appears, under **Services**, select **Logic App (Preview)**.
+
+ ![Screenshot that shows the Azure portal search box with the "logic app preview" search text.](./media/create-single-tenant-workflows-visual-studio-code/portal-find-logic-app-preview-resource.png)
+
+1. On the **Logic App (Preview)** pane, find and select the logic app that you deployed from Visual Studio Code.
+
+ ![Screenshot that shows the Azure portal and the Logic App (Preview) resources deployed in Azure.](./media/create-single-tenant-workflows-visual-studio-code/logic-app-preview-resources-pane.png)
+
+ The Azure portal opens the individual resource page for the selected logic app.
+
+ ![Screenshot that shows your logic app workflow's resource page in the Azure portal.](./media/create-single-tenant-workflows-visual-studio-code/deployed-workflow-azure-portal.png)
+
+1. To view the workflows in this logic app, on the logic app's menu, select **Workflows**.
+
+ The **Workflows** pane shows all the workflows in the current logic app. This example shows the workflow that you created in Visual Studio Code.
+
+ ![Screenshot that shows a "Logic App (Preview)" resource page with the "Workflows" pane open and the deployed workflow](./media/create-single-tenant-workflows-visual-studio-code/deployed-logic-app-workflows-pane.png)
+
+1. To view a workflow, on the **Workflows** pane, select that workflow.
+
+ The workflow pane opens and shows more information and tasks that you can perform on that workflow.
+
+ For example, to view the steps in the workflow, select **Designer**.
+
+ ![Screenshot that shows the selected workflow's "Overview" pane, while the workflow menu shows the selected "Designer" command.](./media/create-single-tenant-workflows-visual-studio-code/workflow-overview-pane-select-designer.png)
+
+ The workflow designer opens and shows the workflow that you built in Visual Studio Code. You can now make changes to this workflow in the Azure portal.
+
+ ![Screenshot that shows the workflow designer and workflow deployed from Visual Studio Code.](./media/create-single-tenant-workflows-visual-studio-code/opened-workflow-designer.png)
+
+<a name="add-workflow-portal"></a>
+
+## Add another workflow in the portal
+
+Through the Azure portal, you can add blank workflows to a **Logic App (Preview)** resource that you deployed from Visual Studio Code and build those workflows in the Azure portal.
+
+1. In the [Azure portal](https://portal.azure.com), find and select your deployed **Logic App (Preview)** resource.
+
+1. On the logic app menu, select **Workflows**. On the **Workflows** pane, select **Add**.
+
+ ![Screenshot that shows the selected logic app's "Workflows" pane and toolbar with "Add" command selected.](./media/create-single-tenant-workflows-visual-studio-code/add-new-workflow.png)
+
+1. In the **New workflow** pane, provide a name for the workflow. Select either **Stateful** or **Stateless** **>** **Create**.
+
+ After Azure deploys your new workflow, which appears on the **Workflows** pane, select that workflow so that you can manage and perform other tasks, such as opening the designer or code view.
+
+ ![Screenshot that shows the selected workflow with management and review options.](./media/create-single-tenant-workflows-visual-studio-code/view-new-workflow.png)
+
+ For example, opening the designer for a new workflow shows a blank canvas. You can now build this workflow in the Azure portal.
+
+ ![Screenshot that shows the workflow designer and a blank workflow.](./media/create-single-tenant-workflows-visual-studio-code/opened-blank-workflow-designer.png)
+
+<a name="enable-run-history-stateless"></a>
+
+## Enable run history for stateless workflows
+
+To debug a stateless workflow more easily, you can enable the run history for that workflow, and then disable the run history when you're done. Follow these steps for Visual Studio Code, or if you're working in the Azure portal, see [Create single-tenant based workflows in the Azure portal](create-single-tenant-workflows-azure-portal.md#enable-run-history-stateless).
+
+1. In your Visual Studio Code project, expand the **workflow-designtime** folder, and open the **local.settings.json** file.
+
+1. Add the `Workflows.{yourWorkflowName}.OperationOptions` property and set the value to `WithStatelessRunHistory`, for example:
+
+ **Windows**
+
+ ```json
+ {
+ "IsEncrypted": false,
+ "Values": {
+ "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+ "FUNCTIONS_WORKER_RUNTIME": "dotnet",
+ "Workflows.{yourWorkflowName}.OperationOptions": "WithStatelessRunHistory"
+ }
+ }
+ ```
+
+ **macOS or Linux**
+
+ ```json
+ {
+ "IsEncrypted": false,
+ "Values": {
+ "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=fabrikamstorageacct; \
+ AccountKey=<access-key>;EndpointSuffix=core.windows.net",
+ "FUNCTIONS_WORKER_RUNTIME": "dotnet",
+ "Workflows.{yourWorkflowName}.OperationOptions": "WithStatelessRunHistory"
+ }
+ }
+ ```
+
+1. To disable the run history when you're done, either set the `Workflows.{yourWorkflowName}.OperationOptions` property to `None`, or delete the property and its value.
+
+<a name="enable-monitoring"></a>
+
+## Enable monitoring view in the Azure portal
+
+After you deploy a **Logic App (Preview)** resource from Visual Studio Code to Azure, you can review any available run history and details for a workflow in that resource by using the Azure portal and the **Monitor** experience for that workflow. However, you first have to enable the **Monitor** view capability on that logic app resource.
+
+1. In the [Azure portal](https://portal.azure.com), find and select the deployed **Logic App (Preview)** resource.
+
+1. On that resource's menu, under **API**, select **CORS**.
+
+1. On the **CORS** pane, under **Allowed Origins**, add the wildcard character (*).
+
+1. When you're done, on the **CORS** toolbar, select **Save**.
+
+ ![Screenshot that shows the Azure portal with a deployed Logic App (Preview) resource. On the resource menu, "CORS" is selected with a new entry for "Allowed Origins" set to the wildcard "*" character.](./media/create-single-tenant-workflows-visual-studio-code/enable-run-history-deployed-logic-app.png)
+
+<a name="enable-open-application-insights"></a>
+
+## Enable or open Application Insights after deployment
+
+During workflow execution, your logic app emits telemetry along with other events. You can use this telemetry to get better visibility into how well your workflow runs and how the Logic Apps runtime works in various ways. You can monitor your workflow by using [Application Insights](../azure-monitor/app/app-insights-overview.md), which provides near real-time telemetry (live metrics). This capability can help you investigate failures and performance problems more easily when you use this data to diagnose issues, set up alerts, and build charts.
+
+If your logic app's creation and deployment settings support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app. You can do so either when you deploy your logic app from Visual Studio Code or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you deploy your logic app, or after deployment.
+
+To enable Application Insights on a deployed logic app or to review Application Insights data when already enabled, follow these steps:
+
+1. In the Azure portal, find your deployed logic app.
+
+1. On the logic app menu, under **Settings**, select **Application Insights**.
+
+1. If Application Insights isn't enabled, on the **Application Insights** pane, select **Turn on Application Insights**. After the pane updates, at the bottom, select **Apply**.
+
+ If Application Insights is enabled, on the **Application Insights** pane, select **View Application Insights data**.
+
+After Application Insights opens, you can review various metrics for your logic app. For more information, review these topics:
+
+* [Azure Logic Apps Running Anywhere - Monitor with Application Insights - part 1](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-monitor-with-application/ba-p/1877849)
+* [Azure Logic Apps Running Anywhere - Monitor with Application Insights - part 2](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-monitor-with-application/ba-p/2003332)
+
+<a name="deploy-docker"></a>
+
+## Deploy to Docker
+
+You can deploy your logic app to a [Docker container](/visualstudio/docker/tutorials/docker-tutorial#what-is-a-container) as the hosting environment by using the [.NET CLI](/dotnet/core/tools/). With these commands, you can build and publish your logic app's project. You can then build and run your Docker container as the destination for deploying your logic app.
+
+If you're not familiar with Docker, review these topics:
+
+* [What is Docker?](/dotnet/architecture/microservices/container-docker-introduction/docker-defined)
+* [Introduction to Containers and Docker](/dotnet/architecture/microservices/container-docker-introduction/)
+* [Introduction to .NET and Docker](/dotnet/core/docker/introduction)
+* [Docker containers, images, and registries](/dotnet/architecture/microservices/container-docker-introduction/docker-containers-images-registries)
+* [Tutorial: Get started with Docker (Visual Studio Code)](/visualstudio/docker/tutorials/docker-tutorial)
+
+### Requirements
+
+* The Azure Storage account that your logic app uses for deployment
+
+* A Docker file for the workflow that you use when building your Docker container
+
+ For example, this sample Docker file deploys a logic app and specifies the connection string that contains the access key for the Azure Storage account that was used for publishing the logic app to the Azure portal. To find this string, see [Get storage account connection string](#find-storage-account-connection-string). For more information, review [Best practices for writing Docker files](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/).
+
+ > [!IMPORTANT]
+ > For production scenarios, make sure that you protect and secure such secrets and sensitive information, for example, by using a key vault.
+ > For Docker files specifically, review [Build images with BuildKit](https://docs.docker.com/develop/develop-images/build_enhancements/)
+ > and [Manage sensitive data with Docker Secrets](https://docs.docker.com/engine/swarm/secrets/).
+
+ ```text
+ FROM mcr.microsoft.com/azure-functions/node:3.0
+
+ ENV AzureWebJobsStorage <storage-account-connection-string>
+ ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
+ AzureFunctionsJobHost__Logging__Console__IsEnabled=true \
+ FUNCTIONS_V2_COMPATIBILITY_MODE=true
+
+ COPY . /home/site/wwwroot
+
+ RUN cd /home/site/wwwroot
+ ```
+
+<a name="find-storage-account-connection-string"></a>
+
+### Get storage account connection string
+
+Before you can build and run your Docker container image, you need to get the connection string that contains the access key to your storage account. Earlier, you created this storage account either to use the extension on macOS or Linux, or when you deployed your logic app to the Azure portal.
+
+To find and copy this connection string, follow these steps:
+
+1. In the Azure portal, on the storage account menu, under **Settings**, select **Access keys**.
+
+1. On the **Access keys** pane, find and copy the storage account's connection string, which looks similar to this example:
+
+ `DefaultEndpointsProtocol=https;AccountName=fabrikamstorageacct;AccountKey=<access-key>;EndpointSuffix=core.windows.net`
+
+ ![Screenshot that shows the Azure portal with storage account access keys and connection string copied.](./media/create-single-tenant-workflows-visual-studio-code/find-storage-account-connection-string.png)
+
+ For more information, review [Manage storage account keys](../storage/common/storage-account-keys-manage.md?tabs=azure-portal#view-account-access-keys).
+
+1. Save the connection string somewhere safe so that you can add this string to the Docker file that you use for deployment.
+
+<a name="find-storage-account-master-key"></a>
+
+### Find master key for storage account
+
+When your workflow contains a Request trigger, you need to [get the trigger's callback URL](#get-callback-url-request-trigger) after you build and run your Docker container image. For this task, you also need to specify the master key value for the storage account that you use for deployment.
+
+1. To find this master key, in your project, open the **azure-webjobs-secrets/{deployment-name}/host.json** file.
+
+1. Find the `AzureWebJobsStorage` property, and copy the key value from this section:
+
+ ```json
+ {
+ <...>
+ "masterKey": {
+ "name": "master",
+ "value": "<master-key>",
+ "encrypted": false
+ },
+ <...>
+ }
+ ```
+
+1. Save this key value somewhere safe for you to use later.
+
+<a name="build-run-docker-container-image"></a>
+
+### Build and run your Docker container image
+
+1. Build your Docker container image by using your Docker file and running this command:
+
+ `docker build --tag local/workflowcontainer .`
+
+ For more information, see [docker build](https://docs.docker.com/engine/reference/commandline/build/).
+
+1. Run the container locally by using this command:
+
+ `docker run -e WEBSITE_HOSTNAME=localhost -p 8080:80 local/workflowcontainer`
+
+ For more information, see [docker run](https://docs.docker.com/engine/reference/commandline/run/).
+
+<a name="get-callback-url-request-trigger"></a>
+
+### Get callback URL for Request trigger
+
+For a workflow that uses the Request trigger, get the trigger's callback URL by sending this request:
+
+`POST /runtime/webhooks/workflow/api/management/workflows/{workflow-name}/triggers/{trigger-name}/listCallbackUrl?api-version=2020-05-01-preview&code={master-key}`
+
+The `{trigger-name}` value is the name for the Request trigger that appears in the workflow's JSON definition. The `{master-key}` value is the master key from the **azure-webjobs-secrets/{deployment-name}/host.json** file in the Azure Storage account that you set for the `AzureWebJobsStorage` property. For more information, see [Find storage account master key](#find-storage-account-master-key).
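+
+For example, if you run the container locally with the earlier `docker run` command, which maps port 8080 on your computer to port 80 in the container, you send the request to `http://localhost:8080/runtime/webhooks/workflow/api/management/workflows/{workflow-name}/triggers/{trigger-name}/listCallbackUrl?api-version=2020-05-01-preview&code={master-key}`. A successful response returns the callback URL, typically in a shape similar to this sketch, which is only an illustration; the exact properties and query parameters can differ:
+
+```json
+{
+    "value": "http://localhost:8080/api/{workflow-name}/triggers/{trigger-name}/invoke?api-version=2020-05-01-preview&sp=%2Ftriggers%2F{trigger-name}%2Frun&sv=1.0&sig={signature}",
+    "method": "POST"
+}
+```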
+
+<a name="delete-from-designer"></a>
+
+## Delete items from the designer
+
+To delete an item in your workflow from the designer, follow any of these steps:
+
+* Select the item, open the item's shortcut menu (Shift+F10), and select **Delete**. To confirm, select **OK**.
+
+* Select the item, and press the delete key. To confirm, select **OK**.
+
+* Select the item so that the details pane opens for that item. In the pane's upper right corner, open the ellipses (**...**) menu, and select **Delete**. To confirm, select **OK**.
+
+ ![Screenshot that shows a selected item on designer with the opened details pane plus the selected ellipses button and "Delete" command.](./media/create-single-tenant-workflows-visual-studio-code/delete-item-from-designer.png)
+
+ > [!TIP]
+   > If the ellipses menu isn't visible, expand the Visual Studio Code window wide enough so that
+ > the details pane shows the ellipses (**...**) button in the upper right corner.
+
+<a name="troubleshooting"></a>
+
+## Troubleshoot errors and problems
+
+<a name="designer-fails-to-open"></a>
+
+### Designer fails to open
+
+When you try to open the designer, you get this error, **"Workflow design time could not be started"**. If you previously tried to open the designer, and then discontinued or deleted your project, the extension bundle might not be downloading correctly. To check whether this cause is the problem, follow these steps:
+
+ 1. In Visual Studio Code, open the Output window. From the **View** menu, select **Output**.
+
+ 1. From the list in the Output window's title bar, select **Azure Logic Apps (Preview)** so that you can review output from the extension, for example:
+
+ ![Screenshot that shows the Output window with "Azure Logic Apps" selected.](./media/create-single-tenant-workflows-visual-studio-code/check-outout-window-azure-logic-apps.png)
+
+ 1. Review the output and check whether this error message appears:
+
+ ```text
+ A host error has occurred during startup operation '{operationID}'.
+ System.Private.CoreLib: The file 'C:\Users\{userName}\AppData\Local\Temp\Functions\
+ ExtensionBundles\Microsoft.Azure.Functions.ExtensionBundle.Workflows\1.1.7\bin\
+ DurableTask.AzureStorage.dll' already exists.
+ Value cannot be null. (Parameter 'provider')
+ Application is shutting down...
+ Initialization cancellation requested by runtime.
+ Stopping host...
+ Host shutdown completed.
+ ```
+
+ To resolve this error, delete the **ExtensionBundles** folder at this location **...\Users\{your-username}\AppData\Local\Temp\Functions\ExtensionBundles**, and retry opening the **workflow.json** file in the designer.
+
+<a name="missing-triggers-actions"></a>
+
+### New triggers and actions are missing from the designer picker for previously created workflows
+
+Azure Logic Apps Preview supports built-in actions for Azure Function Operations, Liquid Operations, and XML Operations, such as **XML Validation** and **Transform XML**. However, for previously created logic apps, these actions might not appear in the designer picker for you to select if Visual Studio Code uses an outdated version of the extension bundle, `Microsoft.Azure.Functions.ExtensionBundle.Workflows`.
+
+Also, the **Azure Function Operations** connector and actions don't appear in the designer picker unless you enabled or selected **Use connectors from Azure** when you created your logic app. If you didn't enable the Azure-deployed connectors at app creation time, you can enable them from your project in Visual Studio Code. Open the **workflow.json** shortcut menu, and select **Use Connectors from Azure**.
+
+To fix the outdated bundle, follow these steps to delete the outdated bundle, which makes Visual Studio Code automatically update the extension bundle to the latest version.
+
+> [!NOTE]
+> This solution applies only to logic apps that you create and deploy using Visual Studio Code with
+> the Azure Logic Apps (Preview) extension, not the logic apps that you created using the Azure portal.
+> See [Supported triggers and actions are missing from the designer in the Azure portal](create-single-tenant-workflows-azure-portal.md#missing-triggers-actions).
+
+1. Save any work that you don't want to lose, and close Visual Studio Code.
+
+1. On your computer, browse to the following folder, which contains versioned folders for the existing bundle:
+
+ `...\Users\{your-username}\.azure-functions-core-tools\Functions\ExtensionBundles\Microsoft.Azure.Functions.ExtensionBundle.Workflows`
+
+1. Delete the version folder for the earlier bundle, for example, if you have a folder for version 1.1.3, delete that folder.
+
+1. Now, browse to the following folder, which contains versioned folders for the required NuGet package:
+
+ `...\Users\{your-username}\.nuget\packages\microsoft.azure.workflows.webjobs.extension`
+
+1. Delete the version folder for the earlier package, for example, if you have a folder for version 1.0.0.8-preview, delete that folder.
+
+1. Reopen Visual Studio Code, your project, and the **workflow.json** file in the designer.
+
+The missing triggers and actions now appear in the designer.
+
+<a name="400-bad-request"></a>
+
+### "400 Bad Request" appears on a trigger or action
+
+When a run fails, and you inspect the run in monitoring view, this error might appear on a trigger or action that has a longer name, which causes the underlying Uniform Resource Identifier (URI) to exceed the default character limit.
+
+To resolve this problem and adjust for the longer URI, edit the `UrlSegmentMaxCount` and `UrlSegmentMaxLength` registry keys on your computer by following the steps below. These keys' default values are described in this topic, [Http.sys registry settings for Windows](/troubleshoot/iis/httpsys-registry-windows).
+
+> [!IMPORTANT]
+> Before you start, make sure that you save your work. This solution requires you
+> to restart your computer after you're done so that the changes can take effect.
+
+1. On your computer, open the **Run** window, and run the `regedit` command, which opens the registry editor.
+
+1. In the **User Account Control** box, select **Yes** to permit your changes to your computer.
+
+1. In the left pane, under **Computer**, expand the nodes along the path, **HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters**, and then select **Parameters**.
+
+1. In the right pane, find the `UrlSegmentMaxCount` and `UrlSegmentMaxLength` registry keys.
+
+1. Increase these key values enough so that the URIs can accommodate the names that you want to use. If these keys don't exist, add them to the **Parameters** folder by following these steps:
+
+ 1. From the **Parameters** shortcut menu, select **New** > **DWORD (32-bit) Value**.
+
+ 1. In the edit box that appears, enter `UrlSegmentMaxCount` as the new key name.
+
+ 1. Open the new key's shortcut menu, and select **Modify**.
+
+ 1. In the **Edit String** box that appears, enter the **Value data** key value that you want in hexadecimal or decimal format. For example, `400` in hexadecimal is equivalent to `1024` in decimal.
+
+ 1. To add the `UrlSegmentMaxLength` key value, repeat these steps.
+
+ After you increase or add these key values, the registry editor looks like this example:
+
+ ![Screenshot that shows the registry editor.](media/create-single-tenant-workflows-visual-studio-code/edit-registry-settings-uri-length.png)
+
+1. When you're ready, restart your computer so that the changes can take effect.
+
+<a name="debugging-fails-to-start"></a>
+
+### Debugging session fails to start
+
+When you try to start a debugging session, you get the error, **"Error exists after running preLaunchTask 'generateDebugSymbols'"**. To resolve this problem, edit the **tasks.json** file in your project to skip symbol generation.
+
+1. In your project, expand the **.vscode** folder, and open the **tasks.json** file.
+
+1. In the following task, delete the line, `"dependsOn": "generateDebugSymbols"`, along with the comma that ends the preceding line, for example:
+
+ Before:
+
+ ```json
+ {
+ "type": "func",
+ "command": "host start",
+ "problemMatcher": "$func-watch",
+ "isBackground": true,
+ "dependsOn": "generateDebugSymbols"
+ }
+ ```
+
+ After:
+
+ ```json
+ {
+ "type": "func",
+ "command": "host start",
+ "problemMatcher": "$func-watch",
+ "isBackground": true
+ }
+ ```
+
+## Next steps
+
+We'd like to hear from you about your experiences with the Azure Logic Apps (Preview) extension!
+
+* For bugs or problems, [create your issues in GitHub](https://github.com/Azure/logicapps/issues).
+* For questions, requests, comments, and other feedback, [use this feedback form](https://aka.ms/lafeedback).
logic-apps Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/devops-deployment-single-tenant-azure-logic-apps.md
+
+ Title: DevOps deployment for single-tenant Azure Logic Apps (preview)
+description: Learn about DevOps deployment for single-tenant Azure Logic Apps (preview).
+
+ms.suite: integration
++ Last updated : 05/10/2021+
+# As a developer, I want to learn about DevOps deployment support for single-tenant Azure Logic Apps.
++
+# DevOps deployment for single-tenant Azure Logic Apps (preview)
+
+With the trend toward distributed and cloud-native apps, organizations are dealing with more distributed components across more environments. To maintain control and consistency, you can automate your environments and deploy more components faster and more confidently by using DevOps tools and processes.
+
+This article provides an introduction and overview about the current continuous integration and continuous deployment (CI/CD) experience for single-tenant Azure Logic Apps.
+
+<a name="single-tenant-versus-multi-tenant"></a>
+
+## Single-tenant versus multi-tenant
+
+In the original multi-tenant Azure Logic Apps, resource deployment is based on Azure Resource Manager (ARM) templates, which combine and handle resource provisioning for both logic apps and infrastructure. In single-tenant Azure Logic Apps, deployment becomes easier because you can use separate provisioning between apps and infrastructure.
+
+When you create logic apps using the **Logic App (Preview)** resource type, your workflows are powered by the redesigned Azure Logic Apps (Preview) runtime. This runtime uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and is [hosted as an extension on the Azure Functions runtime](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-runtime-deep-dive/ba-p/1835564). This design provides portability, flexibility, and improved performance for your logic apps, plus other capabilities and benefits inherited from the Azure Functions platform and Azure App Service ecosystem.
+
+For example, you can package the redesigned runtime and workflows together as part of your logic app. You can use generic steps or tasks that build, assemble, and zip your logic app resources into ready-to-deploy artifacts. To deploy your apps, copy the artifacts to the host environment and then start your apps to run your workflows. Or, integrate your artifacts into deployment pipelines using the tools and processes that you already know and use. For example, if your scenario requires containers, you can containerize your logic apps and integrate them into your existing pipelines.
+
+To set up and deploy your infrastructure resources, such as virtual networks and connectivity, you can continue using ARM templates and separately provision those resources along with other processes and pipelines that you use for those purposes.
+
+By using standard build and deploy options, you can focus on app development separately from infrastructure deployment. As a result, you get a more generic project model where you can apply many similar or the same deployment options that you use for a generic app. You also benefit from a more consistent experience for building deployment pipelines around your app projects and for running the required tests and validations before publishing to production. No matter which technology stack you use, you can deploy logic apps using your own chosen tools.
+
+<a name="devops-deployment-features"></a>
+
+## DevOps deployment capabilities
+
+Single-tenant Azure Logic Apps inherits many capabilities and benefits from the Azure Functions platform and Azure App Service ecosystem. These updates include a whole new deployment model and more ways to use DevOps for your logic app workflows.
+
+<a name="local-development-testing"></a>
+
+### Local development and testing
+
+When you use Visual Studio Code with the Azure Logic Apps (Preview) extension, you can locally develop, build, and run **Logic App (Preview)** workflows in your development environment without having to deploy to Azure. You can also run your workflows anywhere that Azure Functions can run. For example, if your scenario requires containers, you can containerize your logic apps and deploy as Docker containers.
+
+This capability is a major improvement and provides a substantial benefit compared to the multi-tenant model, which requires you to develop against an existing and running resource in Azure.
+
+<a name="separate-concerns"></a>
+
+### Separate concerns
+
+The single-tenant model gives you the capability to separate the concerns between the app and the underlying infrastructure. For example, you can develop, build, zip, and deploy your app separately as an immutable artifact to different environments. Logic app workflows typically have "application code" that you update more often than the underlying infrastructure. By separating these layers, you can focus more on building out your logic app's workflow and spend less effort on deploying the required resources across multiple environments.
+
+![Conceptual diagram showing separate deployment pipelines for apps and infrastructure.](./media/devops-deployment-single-tenant/deployment-pipelines-logic-apps.png)
+
+<a name="resource-structure"></a>
+
+### Resource structure
+
+Single-tenant Azure Logic Apps introduces a new resource structure where your logic app can host multiple workflows. This structure differs from the multi-tenant version where you have a 1:1 mapping between logic app resource and workflow. With this 1-to-many relationship, workflows in the same logic app can share and reuse other resources. Plus, these workflows also benefit from improved performance due to shared tenancy and proximity to each other.
+
+This resource structure looks and works similarly to Azure Functions where a function app can host many functions. If you're working in a logic app project within Visual Studio Code, your project folder and file structure looks like the following example:
+
+```text
+MyLogicAppProjectName
+| .vscode
+| Artifacts
+ || Maps
+ ||| MapName1
+ ||| ...
+ || Schemas
+ ||| SchemaName1
+ ||| ...
+| WorkflowName1
+ || workflow.json
+ || ...
+| WorkflowName2
+ || workflow.json
+ || ...
+| connections.json
+| host.json
+| local.settings.json
+| Dockerfile
+```
+
+At your project's root level, you can find the following files and folders, along with other items, depending on whether your project is extension bundle-based (Node.js), which is the default, or NuGet package-based (.NET).
+
+| Name | Folder or file | Description |
+|||-|
+| .vscode | Folder | Contains Visual Studio Code-related settings files, such as extensions.json, launch.json, settings.json, and tasks.json files |
+| Artifacts | Folder | Contains integration account artifacts that you define and use in workflows that support business-to-business (B2B) scenarios. For example, the sample structure includes maps and schemas for XML transform and validation operations. |
+| *WorkflowName* | Folder | For each workflow, the *WorkflowName* folder includes a **workflow.json** file, which contains that workflow's underlying JSON definition. |
+| workflow-designtime | Folder | Contains development environment-related settings files. |
+| .funcignore | File | For more information, review [Work with Azure Functions Core Tools](../azure-functions/functions-run-local.md). |
+| connections.json | File | Contains the metadata, endpoints, and keys for any managed connections and Azure functions that your workflows use. <p><p>**Important**: To use different connections and functions for each environment, make sure that you parameterize this **connections.json** file and update the endpoints (see the sketch after this table). |
+| host.json | File | Contains runtime-specific configuration settings and values, for example, the default limits for the single-tenant Azure Logic Apps platform, logic apps, workflows, triggers, and actions. |
+| local.settings.json | File | Contains the local environment variables that provides the `appSettings` values to use for your logic app when running locally. |
+| Dockerfile | Folder | Contains one or more Dockerfiles to use for deploying the logic app as a container. |
+||||
+
+For example, to create custom built-in operations, you must have a NuGet-based project, not an extension bundle-based project. A NuGet-based project includes a .bin folder that contains packages and other library files that your app needs, while a bundle-based project doesn't include this folder and files. For more information about converting your project to use NuGet, review [Enable built-in connector authoring](create-stateful-stateless-workflows-visual-studio-code.md#enable-built-in-connector-authoring).
+
+For more information and best practices about organizing workflows in your logic app, performance, and scaling, review the similar [guidance for Azure Functions](../azure-functions/functions-best-practices.md), which you can generally apply to single-tenant Azure Logic Apps.
+
+<a name="deployment-containers"></a>
+
+### Container deployment
+
+Single-tenant Azure Logic Apps supports deployment to containers, which means that you can containerize your logic app workflows and run them anywhere that containers can run. After you containerize your app, deployment works mostly the same as any other container you deploy and manage.
+
+For examples that include Azure DevOps, review [CI/CD for Containers](https://azure.microsoft.com/solutions/architecture/cicd-for-containers/). For more information about containerizing logic apps and deploying to Docker, review [Deploy your logic app to a Docker container from Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md#deploy-to-docker).
+
+<a name="app-settings-parameters"></a>
+
+### App settings and parameters
+
+In multi-tenant Azure Logic Apps, maintaining environment variables for logic apps across various dev, test, and production environments poses a challenge. Everything in an ARM template is defined at deployment. If you need to change just a single variable, you have to redeploy everything.
+
+In single-tenant Azure Logic Apps, you can call and reference your environment variables at runtime by using app settings and parameters, so you don't have to redeploy as often.
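+
+For example, the following minimal sketch shows how a **local.settings.json** file might define an environment-specific value that your workflows and connections can reference at runtime through the `@appsetting()` expression. The setting name and values here are illustrative, not taken from this article:
+
+```json
+{
+  "IsEncrypted": false,
+  "Values": {
+    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+    "servicebus_connectionString": "{service-bus-connection-string}"
+  }
+}
+```
+
+When your logic app runs in Azure, the same value comes from the app's application settings instead, so you can change the value per environment without redeploying your workflows. The [service provider connection example](#service-provider-connections) later in this article shows a connections.json entry that references a setting this way.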
+
+<a name="managed-connectors-built-in-operations"></a>
+
+## Managed connectors and built-in operations
+
+The Azure Logic Apps ecosystem provides [hundreds of Microsoft-managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) and built-in operations as part of a constantly growing collection that you can use in the single-tenant Azure Logic Apps service. The way that Microsoft maintains these connectors and built-in operations stays mostly the same in single-tenant Azure Logic Apps.
+
+The most significant improvement is that the single-tenant service makes more popular managed connectors also available as built-in operations. For example, you can use built-in operations for Azure Service Bus, Azure Event Hubs, SQL, and others. Meanwhile, the managed connector versions are still available and continue to work.
+
+The connections that you create using built-in operations are called built-in connections, or *service provider connections*. Built-in operations and their connections run locally in the same process that runs your workflows. Both are hosted on the redesigned Logic Apps runtime. In contrast, managed connections, or API connections, are created and run separately as Azure resources, which you deploy using ARM templates. As a result, built-in operations and their connections provide better performance due to their proximity to your workflows. This design also works well with deployment pipelines because the service provider connections are packaged into the same build artifact.
+
+In Visual Studio Code, when you use the designer to develop or make changes to your workflows, the Logic Apps engine automatically generates any necessary connection metadata in your project's **connections.json** file. The following sections describe the three kinds of connections that you can create in your workflows. Each connection type has a different JSON structure, which is important to understand because endpoints change when you move between environments.
+
+<a name="service-provider-connections"></a>
+
+### Service provider connections
+
+When you use a built-in operation for a service such as Azure Service Bus or Azure Event Hubs in the single-tenant Azure Logic Apps service, you create a service provider connection that runs in the same process as your workflow. This connection infrastructure is hosted and managed as part of your logic app, and your app settings store the connection strings for any service provider-based built-in operation that your workflows use.
+
+In your logic app project, each workflow has a workflow.json file that contains the workflow's underlying JSON definition. This workflow definition then references the necessary connection strings in your project's connections.json file.
+
+The following example shows how the service provider connection for a built-in Service Bus operation appears in your project's connections.json file:
+
+```json
+"serviceProviderConnections": {
+ "{service-bus-connection-name}": {
+ "parameterValues": {
+ "connectionString": "@appsetting('servicebus_connectionString')"
+ },
+ "serviceProvider": {
+ "id": "/serviceProviders/serviceBus"
+ },
+ "displayName": "{service-bus-connection-name}"
+ },
+ ...
+}
+```
+
+<a name="managed-connections"></a>
+
+### Managed connections
+
+When you use a managed connector for the first time in your workflow, you're prompted to create a managed API connection for the target service or system and authenticate your identity. These connectors are managed by the shared connectors ecosystem in Azure. The API connections exist and run as separate resources in Azure.
+
+In Visual Studio Code, while you continue to create and develop your workflow using the designer, the Logic Apps engine automatically creates the necessary resources in Azure for the managed connectors in your workflow. The engine automatically adds these connection resources to the Azure resource group that you designated to contain your logic app.
+
+The following example shows how an API connection for the managed Service Bus connector appears in your project's connections.json file:
+
+```json
+"managedApiConnections": {
+ "{service-bus-connection-name}": {
+ "api": {
+ "id": "/subscriptions/{subscription-ID}/providers/Microsoft.Web/locations/{region}/managedApis/servicebus"
+ },
+ "connection": {
+ "id": "/subscriptions/{subscription-ID}/resourcegroups/{resource-group-name}/providers/Microsoft.Web/connections/servicebus"
+ },
+ "connectionRuntimeUrl": "{connection-runtime-URL}",
+ "authentication": {
+ "type": "Raw",
+ "scheme": "Key",
+ "parameter": "@appsetting('servicebus_1-connectionKey')"
+ }
+ },
+ ...
+}
+```
+
+<a name="azure-functions-connections"></a>
+
+### Azure Functions connections
+
+To call functions created and hosted in Azure Functions, you use the built-in Azure Functions operation. The connection metadata for Azure Functions calls differs from the metadata for other built-in connections. This metadata is also stored in your logic app project's connections.json file, but looks different:
+
+```json
+"functionConnections": {
+ "{function-operation-name}": {
+ "function": {
+ "id": "/subscriptions/{subscription-ID}/resourceGroups/{resource-group-name}/providers/Microsoft.Web/sites/{function-app-name}/functions/{function-name}"
+ },
+ "triggerUrl": "{function-url}",
+ "authentication": {
+ "type": "QueryString",
+ "name": "Code",
+ "value": "@appsetting('azureFunctionOperation_functionAppKey')"
+ },
+ "displayName": "{functions-connection-display-name}"
+ },
+ ...
+}
+```
+
+## Authentication
+
+In the single-tenant Azure Logic Apps service, the hosting model for logic app workflows is a single tenant where your workloads benefit from more isolation than in the multi-tenant version. Plus, the service runtime is portable, which means you can run your workflows anywhere that Azure Functions can run. Still, this design requires a way for logic apps to authenticate their identity so they can access the managed connector ecosystem in Azure. Your apps also need the correct permissions to run operations when using managed connections.
+
+By default, each single-tenant based logic app has an automatically enabled system-assigned managed identity. This identity differs from the authentication credentials or connection string used for creating a connection. At runtime, your logic app uses this identity to authenticate its connections through Azure access policies. If you disable this identity, connections won't work at runtime.
+
+The following sections provide more information about the authentication types that you can use to authenticate managed connections, based on where your logic app runs. For each managed connection, your logic app project's connections.json file has an `authentication` object that specifies the authentication type that your logic app can use to authenticate that managed connection.
+
+### Managed identity
+
+For a logic app that's hosted and run in Azure, a [managed identity](create-managed-service-identity.md) is the default and recommended authentication type for managed connections, which are also hosted and run in Azure. In your logic app project's connections.json file, the managed connection has an `authentication` object that specifies `ManagedServiceIdentity` as the authentication type:
+
+```json
+"authentication": {
+ "type": "ManagedServiceIdentity"
+}
+```
+
+### Raw
+
+For logic apps that run in your local development environment using Visual Studio Code, raw authentication keys are used for authenticating managed connections that are hosted and run in Azure. These keys are designed for development use only, not production, and expire after 7 days. In your logic app project's connections.json file, the managed connection has an `authentication` object that specifies the following authentication information:
+
+```json
+"authentication": {
+ "type": "Raw",
+ "scheme": "Key",
+ "parameter": "@appsetting('connectionKey')"
+}
+```
+
+## Next steps
+
+* [Set up DevOps deployment for single-tenant Azure Logic Apps (Preview)](set-up-devops-deployment-single-tenant-azure-logic-apps.md)
+
+We'd like to hear about your experiences with the preview logic app resource type and preview single-tenant model!
+
+- For bugs or problems, [create your issues in GitHub](https://github.com/Azure/logicapps/issues).
+- For questions, requests, comments, and other feedback, [use this feedback form](https://aka.ms/logicappsdevops).
logic-apps Estimate Storage Costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/estimate-storage-costs.md
+
+ Title: Estimate storage costs for single-tenant Azure Logic Apps
+description: Estimate storage costs for your workflows using the Azure Logic Apps Storage Calculator and Cost Analysis API.
+
+ms.suite: integration
++ Last updated : 05/13/2021++
+# Estimate storage costs for workflows in single-tenant Azure Logic Apps
+
+Azure Logic Apps uses [Azure Storage](/azure/storage/) for any storage operations. In traditional *multi-tenant* Azure Logic Apps, any storage usage and costs are attached to the logic app. Now, in *single-tenant* Azure Logic Apps (preview), you can use your own storage account. These storage costs are listed separately in your Azure billing invoice. This capability gives you more flexibility and control over your logic app data.
+
+> [!NOTE]
+> This article applies to workflows in the single-tenant Azure Logic Apps environment. These workflows exist in the same logic app and single tenant, and they share the same storage. For more information, see [Single-tenant (preview) versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
+
+Storage costs change based on your workflows' content. Different triggers, actions, and payloads result in different storage operations and needs. This article describes how to estimate your storage costs when you're using your own Azure Storage account with **single-tenant** logic apps. First, you can [estimate the number of storage operations you'll perform](#estimate-storage-needs) using the Logic Apps storage calculator. Then, you can [estimate your possible storage costs](#estimate-storage-costs) using these numbers in the Azure pricing calculator.
+
+## Prerequisites
+
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* A single-tenant Logic Apps workflow. You can create a workflow [using the Azure portal](create-stateful-stateless-workflows-azure-portal.md) or [using Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md). If you don't have a workflow yet, you can use the sample small, medium, and large workflows in the storage calculator.
+* An Azure storage account that you want to use with your workflow. If you don't have a storage account, [create a storage account](../storage/common/storage-account-create.md).
+
+## Estimate storage needs
+
+Before you can estimate your storage needs, get your workflow's JSON code.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Go to the **Logic apps** service, and select your workflow.
+1. In your logic app's menu, under **Development tools**, select **Logic app code view**.
+1. Copy the workflow's JSON code.
+
+Next, use the Logic Apps storage calculator:
+
+1. Go to the [Logic Apps storage calculator](https://logicapps.azure.com/calculator).
+ :::image type="content" source="./media/estimate-storage-costs/storage-calculator.png" alt-text="Screenshot of Logic Apps storage calculator, showing input interface." lightbox="./media/estimate-storage-costs/storage-calculator.png":::
+1. Enter, upload, or select a single-tenant logic app workflow's JSON code.
+ * If you copied code in the previous section, paste it into the **Paste or upload workflow.json** box.
+ * If you have your **workflow.json** file saved locally, choose **Browse Files** under **Select workflow**. Choose your file, then select **Open**.
+ * If you don't have a workflow yet, choose one of the sample workflows under **Select workflow**.
+1. Review the options under **Advanced Options**. These settings depend on your workflow type and may include:
+ * An option to enter the number of times your loops run.
+ * An option to select all actions with payloads over 32 KB.
+1. For **Monthly runs**, enter the number of times that you run your workflow each month.
+1. Select **Calculate** and wait for the calculation to run.
+1. Under **Storage Operation Breakdown and Calculation Steps**, review the **Operation Counts** estimates.
+
+ You can see estimated operation counts by run and by month in the two tables. The following operations are shown:
+
+ * **Blob (read)**, for Azure Blob Storage read operations.
+ * **Blob (write)**, for Azure Blob Storage write operations.
+ * **Queue**, for Azure Queues Queue Class 2 operations.
+ * **Tables**, for Azure Table Storage operations.
+
+ Each operation has a minimum, maximum, and "best guess" count. Choose the most relevant number to use for [estimating your storage operation costs](#estimate-storage-costs) based on your individual scenario. Typically, it's recommended to use the "best guess" count for accuracy. However, you might also use the maximum count to make sure your cost estimate has a buffer.
+
+ :::image type="content" source="./media/estimate-storage-costs/storage-calculator-results.png" alt-text="Screenshot of Logic Apps storage calculator, showing output with estimated operations." lightbox="./media/estimate-storage-costs/storage-calculator-results.png":::
+
+## Estimate storage costs
+
+After you've [calculated your Logic Apps storage needs](#estimate-storage-needs), you can estimate your possible monthly storage costs. You can estimate prices for the following storage operation types:
+
+* [Blob storage read and write operations](#estimate-blob-storage-operations-costs)
+* [Queue storage operations](#estimate-queue-operations-costs)
+* [Table storage operations](#estimate-table-operations-costs)
+
+### Estimate blob storage operations costs
+
+> [!NOTE]
+> This feature is currently unavailable. For now, you can still use the calculator to estimate [queue storage](#estimate-queue-operations-costs) and [table storage](#estimate-table-operations-costs).
+
+To estimate monthly costs for your logic app's blob storage operations:
+
+1. Go to the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+1. On the **Products** tab, select **Storage** &gt; **Storage Accounts**. Or, in the **Search Bar** search box, enter **Storage Accounts** and select the tile.
+ :::image type="content" source="./media/estimate-storage-costs/pricing-calculator-storage-tile.png" alt-text="Screenshot of Azure pricing calculator, showing tile to add Storage Accounts view." lightbox="./media/estimate-storage-costs/pricing-calculator-storage-tile.png":::
+1. On the **Storage Accounts added** notification, select **View** to see the **Storage Accounts** section of the calculator. Or, go to the **Storage Accounts** section manually.
+1. For **Region**, select your logic app's region.
+1. For **Type**, select **Block Blob Storage**.
+1. For **Performance Tier**, select your performance tier.
+1. For **Redundancy**, select your redundancy level.
+1. Adjust any other settings as needed.
+1. Under **Write Operations**, enter your **Blob (write)** operations number from the Logic Apps storage calculator *as is*.
+1. Under **Read Operations**, enter your **Blob (read)** operations number from the Logic Apps storage calculator *as is*.
+1. Review the estimated blob storage operations costs.
+
+### Estimate queue operations costs
+
+To estimate monthly costs for your logic app's queue operations:
+
+1. Go to the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+1. On the **Products** tab, select **Storage** &gt; **Storage Accounts**. Or, in the **Search Bar** search box, enter **Storage Accounts** and select the tile.
+ :::image type="content" source="./media/estimate-storage-costs/pricing-calculator-storage-tile.png" alt-text="Screenshot of Azure pricing calculator, showing tile to add Storage Accounts view." lightbox="./media/estimate-storage-costs/pricing-calculator-storage-tile.png":::
+1. On the **Storage Accounts added** notification, select **View** to see the **Storage Accounts** section of the calculator. Or, go to the **Storage Accounts** section manually.
+1. For **Region**, select your logic app's region.
+1. For **Type**, select **Queue Storage**.
+1. For **Storage Account Type**, select your storage account type.
+1. For **Redundancy**, select your redundancy level.
+1. Under **Queue Class 2 operations**, enter your **Queue** operations number from the Logic Apps storage calculator *divided by 10,000*. This step is necessary because the pricing calculator works in transactional units for queue operations. For example, if the storage calculator estimates 250,000 queue operations each month, enter **25** (250,000 divided by 10,000).
+1. Review the estimated queue operations costs.
+
+### Estimate table operations costs
+
+To estimate monthly costs for your logic app's table storage operations:
+
+1. Go to the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+1. On the **Products** tab, select **Storage** &gt; **Storage Accounts**. Or, in the **Search Bar** search box, enter **Storage Accounts** and select the tile.
+ :::image type="content" source="./media/estimate-storage-costs/pricing-calculator-storage-tile.png" alt-text="Screenshot of Azure pricing calculator, showing tile to add Storage Accounts view." lightbox="./media/estimate-storage-costs/pricing-calculator-storage-tile.png":::
+1. On the **Storage Accounts added** notification, select **View** to see the **Storage Accounts** section of the calculator. Or, go to the **Storage Accounts** section manually.
+1. For **Region**, select your logic app's region.
+1. For **Type**, select **Table Storage**.
+1. For **Tier**, select your performance tier.
+1. For **Redundancy**, select your redundancy level.
+1. Under **Storage transactions**, enter your **Table** operations number from the Logic Apps storage calculator *divided by 10,000*. This step is necessary because the pricing calculator works in transactional units for table operations.
+1. Review the estimated table storage operations costs.
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Plan and manage costs for Logic Apps](plan-manage-costs.md)
logic-apps Logic Apps Add Run Inline Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-add-run-inline-code.md
In this article, the example logic app triggers when a new email arrives in a wo
For example, [Free-tier](../logic-apps/logic-apps-pricing.md#integration-accounts) integration accounts are meant only for exploratory scenarios and workloads, not production scenarios, are limited in usage and throughput, and aren't supported by a service-level agreement (SLA). Other tiers incur costs, but include SLA support, offer more throughput, and have higher limits. Learn more about integration account [tiers](../logic-apps/logic-apps-pricing.md#integration-accounts), [pricing](https://azure.microsoft.com/pricing/details/logic-apps/), and [limits](../logic-apps/logic-apps-limits-and-config.md#integration-account-limits).
- * If you don't want to use an integration account, you can try using [Azure Logic Apps Preview](logic-apps-overview-preview.md), and create a logic app from the **Logic App (Preview)** resource type.
+ * If you don't want to use an integration account, you can try using [Azure Logic Apps Preview](single-tenant-overview-compare.md), and create a logic app from the **Logic App (Preview)** resource type.
In Azure Logic Apps Preview, **Inline Code** is now named **Inline Code Operations** along with these other differences:
In this article, the example logic app triggers when a new email arrives in a wo
You can start from either option here:
- * Create the logic app from the **Logic App (Preview)** resource type [by using the Azure portal](create-stateful-stateless-workflows-azure-portal.md).
+ * Create the logic app from the **Logic App (Preview)** resource type [by using the Azure portal](create-single-tenant-workflows-azure-portal.md).
- * Create a project for the logic app [by using Visual Studio Code and the Azure Logic Apps (Preview) extension](create-stateful-stateless-workflows-visual-studio-code.md)
+ * Create a project for the logic app [by using Visual Studio Code and the Azure Logic Apps (Preview) extension](create-single-tenant-workflows-visual-studio-code.md)
## Add inline code
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-limits-and-config.md
In the designer, the same setting controls the maximum number of days that a wor
* For the multi-tenant service, the 90-day default limit is the same as the maximum limit. You can only decrease this value.
-* For the single-tenant service (preview), you can decrease or increase the 90-day default limit. For more information, see [Create workflows for single-tenant Azure Logic Apps using Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md).
+* For the single-tenant service (preview), you can decrease or increase the 90-day default limit. For more information, see [Create workflows for single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md).
* For an integration service environment, you can decrease or increase the 90-day default limit.
The following table lists the values for a single workflow definition:
| Name | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes | ||--|-||-|
-| Maximum number of code characters | 1,024 characters | 100,000 characters | 1,024 characters | To use the higher limit, create a **Logic App (Preview)** resource, which runs in single-tenant (preview) Logic Apps, either [by using the Azure portal](create-stateful-stateless-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Preview)** extension](create-stateful-stateless-workflows-visual-studio-code.md). |
-| Maximum duration for running code | 5 sec | 15 sec | 1,024 characters | To use the higher limit, create a **Logic App (Preview)** resource, which runs in single-tenant (preview) Logic Apps, either [by using the Azure portal](create-stateful-stateless-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Preview)** extension](create-stateful-stateless-workflows-visual-studio-code.md). |
+| Maximum number of code characters | 1,024 characters | 100,000 characters | 1,024 characters | To use the higher limit, create a **Logic App (Preview)** resource, which runs in single-tenant (preview) Logic Apps, either [by using the Azure portal](create-single-tenant-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Preview)** extension](create-single-tenant-workflows-visual-studio-code.md). |
+| Maximum duration for running code | 5 sec | 15 sec | 1,024 characters | To use the higher limit, create a **Logic App (Preview)** resource, which runs in single-tenant (preview) Logic Apps, either [by using the Azure portal](create-single-tenant-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Preview)** extension](create-single-tenant-workflows-visual-studio-code.md). |
|||||| <a name="custom-connector-limits"></a>
The following table lists the values for custom connectors:
For more information, review the following documentation: * [Custom managed connectors overview](/connectors/custom-connectors)
-* [Enable built-in connector authoring - Visual Studio Code with Azure Logic Apps (Preview)](create-stateful-stateless-workflows-visual-studio-code.md#enable-built-in-connector-authoring)
+* [Enable built-in connector authoring - Visual Studio Code with Azure Logic Apps (Preview)](create-single-tenant-workflows-visual-studio-code.md#enable-built-in-connector-authoring)
<a name="managed-identity"></a>
logic-apps Quickstart Create Logic Apps Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/quickstart-create-logic-apps-visual-studio-code.md
Deleting a logic app affects workflow instances in the following ways:
## Next steps > [!div class="nextstepaction"]
-> [Create stateful and stateless logic apps in Visual Studio Code (Preview)](../logic-apps/create-stateful-stateless-workflows-visual-studio-code.md)
+> [Create stateful and stateless logic apps in Visual Studio Code (Preview)](../logic-apps/create-single-tenant-workflows-visual-studio-code.md)
logic-apps Secure Single Tenant Workflow Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md
+
+ Title: Secure traffic between single-tenant workflows and virtual networks
+description: Secure traffic between virtual networks, storage accounts, and single-tenant workflows in Azure Logic Apps.
+
+ms.suite: integration
++ Last updated : 05/13/2021+
+# As a developer, I want to connect to my single-tenant workflows from virtual networks using private endpoints.
++
+# Secure traffic between virtual networks and single-tenant workflows in Azure Logic Apps using private endpoints (preview)
+
+To securely and privately communicate between your logic app workflow and a virtual network, you can set up *private endpoints* for inbound traffic and use virtual network integration for outbound traffic.
+
+A private endpoint is a network interface that privately and securely connects to a service powered by Azure Private Link. This service can be an Azure service such as Azure Logic Apps, Azure Storage, Azure Cosmos DB, SQL, or your own Private Link Service. The private endpoint uses a private IP address from your virtual network, which effectively brings the service into your virtual network.
+
+This article shows how to set up access through private endpoints for inbound traffic, outbound traffic, and connection to storage accounts.
+
+For more information, review the following documentation:
+
+- [What is Azure Private Endpoint?](../private-link/private-endpoint-overview.md)
+
+- [What is Azure Private Link?](../private-link/private-link-overview.md)
+
+- [What is single-tenant logic app workflow in Azure Logic Apps?](single-tenant-overview-compare.md)
+
+## Prerequisites
+
+You need to have a new or existing Azure virtual network that includes a subnet without any delegations. This subnet is used to deploy and allocate private IP addresses from the virtual network.
+
+For more information, review the following documentation:
+
+* [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md)
+
+* [What is subnet delegation?](../virtual-network/subnet-delegation-overview.md)
+
+* [Add or remove a subnet delegation](../virtual-network/manage-subnet-delegation.md)
+
+<a name="set-up-inbound"></a>
+
+## Set up inbound traffic through private endpoints
+
+To secure inbound traffic to your workflow, complete these high-level steps:
+
+1. Start your workflow with a built-in trigger that can receive and handle inbound requests, such as the Request trigger or the HTTP + Webhook trigger. This trigger sets up your workflow with a callable endpoint.
+
+1. Add a private endpoint to your virtual network.
+
+1. Make test calls to check access to the endpoint. To call your logic app workflow after you set up this endpoint, you must be connected to the virtual network.
+
+### Prerequisites for inbound traffic through private endpoints
+
+In addition to the [virtual network setup in the top-level prerequisites](#prerequisites), you need to have a new or existing single-tenant based logic app workflow that starts with a built-in trigger that can receive requests.
+
+For example, the Request trigger creates an endpoint on your workflow that can receive and handle inbound requests from other callers, including workflows. This endpoint provides a URL that you can use to call and trigger the workflow. For this example, the steps continue with the Request trigger.
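+
+As a rough sketch, a Request trigger in a workflow.json definition typically looks like the following fragment. The trigger name and the empty request schema here are illustrative, not taken from this article:
+
+```json
+"triggers": {
+  "manual": {
+    "type": "Request",
+    "kind": "Http",
+    "inputs": {
+      "schema": {}
+    }
+  }
+}
+```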
+
+For more information, review the following documentation:
+
+* [Create single-tenant logic app workflows in Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
+
+* [Receive and respond to inbound HTTP requests using Azure Logic Apps](../connectors/connectors-native-reqres.md)
+
+### Create the workflow
+
+1. If you haven't already, create a single-tenant based logic app, and a blank workflow.
+
+1. After the designer opens, add the Request trigger as the first step in your workflow.
+
+ > [!NOTE]
+ > You can call Request triggers and webhook triggers only from inside your virtual network.
+ > Managed API webhook triggers and actions won't work because they require a public endpoint to receive calls.
+
+1. Based on your scenario requirements, add other actions that you want to run in your workflow.
+
+1. When you're done, save your workflow.
+
+For more information, review [Create single-tenant logic app workflows in Azure Logic Apps](create-single-tenant-workflows-azure-portal.md).
+
+#### Copy the endpoint URL
+
+1. On the workflow menu, select **Overview**.
+
+1. On the **Overview** page, copy and save the **Workflow URL** for later use.
+
+ To trigger the workflow, you call or send a request to this URL.
+
+1. Make sure that the URL works by calling or sending a request to the URL. You can use any tool you want to send the request, for example, Postman.
+
+### Set up private endpoint connection
+
+1. On your logic app menu, under **Settings**, select **Networking**.
+
+1. On the **Networking** page, under **Private Endpoint connections**, select **Configure your private endpoint connections**.
+
+1. On the **Private Endpoint connections** page, select **Add**.
+
+1. On the **Add Private Endpoint** pane that opens, provide the requested information about the endpoint.
+
+ For more information, review [Private Endpoint properties](../private-link/private-endpoint-overview.md#private-endpoint-properties).
+
+1. After Azure successfully provisions the private endpoint, try again to call the workflow URL.
+
+ This time, you get an expected `403 Forbidden` error, which means that the private endpoint is set up and works correctly.
+
+1. To make sure the connection is working correctly, create a virtual machine in the same virtual network that has the private endpoint, and try calling the logic app workflow.
+
+### Considerations for inbound traffic through private endpoints
+
+* When accessed from outside your virtual network, the monitoring view can't access the inputs and outputs from triggers and actions.
+
+* Deployment from Visual Studio Code or Azure CLI works only from inside the virtual network. You can use the Deployment Center to link your logic app to a GitHub repo. You can then use Azure infrastructure to build and deploy your code.
+
+ For GitHub integration to work, remove the `WEBSITE_RUN_FROM_PACKAGE` setting from your logic app or set the value to `0`.
+
+* Enabling Private Link doesn't affect outbound traffic, which still flows through the App Service infrastructure.
+
+<a name="set-up-outbound"></a>
+
+## Set up outbound traffic through private endpoints
+
+To secure outbound traffic from your logic app, you can integrate your logic app with a virtual network. By default, outbound traffic from your logic app is affected by network security groups (NSGs) and user-defined routes (UDRs) only when going to a private address, such as `10.0.0.0/8`, `172.16.0.0/12`, and `192.168.0.0/16`. However, by routing all outbound traffic through your own virtual network, you can subject all outbound traffic to NSGs, routes, and firewalls. To make sure that all outbound traffic is affected by the NSGs and UDRs on your integration subnet, set the logic app setting `WEBSITE_VNET_ROUTE_ALL` to `1`.
+
+> [!IMPORTANT]
+> For the Logic Apps runtime to work, you need to have an uninterrupted connection to
+> the backend storage. For Azure-hosted managed connectors to work, you need to have
+> an uninterrupted connection to the managed API service.
+
+To make sure that your logic app uses private Domain Name System (DNS) zones in your virtual network, set the `WEBSITE_DNS_SERVER` app setting to `168.63.129.16`.
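+
+As a reference, the following sketch shows how these two app settings might look when grouped together as key-value pairs. The grouping is only illustrative; you add each setting individually to your logic app's application settings in the Azure portal or in your deployment template:
+
+```json
+{
+  "WEBSITE_VNET_ROUTE_ALL": "1",
+  "WEBSITE_DNS_SERVER": "168.63.129.16"
+}
+```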
+
+### Considerations for outbound traffic through private endpoints
+
+Setting up virtual network integration doesn't affect inbound traffic, which continues to use the App Service shared endpoint. To secure inbound traffic, review [Set up inbound traffic through private endpoints](#set-up-inbound).
+
+For more information, review the following documentation:
+
+- [Integrate your app with an Azure virtual network](../app-service/web-sites-integrate-with-vnet.md)
+- [Network security groups](../virtual-network/network-security-groups-overview.md)
+- [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md)
+
+## Connect to storage account with private endpoints
+
+You can restrict storage account access so that only resources inside a virtual network can connect. Azure Storage supports adding private endpoints to your storage account. Logic app workflows can then use these endpoints to communicate with the storage account.
+
+In your logic app settings, set `AzureWebJobsStorage` to the connection string for the storage account that has the private endpoints by choosing one of these options:
+
+* **Azure portal**: On your logic app menu, select **Configuration**. Update the `AzureWebJobsStorage` setting with the connection string for the storage account.
+
+* **Visual Studio Code**: In your project root-level **local.settings.json** file, update the `AzureWebJobsStorage` setting with the connection string for the storage account.
+
+ For more information, review the [Use private endpoints for Azure Storage documentation](../storage/common/storage-private-endpoints.md).
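+
+For example, the following minimal sketch shows the relevant part of a **local.settings.json** file for the Visual Studio Code option. The placeholder values are illustrative; for the Azure portal option, you set the same connection string value in the `AzureWebJobsStorage` application setting instead:
+
+```json
+{
+  "IsEncrypted": false,
+  "Values": {
+    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName={storage-account-name};AccountKey={storage-account-key};EndpointSuffix=core.windows.net"
+  }
+}
+```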
+
+### Considerations for private endpoints on storage accounts
+
+* Create different private endpoints for each of the Table, Queue, and Blob storage services.
+
+* Route all outbound traffic through your virtual network by using this setting:
+
+ `"WEBSITE_VNET_ROUTE_ALL": "1"`
+
+* To make your logic app use private Domain Name System (DNS) zones in your virtual network, set the logic app's `WEBSITE_DNS_SERVER` setting to `168.63.129.16`.
+
+* You need to have a separate publicly accessible storage account when you deploy your logic app. Make sure that you set the `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` setting to the connection string for that storage account.
+
+* If your logic app uses private endpoints, deploy using [GitHub Integrations](https://docs.github.com/en/github/customizing-your-github-workflow/about-integrations).
+
+ If your logic app doesn't use private endpoints, you can deploy from Visual Studio Code and set the `WEBSITE_RUN_FROM_PACKAGE` setting to `1`.
+
+## Next steps
+
+* [Logic Apps Anywhere: Networking possibilities with Logic Apps (single-tenant)](https://techcommunity.microsoft.com/t5/integrations-on-azure/logic-apps-anywhere-networking-possibilities-with-logic-app/ba-p/2105047)
logic-apps Set Up Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/set-up-devops-deployment-single-tenant-azure-logic-apps.md
+
+ Title: Set up DevOps for single-tenant Azure Logic Apps (preview)
+description: How to set up DevOps deployment for workflows in single-tenant Azure Logic Apps (preview).
+
+ms.suite: integration
++ Last updated : 05/10/2021+
+# As a developer, I want to automate deployment for workflows hosted in single-tenant Azure Logic Apps by using DevOps tools and processes.
++
+# Set up DevOps deployment for single-tenant Azure Logic Apps (preview)
+
+This article shows how to deploy a single-tenant based logic app project from Visual Studio Code to your infrastructure by using DevOps tools and processes. Based on whether you prefer GitHub or Azure DevOps for deployment, choose the path and tools that work best for your scenario. You can use the included samples that contain example logic app projects plus examples for Azure deployment using either GitHub or Azure DevOps. For more information about DevOps for single-tenant, review [DevOps deployment for single-tenant Azure Logic Apps (preview)](devops-deployment-single-tenant-azure-logic-apps.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- A single-tenant based logic app project created with [Visual Studio Code and the Azure Logic Apps (Preview) extension](create-stateful-stateless-workflows-visual-studio-code.md#prerequisites).
+
+ If you don't already have a logic app project or infrastructure set up, you can use the included sample projects to deploy an example app and infrastructure, based on the source and deployment options you prefer to use. For more information about these sample projects and the resources included to run the example logic app, review [Deploy your infrastructure](#deploy-infrastructure).
+
+- If you want to deploy to Azure, you need an existing **Logic App (Preview)** resource created in Azure. To quickly create an empty logic app resource, review [Create single-tenant based logic app workflows - Portal](create-stateful-stateless-workflows-azure-portal.md).
+
+<a name="deploy-infrastructure"></a>
+
+## Deploy infrastructure resources
+
+If you don't already have a logic app project or infrastructure set up, you can use the following sample projects to deploy an example app and infrastructure, based on the source and deployment options you prefer to use:
+
+- [GitHub sample for single-tenant Azure Logic Apps](https://github.com/Azure/logicapps/tree/master/github-sample)
+
+ This sample includes an example logic app project for single-tenant Azure Logic Apps plus examples for Azure deployment and GitHub Actions.
+
+- [Azure DevOps sample for single-tenant Azure Logic Apps](https://github.com/Azure/logicapps/tree/master/azure-devops-sample)
+
+ This sample includes an example logic app project for single-tenant Azure Logic Apps plus examples for Azure deployment and Azure Pipelines.
+
+Both samples include the following resources that a logic app uses to run.
+
+| Resource name | Required | Description |
+||-|-|
+| Logic App (Preview) | Yes | This Azure resource contains the workflows that run in single-tenant Azure Logic Apps. |
+| Premium or App Service hosting plan | Yes | This Azure resource specifies the hosting resources to use for running your logic app, such as compute, processing, storage, networking, and so on. |
+| Azure storage account | Yes, for stateless workflows | This Azure resource stores the metadata, state, inputs, outputs, run history, and other information about your workflows. |
+| Application Insights | Optional | This Azure resource provides monitoring capabilities for your workflows. |
+| API connections | Optional, if none exist | These Azure resources define any managed API connections that your workflows use to run managed connector operations, such as Office 365, SharePoint, and so on. <p><p>**Important**: In your logic app project, the **connections.json** file contains metadata, endpoints, and keys for any managed API connections and Azure functions that your workflows use. To use different connections and functions in each environment, make sure that you parameterize the **connections.json** file and update the endpoints. <p><p>For more information, review [API connection resources and access policies](#api-connection-resources). |
+| Azure Resource Manager (ARM) template | Optional | This Azure resource defines a baseline infrastructure deployment that you can reuse or [export](../azure-resource-manager/templates/template-tutorial-export-template.md). The template also includes the required access policies, for example, to use managed API connections. <p><p>**Important**: Exporting the ARM template won't include all the related parameters for any API connection resources that your workflows use. For more information, review [Find API connection parameters](#find-api-connection-parameters). |
+||||
+
+<a name="api-connection-resources"></a>
+
+## API connection resources and access policies
+
+In single-tenant Azure Logic Apps, every managed or API connection resource in your workflows requires an associated access policy. This policy uses your logic app's identity to provide the correct permissions for accessing the managed connector infrastructure. The included sample projects include an ARM template that includes all the necessary infrastructure resources, including these access policies.
+
+The following diagram shows the dependencies between your logic app project and infrastructure resources:
+
+![Conceptual diagram showing infrastructure dependencies for a logic app project in the single-tenant Azure Logic Apps model.](./media/set-up-devops-deployment-single-tenant-azure-logic-apps/infrastructure-dependencies.png)
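+
+As a rough sketch, an access policy in such an ARM template is typically expressed as a child resource of the API connection and points at the logic app's system-assigned identity. The following fragment is illustrative only; the parameter names are hypothetical, and the exact definition in the sample projects might differ:
+
+```json
+{
+  "type": "Microsoft.Web/connections/accessPolicies",
+  "apiVersion": "2016-06-01",
+  "name": "[concat(parameters('connectionName'), '/', parameters('logicAppName'))]",
+  "location": "[parameters('location')]",
+  "dependsOn": [
+    "[resourceId('Microsoft.Web/connections', parameters('connectionName'))]"
+  ],
+  "properties": {
+    "principal": {
+      "type": "ActiveDirectory",
+      "identity": {
+        "tenantId": "[subscription().tenantId]",
+        "objectId": "[parameters('logicAppIdentityObjectId')]"
+      }
+    }
+  }
+}
+```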
+
+<a name="find-api-connection-parameters"></a>
+
+### Find API connection parameters
+
+If your workflows use managed API connections, using the export template capability won't include all related parameters. In an ARM template, every [API connection resource definition](logic-apps-azure-resource-manager-templates-overview.md#connection-resource-definitions) has the following general format:
+
+```json
+{
+ "type": "Microsoft.Web/connections",
+ "apiVersion": "2016ΓÇô06ΓÇô01",
+ "location": "[parameters('location')]",
+ "name": "[parameters('connectionName')]",
+ "properties": {}
+}
+```
+
+To find the values that you need to use in the `properties` object for completing the connection resource definition, you can use the following API for a specific connector:
+
+`PUT https://management.azure.com/subscriptions/{subscription-ID}/providers/Microsoft.Web/locations/{location}/managedApis/{connector-name}?api-version=2018-07-01-preview`
+
+In the response, find the `connectionParameters` object, which contains all the information necessary for you to complete the resource definition for that specific connector. The following example shows a sample resource definition for a SQL managed connection:
+
+```json
+{
+ "type": "Microsoft.Web/connections",
+ "apiVersion": "2016ΓÇô06ΓÇô01",
+ "location": "[parameters('location')]",
+ "name": "[parameters('connectionName')]",
+ "properties": {
+ "displayName": "sqltestconnector",
+ "api": {
+ "id": "/subscriptions/{subscription-ID}/providers/Microsoft.Web/locations/{location}/managedApis/sql"
+ },
+ "parameterValues": {
+ "authType": "windows",
+ "database": "TestDB",
+ "password": "TestPassword",
+ "server": "TestServer",
+ "username": "TestUserName"
+ }
+ }
+}
+```
+
+As an alternative, you can review the network trace for when you create a connection in the Logic Apps designer. Find the `PUT` call to the managed API for the connector as previously described, and review the request body for all the information you need.
+
+## Deploy logic app resources (zip deploy)
+
+After you push your logic app project to your source repository, you can set up build and release pipelines that deploy logic apps to infrastructure inside or outside Azure.
+
+### Build your project
+
+To set up a build pipeline based on your logic app project type, follow the corresponding actions:
+
+* NuGet-based: The NuGet-based project structure is based on the .NET Framework. To build these projects, make sure to follow the build steps for .NET Standard. For more information, review the [Create a NuGet package using MSBuild](/nuget/create-packages/creating-a-package-msbuild) documentation.
+
+* Bundle-based: The extension bundle-based project isn't language specific and doesn't require any language-specific build steps. You can use any method to zip your project files.
+
+ > [!IMPORTANT]
+ > Make sure that the .zip file includes all workflow folders, configuration files such as host.json, connections.json, and any other related files.
+
+### Release to Azure
+
+To set up a release pipeline that deploys to Azure, choose the associated option for GitHub, Azure DevOps, or Azure CLI.
+
+> [!NOTE]
+> Azure Logic Apps currently doesn't support Azure deployment slots.
+
+#### [GitHub](#tab/github)
+
+For GitHub deployments, you can deploy your logic app by using [GitHub Actions](https://docs.github.com/actions), for example, the GitHub Action for Azure Functions. This action requires that you pass through the following information:
+
+* Your build artifact
+* The logic app name to use for deployment
+* Your publish profile
+
+```yaml
+- name: 'Run Azure Functions Action'
+ uses: Azure/functions-action@v1
+ id: fa
+ with:
+ app-name: {your-logic-app-name}
+ package: '{your-build-artifact}.zip'
+ publish-profile: {your-logic-app-publish-profile}
+```
+
+For more information, review the [Continuous delivery by using GitHub Actions](../azure-functions/functions-how-to-github-actions.md) documentation.
+
+#### [Azure DevOps](#tab/azure-devops)
+
+For Azure DevOps deployments, you can deploy your logic app by using the [Azure Function App Deploy task](/devops/pipelines/tasks/deploy/azure-function-app) in Azure Pipelines. This action requires that you pass through the following information:
+
+* Your build artifact
+* The logic app name to use for deployment
+* Your publish profile
+
+```yaml
+- task: AzureFunctionApp@1
+ displayName: 'Deploy logic app workflows'
+ inputs:
+ azureSubscription: '{your-service-connection}'
+ appType: 'workflowapp'
+ appName: '{your-logic-app-name}'
+ package: '{your-build-artifact}.zip'
+ deploymentMethod: 'zipDeploy'
+```
+
+For more information, review the [Deploy an Azure Function using Azure Pipelines](/devops/pipelines/targets/azure-functions-windows) documentation.
+
+#### [Azure CLI](#tab/azure-cli)
+
+If you use other deployment tools, you can deploy your logic app by using the Azure CLI commands for single-tenant Azure Logic Apps. For example, to deploy your zipped artifact to an Azure resource group, run the following CLI command:
+
+`az logicapp deployment source config-zip -g {your-resource-group} --name {your-logic-app-name} --src {your-build-artifact}.zip`
+++
+### Release to containers
+
+If you containerize your logic app, deployment works mostly the same as any other container you deploy and manage. For more information about containerizing logic apps and deploying to Docker, review [Deploy your logic app to a Docker container from Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md#deploy-to-docker).
+
+For examples that show how to implement an end-to-end container build and deployment pipeline, review [CI/CD for Containers](https://azure.microsoft.com/solutions/architecture/cicd-for-containers/).
+
+## Next steps
+
+* [DevOps deployment for single-tenant Azure Logic Apps (preview)](devops-deployment-single-tenant-azure-logic-apps.md)
+
+We'd like to hear about your experiences with the preview logic app resource type and preview single-tenant model!
+
+- For bugs or problems, [create your issues in GitHub](https://github.com/Azure/logicapps/issues).
+- For questions, requests, comments, and other feedback, [use this feedback form](https://aka.ms/logicappsdevops).
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/single-tenant-overview-compare.md
+
+ Title: Overview - single-tenant (preview) Azure Logic Apps
+description: Learn the differences between single-tenant (preview), multi-tenant, and integration service environment (ISE) for Azure Logic Apps.
+
+ms.suite: integration
++ Last updated : 05/05/2021++
+# Single-tenant (preview) versus multi-tenant and integration service environment for Azure Logic Apps
+
+> [!IMPORTANT]
+> Currently in preview, the single-tenant Logic Apps environment and **Logic App (Preview)** resource type are subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Azure Logic Apps is a cloud-based platform for creating and running automated *logic app workflows* that integrate your apps, data, services, and systems. With this platform, you can quickly develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios. To create a logic app, you use either the original **Logic App (Consumption)** resource type or the new **Logic App (Preview)** resource type.
+
+Before you choose which resource type to use, review this article to learn how the new preview resource type compares to the original. You can then decide which type is best to use, based on your scenario's needs, solution requirements, and the environment where you want to deploy, host, and run your workflows.
+
+If you're new to Azure Logic Apps, review the following documentation:
+
+* [What is Azure Logic Apps?](logic-apps-overview.md)
+* [What is a *logic app workflow*?](logic-apps-overview.md#logic-app-concepts)
+
+<a name="resource-environment-differences"></a>
+
+## Resource types and environments
+
+To create logic app workflows, you choose the **Logic App** resource type based on your scenario, solution requirements, the capabilities that you want, and the environment where you want to run your workflows.
+
+The following table briefly summarizes differences between the new **Logic App (Preview)** resource type and the original **Logic App (Consumption)** resource type. You'll also learn how the *single-tenant* (preview) environment compares to the *multi-tenant* and *integration service environment (ISE)* for deploying, hosting, and running your logic app workflows.
++
+<a name="preview-resource-type-introduction"></a>
+
+## Logic App (Preview) resource
+
+The **Logic App (Preview)** resource type is powered by the redesigned Azure Logic Apps (Preview) runtime, which uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and is hosted as an extension on the Azure Functions runtime. This design provides portability, flexibility, and more performance for your logic apps plus other capabilities and benefits inherited from the Azure Functions platform and Azure App Service ecosystem.
+
+For example, you can run **Logic App (Preview)** workflows anywhere that you can run Azure function apps and their functions. The preview resource type introduces a resource structure that can have multiple workflows, similar to how an Azure function app can include multiple functions. With a 1-to-many mapping, workflows in the same logic app and tenant share compute and processing resources, providing better performance due to their proximity. This structure differs from the **Logic App (Consumption)** resource where you have a 1-to-1 mapping between a logic app resource and a workflow.
+
+To learn more about portability, flexibility, and performance improvements, continue with the following sections. Or, for more information about the redesigned runtime and Azure Functions extensibility, review the following documentation:
+
+* [Azure Logic Apps Running Anywhere - Runtime Deep Dive](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-runtime-deep-dive/ba-p/1835564)
+* [Introduction to Azure Functions](../azure-functions/functions-overview.md)
+* [Azure Functions triggers and bindings](../azure-functions/functions-triggers-bindings.md)
+
+<a name="portability"></a>
+<a name="flexibility"></a>
+
+### Portability and flexibility
+
+When you create logic apps using the **Logic App (Preview)** resource type, you can run your workflows anywhere you can run Azure function apps and their functions, not just in the single-tenant service environment.
+
+For example, when you use Visual Studio Code with the Azure Logic Apps (Preview) extension, you can *locally* develop, build, and run your workflows in your development environment without having to deploy to Azure. If your scenario requires containers, you can containerize your logic apps and deploy as Docker containers.
+
+These capabilities provide major improvements and substantial benefits compared to the multi-tenant model, which requires you to develop against an existing running resource in Azure. Also, the multi-tenant model for automating **Logic App (Consumption)** resource deployment is completely based on Azure Resource Manager (ARM) templates, which combine and handle resource provisioning for both apps and infrastructure.
+
+With the **Logic App (Preview)** resource type, deployment becomes easier because you can separate app deployment from infrastructure deployment. You can package the redesigned runtime and workflows together as part of your logic app. You can use generic steps or tasks that build, assemble, and zip your logic app resources into ready-to-deploy artifacts. To deploy your infrastructure, you can still use ARM templates to separately provision those resources along with other processes and pipelines that you use for those purposes.
+
+To deploy your app, copy the artifacts to the host environment and then start your apps to run your workflows. Or, integrate your artifacts into deployment pipelines using the tools and processes that you already know and use. That way, you can deploy using your own chosen tools, no matter the technology stack that you use for development.
+
+By using standard build and deploy options, you can focus on app development separately from infrastructure deployment. As a result, you get a more generic project model where you can apply many similar or the same deployment options that you use for a generic app. You also benefit from a more consistent experience for building deployment pipelines around your app projects and for running the required tests and validations before publishing to production.
+
+<a name="performance"></a>
+
+### Performance
+
+Using the **Logic App (Preview)** resource type, you can create and run multiple workflows in the same single logic app and tenant. With this 1-to-many mapping, these workflows share resources, such as compute, processing, storage, and network, providing better performance due to their proximity.
+
+The preview logic app resource type and redesigned Azure Logic Apps (Preview) runtime provide another significant improvement by making the more popular managed connectors available as built-in operations. For example, you can use built-in operations for Azure Service Bus, Azure Event Hubs, SQL, and others. Meanwhile, the managed connector versions are still available and continue to work.
+
+When you use the new built-in operations, you create connections called *built-in connections* or *service provider connections*. Their managed connection counterparts are called *API connections*, which are created and run separately as Azure resources that you also have to then deploy by using ARM templates. Built-in operations and their connections run locally in the same process that runs your workflows. Both are hosted on the redesigned Logic Apps runtime. As a result, built-in operations and their connections provide better performance due to proximity with your workflows. This design also works well with deployment pipelines because the service provider connections are packaged into the same build artifact.
+
+## Create, build, and deploy options
+
+To create a logic app, you have multiple options based on the environment that you want, for example:
+
+**Single-tenant environment**
+
+| Option | Resources and tools | More information |
+|---|---|---|
+| Azure portal | **Logic App (Preview)** resource type | [Create integration workflows for single-tenant Logic Apps - Azure portal](create-single-tenant-workflows-azure-portal.md) |
+| Visual Studio Code | [**Azure Logic Apps (Preview)** extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurelogicapps) | [Create integration workflows for single-tenant Logic Apps - Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md) |
+| Azure CLI | Logic Apps Azure CLI extension | Not yet available |
+||||
+
+**Multi-tenant environment**
+
+| Option | Resources and tools | More information |
+|---|---|---|
+| Azure portal | **Logic App (Consumption)** resource type | [Quickstart: Create integration workflows in multi-tenant Azure Logic Apps - Azure portal](quickstart-create-first-logic-app-workflow.md) |
+| Visual Studio Code | [**Azure Logic Apps (Consumption)** extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-logicapps) | [Quickstart: Create integration workflows in multi-tenant Azure Logic Apps - Visual Studio Code](quickstart-create-logic-apps-visual-studio-code.md)
+| Azure CLI | [**Logic Apps Azure CLI** extension](https://github.com/Azure/azure-cli-extensions/tree/master/src/logic) | - [Quickstart: Create and manage integration workflows in multi-tenant Azure Logic Apps - Azure CLI](quickstart-logic-apps-azure-cli.md) <p><p>- [az logic](/cli/azure/logic) |
+| Azure Resource Manager | [**Create a logic app** Azure Resource Manager (ARM) template](https://azure.microsoft.com/resources/templates/101-logic-app-create/) | [Quickstart: Create and deploy integration workflows in multi-tenant Azure Logic Apps - ARM template](quickstart-create-deploy-azure-resource-manager-template.md) |
+| Azure PowerShell | [Az.LogicApp module](/powershell/module/az.logicapp) | [Get started with Azure PowerShell](/powershell/azure/get-started-azureps) |
+| Azure REST API | [Azure Logic Apps REST API](/rest/api/logic) | [Get started with Azure REST API reference](/rest/api/azure) |
+||||
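+
+For example, with the multi-tenant Azure CLI extension, you might create a Consumption workflow from a definition file. This is only a sketch; the resource group, workflow name, and definition file name are placeholders:
+
+```bash
+# Install the Logic Apps CLI extension, then create a workflow from a JSON definition file.
+az extension add --name logic
+az logic workflow create \
+  --resource-group MyResourceGroup \
+  --location westus \
+  --name my-consumption-workflow \
+  --definition workflow-definition.json
+```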
+
+**Integration service environment**
+
+| Option | Resources and tools | More information |
+|---|---|---|
+| Azure portal | **Logic App (Consumption)** resource type with an existing ISE resource | Same as [Quickstart: Create integration workflows in multi-tenant Azure Logic Apps - Azure portal](quickstart-create-first-logic-app-workflow.md), but select an ISE, not a multi-tenant region. |
+||||
+
+Although your development experiences differ based on whether you create **Consumption** or **Preview** logic app resources, you can find and access all your deployed logic apps under your Azure subscription.
+
+For example, in the Azure portal, the **Logic apps** page shows both **Consumption** and **Preview** logic app resource types. In Visual Studio Code, deployed logic apps appear under your Azure subscription, but they are grouped by the extension that you used, namely **Azure: Logic Apps (Consumption)** and **Azure: Logic Apps (Preview)**.
+
+<a name="stateful-stateless"></a>
+
+## Stateful and stateless workflows
+
+With the preview logic app type, you can create these workflow types within the same logic app:
+
+* *Stateful*
+
+ Create stateful workflows when you need to keep, review, or reference data from previous events. These workflows save the inputs and outputs for each action and their states in external storage, which makes reviewing the run details and history possible after each run finishes. Stateful workflows provide high resiliency if outages happen. After services and systems are restored, you can reconstruct interrupted runs from the saved state and rerun the workflows to completion. Stateful workflows can continue running for much longer than stateless workflows.
+
+* *Stateless*
+
+ Create stateless workflows when you don't need to save, review, or reference data from previous events in external storage for later review. These workflows save the inputs and outputs for each action and their states *only in memory*, rather than transferring this data to external storage. As a result, stateless workflows have shorter runs that are typically no longer than 5 minutes, faster performance with quicker response times, higher throughput, and reduced running costs because the run details and history aren't kept in external storage. However, if outages happen, interrupted runs aren't automatically restored, so the caller needs to manually resubmit interrupted runs. These workflows can only run synchronously.
+
+   For easier debugging, you can enable run history for a stateless workflow, which has some impact on performance, and then disable the run history when you're done. For more information, see [Create stateful and stateless workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#enable-run-history-stateless) or [Create single-tenant based workflows in the Azure portal](create-single-tenant-workflows-azure-portal.md#enable-run-history-stateless).
+
+ > [!NOTE]
+ > Stateless workflows currently support only *actions* for [managed connectors](../connectors/managed.md),
+   > which are deployed in Azure, not triggers. To start your workflow, select the
+ > [built-in Request, Event Hubs, or Service Bus trigger](../connectors/built-in.md).
+ > These triggers run natively in the Azure Logic Apps Preview runtime. For more information about limited,
+ > unavailable, or unsupported triggers, actions, and connectors, see
+ > [Changed, limited, unavailable, or unsupported capabilities](#limited-unavailable-unsupported).
+
+<a name="nested-behavior"></a>
+
+### Nested behavior differences between stateful and stateless workflows
+
+You can [make a workflow callable](../logic-apps/logic-apps-http-endpoint.md) from other workflows that exist in the same **Logic App (Preview)** resource by using the [Request trigger](../connectors/connectors-native-reqres.md), [HTTP Webhook trigger](../connectors/connectors-native-webhook.md), or managed connector triggers that have the [ApiConnectionWebhook type](../logic-apps/logic-apps-workflow-actions-triggers.md#apiconnectionwebhook-trigger) and can receive HTTPS requests.
+
+Here are the behavior patterns that nested workflows can follow after a parent workflow calls a child workflow:
+
+* Asynchronous polling pattern
+
+   The parent doesn't wait for a response to its initial call, but continually checks the child's run history until the child finishes running. By default, stateful workflows follow this pattern, which is ideal for long-running child workflows that might exceed [request timeout limits](../logic-apps/logic-apps-limits-and-config.md).
+
+* Synchronous pattern ("fire and forget")
+
+ The child acknowledges the call by immediately returning a `202 ACCEPTED` response, and the parent continues to the next action without waiting for the results from the child. Instead, the parent receives the results when the child finishes running. Child stateful workflows that don't include a Response action always follow the synchronous pattern. For child stateful workflows, the run history is available for you to review.
+
+ To enable this behavior, in the workflow's JSON definition, set the `operationOptions` property to `DisableAsyncPattern`. For more information, see [Trigger and action types - Operation options](../logic-apps/logic-apps-workflow-actions-triggers.md#operation-options).
+
+* Trigger and wait
+
+   For a child stateless workflow, the parent waits for a response that returns the results from the child. This pattern works similarly to using the built-in [HTTP trigger or action](../connectors/connectors-native-http.md) to call a child workflow. Child stateless workflows that don't include a Response action immediately return a `202 ACCEPTED` response, but the parent waits for the child to finish before continuing to the next action. These behaviors apply only to child stateless workflows.
+
+This table specifies the child workflow's behavior based on whether the parent and child are stateful, stateless, or are mixed workflow types:
+
+| Parent workflow | Child workflow | Child behavior |
+|--|-|-|
+| Stateful | Stateful | Asynchronous or synchronous with `"operationOptions": "DisableAsyncPattern"` setting |
+| Stateful | Stateless | Trigger and wait |
+| Stateless | Stateful | Synchronous |
+| Stateless | Stateless | Trigger and wait |
+||||
+
+<a name="other-capabilities"></a>
+
+## Other preview capabilities
+
+The **Logic App (Preview)** resource and single-tenant model include many current and new capabilities, for example:
+
+* Create logic apps and their workflows from [400+ connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) apps and services plus connectors for on-premises systems.
+
+ * More managed connectors are now available as built-in operations and run similarly to other built-in operations, such as Azure Functions. Built-in operations run natively on the redesigned Azure Logic Apps Preview runtime. For example, new built-in operations include Azure Service Bus, Azure Event Hubs, SQL Server, and MQ.
+
+ > [!NOTE]
+ > For the built-in SQL Server version, only the **Execute Query** action can directly connect to Azure
+ > virtual networks without using the [on-premises data gateway](logic-apps-gateway-connection.md).
+
+   * You can create your own built-in connectors for any service that you need by using the [preview's extensibility framework](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). Like built-in operations such as Azure Service Bus and SQL Server, custom built-in connectors provide higher throughput, low latency, and local connectivity because they run in the same process as the redesigned runtime. By contrast, [custom managed connectors](../connectors/apis-list.md#custom-apis-and-connectors) aren't currently supported during preview.
+
+ The authoring capability is currently available only in Visual Studio Code, but isn't enabled by default. To create these connectors, [switch your project from extension bundle-based (Node.js) to NuGet package-based (.NET)](create-single-tenant-workflows-visual-studio-code.md#enable-built-in-connector-authoring). For more information, see [Azure Logic Apps Running Anywhere - Built-in connector extensibility](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272).
+
+ * You can use the B2B actions for Liquid Operations and XML Operations without an integration account. To use these actions, you need to have Liquid maps, XML maps, or XML schemas that you can upload through the respective actions in the Azure portal or add to your Visual Studio Code project's **Artifacts** folder using the respective **Maps** and **Schemas** folders.
+
+ * Logic app (preview) resources can run anywhere because the Azure Logic Apps service generates Shared Access Signature (SAS) connection strings that these logic apps can use for sending requests to the cloud connection runtime endpoint. The Logic Apps service saves these connection strings with other application settings so that you can easily store these values in Azure Key Vault when you deploy in Azure.
+
+ > [!NOTE]
+ > By default, a **Logic App (Preview)** resource has the [system-assigned managed identity](../logic-apps/create-managed-service-identity.md)
+ > automatically enabled to authenticate connections at runtime. This identity differs from the authentication
+ > credentials or connection string that you use when you create a connection. If you disable this identity,
+ > connections won't work at runtime. To view this setting, on your logic app's menu, under **Settings**, select **Identity**.
+
+* Stateless workflows run only in memory so that they finish more quickly, respond faster, have higher throughput, and cost less to run because the run histories and data between actions don't persist in external storage. Optionally, you can enable run history for easier debugging. For more information, see [Stateful versus stateless workflows](#stateful-stateless).
+
+* You can locally run, test, and debug your logic apps and their workflows in the Visual Studio Code development environment.
+
+ Before you run and test your logic app, you can make debugging easier by adding and using breakpoints inside the **workflow.json** file for a workflow. However, breakpoints are supported only for actions at this time, not triggers. For more information, see [Create stateful and stateless workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#manage-breakpoints).
+
+* Directly publish or deploy logic apps and their workflows from Visual Studio Code to various hosting environments such as Azure and [Docker containers](/dotnet/core/docker/introduction).
+
+* Enable diagnostics logging and tracing capabilities for your logic app by using [Application Insights](../azure-monitor/app/app-insights-overview.md) when supported by your Azure subscription and logic app settings.
+
+* Access networking capabilities, such as connect and integrate privately with Azure virtual networks, similar to Azure Functions when you create and deploy your logic apps using the [Azure Functions Premium plan](../azure-functions/functions-premium-plan.md). For more information, review the following documentation:
+
+ * [Azure Functions networking options](../azure-functions/functions-networking-options.md)
+ * [Azure Logic Apps Running Anywhere - Networking possibilities with Azure Logic Apps Preview](https://techcommunity.microsoft.com/t5/integrations-on-azure/logic-apps-anywhere-networking-possibilities-with-logic-app/ba-p/2105047)
+
+* Regenerate access keys for managed connections used by individual workflows in a **Logic App (Preview)** resource. For this task, [follow the same steps for the **Logic Apps (Consumption)** resource but at the individual workflow level](logic-apps-securing-a-logic-app.md#regenerate-access-keys), not the logic app resource level.
+
+For more information, see [Changed, limited, unavailable, and unsupported capabilities](#limited-unavailable-unsupported) and the [Logic Apps Public Preview Known Issues page in GitHub](https://github.com/Azure/logicapps/blob/master/articles/logic-apps-public-preview-known-issues.md).
+
+<a name="limited-unavailable-unsupported"></a>
+
+## Changed, limited, unavailable, or unsupported capabilities
+
+For the **Logic App (Preview)** resource, these capabilities have changed, or they are currently limited, unavailable, or unsupported:
+
+* **OS support**: Currently, the designer in Visual Studio Code doesn't work on Linux OS, but you can still deploy logic apps that use the Logic Apps Preview runtime to Linux-based virtual machines. For now, you can build your logic apps in Visual Studio Code on Windows or macOS and then deploy to a Linux-based virtual machine.
+
+* **Triggers and actions**: Built-in triggers and actions run natively in the Logic Apps Preview runtime, while managed connectors are deployed in Azure. Some built-in triggers are unavailable, such as Sliding Window and Batch. To start a stateful or stateless workflow, use the [built-in Recurrence, Request, HTTP, HTTP Webhook, Event Hubs, or Service Bus trigger](../connectors/apis-list.md). In the designer, built-in triggers and actions appear under the **Built-in** tab.
+
+ For *stateful* workflows, [managed connector triggers and actions](../connectors/managed.md) appear under the **Azure** tab, except for the unavailable operations listed below. For *stateless* workflows, the **Azure** tab doesn't appear when you want to select a trigger. You can select only [managed connector *actions*, not triggers](../connectors/managed.md). Although you can enable Azure-hosted managed connectors for stateless workflows, the designer doesn't show any managed connector triggers for you to add.
+
+ > [!NOTE]
+ > To run locally in Visual Studio Code, webhook-based triggers and actions require additional setup. For more information, see
+ > [Create stateful and stateless workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#webhook-setup).
+
+ * These triggers and actions have either changed or are currently limited, unsupported, or unavailable:
+
+ * [On-premises data gateway *triggers*](../connectors/managed.md#on-premises-connectors) are unavailable, but gateway actions *are* available.
+
+ * The built-in action, [Azure Functions - Choose an Azure function](logic-apps-azure-functions.md) is now **Azure Function Operations - Call an Azure function**. This action currently works only for functions that are created from the **HTTP Trigger** template.
+
+ In the Azure portal, you can select an HTTP trigger function that you can access by creating a connection through the user experience. If you inspect the function action's JSON definition in code view or the **workflow.json** file, the action refers to the function by using a `connectionName` reference. This version abstracts the function's information as a connection, which you can find in your project's **connections.json** file, which is available after you create a connection.
+
+ > [!NOTE]
+ > In the single-tenant model, the function action supports only query string authentication.
+ > Azure Logic Apps Preview gets the default key from the function when making the connection,
+ > stores that key in your app's settings, and uses the key for authentication when calling the function.
+ >
+ > As in the multi-tenant model, if you renew this key, for example, through the Azure Functions experience
+ > in the portal, the function action no longer works due to the invalid key. To fix this problem, you need
+ > to recreate the connection to the function that you want to call or update your app's settings with the new key.
+
+ * The built-in action, [Inline Code - Execute JavaScript Code](logic-apps-add-run-inline-code.md) is now **Inline Code Operations - Run in-line JavaScript**.
+
+ * Inline Code Operations actions no longer require an integration account.
+
+ * For macOS and Linux, **Inline Code Operations** is now supported when you use the Azure Logic Apps (Preview) extension in Visual Studio Code.
+
+ * You no longer have to restart your logic app if you make changes in an **Inline Code Operations** action.
+
+ * **Inline Code Operations** actions have [updated limits](logic-apps-limits-and-config.md).
+
+ * Some [built-in B2B triggers and actions for integration accounts](../connectors/managed.md#integration-account-connectors) are unavailable, for example, the **Flat File** encoding and decoding actions.
+
+ * The built-in action, [Azure Logic Apps - Choose a Logic App workflow](logic-apps-http-endpoint.md) is now **Workflow Operations - Invoke a workflow in this workflow app**.
+
+* [Custom managed connectors](../connectors/apis-list.md#custom-apis-and-connectors) aren't currently supported.
+
+* **Hosting plan availability**: Whether you create the single-tenant **Logic App (Preview)** resource type in the Azure portal or deploy from Visual Studio Code, you can only use the Premium or App Service hosting plan in Azure. The preview resource type doesn't support Consumption hosting plans. You can deploy from Visual Studio Code to a Docker container, but not to an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md).
+
+* **Breakpoint debugging in Visual Studio Code**: Although you can add and use breakpoints inside the **workflow.json** file for a workflow, breakpoints are supported only for actions at this time, not triggers. For more information, see [Create stateful and stateless workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#manage-breakpoints).
+
+* **Zoom control**: The zoom control is currently unavailable on the designer.
+
+* **Trigger history and run history**: For the **Logic App (Preview)** resource type, trigger history and run history in the Azure portal appear at the workflow level, not the logic app level. To find this historical data, follow these steps:
+
+ * To view the run history, open the workflow in your logic app. On the workflow menu, under **Developer**, select **Monitor**.
+ * To review the trigger history, open the workflow in your logic app. On the workflow menu, under **Developer**, select **Trigger Histories**.
+
+<a name="firewall-permissions"></a>
+
+## Strict network and firewall traffic permissions
+
+If your environment has strict network requirements or firewalls that limit traffic, you have to allow access for any trigger or action connections in your logic app workflows. To find the fully qualified domain names (FQDNs) for these connections, review the corresponding sections in these topics:
+
+* [Firewall permissions for single tenant logic apps - Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#firewall-setup)
+* [Firewall permissions for single tenant logic apps - Azure portal](create-single-tenant-workflows-azure-portal.md#firewall-setup)
+
+## Next steps
+
+* [Create single-tenant based workflows in the Azure portal](create-single-tenant-workflows-azure-portal.md)
+* [Create stateful and stateless workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md)
+* [Logic Apps Public Preview Known Issues page in GitHub](https://github.com/Azure/logicapps/blob/master/articles/logic-apps-public-preview-known-issues.md)
+
+We'd also like to hear about your experiences with the preview logic app resource type and preview single-tenant model!
+
+* For bugs or problems, [create your issues in GitHub](https://github.com/Azure/logicapps/issues).
+* For questions, requests, comments, and other feedback, [use this feedback form](https://aka.ms/lafeedback).
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/azure-machine-learning-release-notes.md
__RSS feed__: Get notified when this page is updated by copying and pasting the
+ **azureml-automl-runtime** + Improved AutoML Scoring script to make it consistent with designer + Patch bug where forecasting with the Prophet model would throw a "missing column" error if trained on an earlier version of the SDK.
+ + Added the ARIMAX model to the public-facing, forecasting-supported model lists of the AutoML SDK. Here, ARIMAX is a regression with ARIMA errors and a special case of the transfer function models developed by Box and Jenkins. For a discussion of how the two approaches are different, see [The ARIMAX model muddle](https://robjhyndman.com/hyndsight/arimax/). Unlike the rest of the multivariate models that use auto-generated, time-dependent features (hour of the day, day of the year, and so on) in AutoML, this model uses only features that are provided by the user, and it makes interpreting coefficients easy.
+ **azureml-contrib-dataset** + Updated documentation description with indication that libfuse should be installed while using mount. + **azureml-core**
machine-learning How To Inference Server Http https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-inference-server-http.md
+
+ Title: Azure Machine Learning inference HTTP server
+
+description: Learn how to enable local development with Azure machine learning inference http server.
++++++++ Last updated : 05/14/2021++
+# Azure Machine Learning inference HTTP server (Preview)
+
+The Azure Machine Learning inference HTTP server [(preview)](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) is a Python package that allows you to easily validate your entry script (`score.py`) in a local development environment. If there's a problem with the scoring script, the server will return an error. It will also return the location where the error occurred.
+
+The server can also be used when creating validation gates in a continuous integration and deployment pipeline. For example, start the server with the candidate script and run the test suite against the local endpoint.
+
+## Prerequisites
+
+- Python version 3.7
+
+## Installation
+
+> [!NOTE]
+> To avoid package conflicts, install the server in a virtual environment.
+
+To install the `azureml-inference-server-http` package, run the following command in your terminal or command prompt:
+
+```bash
+python -m pip install azureml-inference-server-http
+```
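+
+To confirm that the package installed, you can list its details with pip. This quick check isn't part of the original steps:
+
+```bash
+python -m pip show azureml-inference-server-http
+```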
+
+## Use the server
+
+1. Create a directory to hold your files:
+
+ ```bash
+ mkdir server_quickstart
+ cd server_quickstart
+ ```
+
+1. To avoid package conflicts, create a virtual environment and activate it:
+
+ ```bash
+ virtualenv myenv
+ source myenv/bin/activate
+ ```
+
+1. Install the `azureml-inference-server-http` package from the pypi feed:
+
+ ```bash
+ python -m pip install azureml-inference-server-http
+ ```
+
+1. Create your entry script (`score.py`). The following example creates a basic entry script:
+
+ ```bash
+ echo '
+ import time
+
+ def init():
+ time.sleep(1)
+
+ def run(input_data):
+ return {"message":"Hello, World!"}
+ ' > score.py
+ ```
+
+ The directory structure should look like the following tree structure:
+
+ ```text
+ server_quickstart/
+   ├── score.py
+   └── myenv/lib/python3.7/site-packages
+       ├── pip
+       ├── setuptools
+       ├── ...
+       └── azureml-inference-server-http
+ ```
+
+1. Start the server and set `score.py` as the entry script:
+
+ ```bash
+ azmlinfsrv --entry_script score.py
+ ```
+
+ > [!NOTE]
+   > The server is hosted on 0.0.0.0, which means it listens on all IP addresses of the hosting machine.
+
+   The server listens on port 5001 at these routes:
+
+ | Name | Route|
+   | --- | --- |
+ | Liveness Probe | 127.0.0.1:5001/|
+ | Score | 127.0.0.1:5001/score|
+
+1. Send a scoring request to the server using `curl`:
+
+ ```bash
+ curl -p 127.0.0.1:5001/score
+ ```
+
+ The server should respond like this.
+
+ ```bash
+ {"message": "Hello, World!"}
+ ```
+
+Now you can modify the scoring script and test your changes by running the server again.
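+
+To exercise the scoring route with a request body, you can also send a POST request. The sample `run()` function ignores its input, so any payload returns the same message; the payload below is only a placeholder:
+
+```bash
+curl -X POST -H "Content-Type: application/json" \
+     -d '{"data": [1, 2, 3]}' \
+     127.0.0.1:5001/score
+```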
+
+## Server parameters
+
+The following table contains the parameters accepted by the server:
+
+| Parameter | Required | Default | Description |
+| --- | --- | --- | --- |
+| entry_script | True | N/A | The relative or absolute path to the scoring script.|
+| model_dir | False | N/A | The relative or absolute path to the directory holding the model used for inferencing. |
+| port | False | 5001 | The serving port of the server.|
+| worker_count | False | 1 | The number of worker threads that will process concurrent requests. |
+
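+For example, the following command starts the server with several of these parameters. Only `--entry_script` appears earlier in this article; the remaining flag names are assumed to mirror the parameter names in the table:
+
+```bash
+# Flag names other than --entry_script are assumed from the parameter table; ./model is a placeholder path.
+azmlinfsrv --entry_script score.py --model_dir ./model --port 8080 --worker_count 2
+```
+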
+## Request flow
+
+The following steps explain how the Azure Machine Learning inference HTTP server handles incoming requests:
+
+1. A Python CLI wrapper sits around the server's network stack and is used to start the server.
+1. A client sends a request to the server.
+1. When a request is received, it goes through the [WSGI](https://www.fullstackpython.com/wsgi-servers.html) server and is then dispatched to one of the workers.
+ - [Gunicorn](https://docs.gunicorn.org/) is used on __Linux__.
+ - [Waitress](https://docs.pylonsproject.org/projects/waitress/) is used on __Windows__.
+1. The request is then handled by a [Flask](https://flask.palletsprojects.com/) app, which loads the entry script and any dependencies.
+1. Finally, the request is sent to your entry script. The entry script then makes an inference call to the loaded model and returns a response.
+
+## Frequently asked questions
+
+### Do I need to reload the server when changing the score script?
+
+After changing your scoring script (`score.py`), stop the server with `ctrl + c`. Then restart it with `azmlinfsrv --entry_script score.py`.
+
+### Which OS is supported?
+
+The Azure Machine Learning inference server runs on Windows and Linux-based operating systems.
+
+## Next steps
+
+For more information on creating an entry script and deploying models, see [How to deploy a model using Azure Machine Learning](how-to-deploy-and-where.md).
machine-learning How To Train With Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-with-custom-image.md
ws = Workspace.from_config()
### Define your environment
-Create an `Environment` object and enable Docker.
+Create an `Environment` object.
```python from azureml.core import Environment fastai_env = Environment("fastai2")
-fastai_env.docker.enabled = True
``` The specified base image in the following code supports the fast.ai library, which allows for distributed deep-learning capabilities. For more information, see the [fast.ai Docker Hub repository](https://hub.docker.com/u/fastdotai).
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-dns.md
In this scenario, there's a [hub and spoke](/azure/architecture/reference-archit
## On-premises workloads using a DNS forwarder
-For on-premises workloads to resolve the FQDN of a private endpoint, use a DNS forwarder to resolve the Azure service [public DNS zone](#azure-services-dns-zone-configuration) in Azure.
+For on-premises workloads to resolve the FQDN of a private endpoint, use a DNS forwarder to resolve the Azure service [public DNS zone](#azure-services-dns-zone-configuration) in Azure. A [DNS forwarder](/windows-server/identity/ad-ds/plan/reviewing-dns-concepts#resolving-names-by-using-forwarding) is a virtual machine running on the virtual network linked to the private DNS zone that can proxy DNS queries coming from other virtual networks or from on-premises. This is required because the query must originate from the virtual network to Azure DNS. A few options for DNS proxies are Windows running DNS services, Linux running DNS services, and [Azure Firewall](/azure/firewall/dns-settings).
The following scenario is for an on-premises network that has a DNS forwarder in Azure. This forwarder resolves DNS queries via a server-level forwarder to the Azure provided DNS [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md).
purview Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/frequently-asked-questions.md
Yes, Azure Purview supports Soft Delete for Azure subscription status management
No, Azure Purview does not provide Data Loss Prevention capabilities at this point.
-Read about [Data Loss Prevention in Microsoft Information Protection](https://docs.microsoft.com/microsoft-365/compliance/information-protection?view=o365-worldwide#prevent-data-loss) if you are interested in Data Loss Prevention features inside Microsoft 365.
+Read about [Data Loss Prevention in Microsoft Information Protection](/microsoft-365/compliance/information-protection#prevent-data-loss) if you are interested in Data Loss Prevention features inside Microsoft 365.
purview Register Scan Amazon S3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-amazon-s3.md
Previously updated : 04/21/2021 Last updated : 05/13/2021 # Customer intent: As a security officer, I need to understand how to use the Azure Purview connector for Amazon S3 service to set up, configure, and scan my Amazon S3 buckets.
The following table maps the regions where your data is stored to the region wher
| - | - | | US East (Ohio) | US East (Ohio) | | US East (N. Virginia) | US East (N. Virginia) |
-| US West (N. California) | US East (Ohio) |
-| US West (Oregon) | US East (Ohio) |
+| US West (N. California) | US East (Ohio) or US West (N. California) |
+| US West (Oregon) | US East (Ohio) or US West (Oregon) |
| Africa (Cape Town) | Europe (Frankfurt) |
-| Asia Pacific (Hong Kong) | Asia Pacific (Sydney) |
-| Asia Pacific (Mumbai) | Asia Pacific (Sydney) |
-| Asia Pacific (Osaka-Local) | Asia Pacific (Sydney) |
-| Asia Pacific (Seoul) | Asia Pacific (Sydney) |
-| Asia Pacific (Singapore) | Asia Pacific (Sydney) |
+| Asia Pacific (Hong Kong) | Asia Pacific (Sydney) or Asia Pacific (Singapore) |
+| Asia Pacific (Mumbai) | Asia Pacific (Sydney) or Asia Pacific (Singapore) |
+| Asia Pacific (Osaka-Local) | Asia Pacific (Sydney) or Asia Pacific (Tokyo) |
+| Asia Pacific (Seoul) | Asia Pacific (Sydney) or Asia Pacific (Tokyo) |
+| Asia Pacific (Singapore) | Asia Pacific (Sydney) or Asia Pacific (Singapore) |
| Asia Pacific (Sydney) | Asia Pacific (Sydney) |
-| Asia Pacific (Tokyo) | Asia Pacific (Sydney) |
+| Asia Pacific (Tokyo) | Asia Pacific (Sydney) or Asia Pacific (Tokyo) |
| Canada (Central) | US East (Ohio) | | China (Beijing) | Not supported | | China (Ningxia) | Not supported | | Europe (Frankfurt) | Europe (Frankfurt) | | Europe (Ireland) | Europe (Ireland) |
-| Europe (London) | Europe (Ireland) |
+| Europe (London) | Europe (Ireland) or Europe (London) |
| Europe (Milan) | Europe (Frankfurt) |
-| Europe (Paris) | Europe (Frankfurt) |
+| Europe (Paris) | Europe (Frankfurt) or Europe (Paris) |
| Europe (Stockholm) | Europe (Frankfurt) | | Middle East (Bahrain) | Europe (Frankfurt) | | South America (São Paulo) | US East (Ohio) |
security-center Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/alerts-reference.md
Previously updated : 04/28/2021 Last updated : 05/13/2021
Azure Defender alerts for container hosts aren't limited to the alerts below. Ma
[Further details and notes](defender-for-resource-manager-introduction.md)
-| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
-|||:-:|-|
-| **Antimalware broad files exclusion in your virtual machine (Preview)**<br>(ARM_AmBroadFilesExclusion) | Files exclusion from antimalware extension with broad exclusion rule was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Such exclusion practically disabling the Antimalware protection.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | | Medium |
-| **Antimalware disabled and code execution in your virtual machine (Preview)**<br>(ARM_AmDisablementAndCodeExecution) | Antimalware disabled at the same time as code execution on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers disable antimalware scanners to prevent detection while running unauthorized tools or infecting the machine with malware. | | High |
-| **Antimalware disabled in your virtual machine (Preview)**<br>(ARM_AmDisablement) | Antimalware disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | | Medium |
-| **Antimalware file exclusion and code execution in your virtual machine (Preview)**<br>(ARM_AmFileExclusionAndCodeExecution) | File excluded from your antimalware scanner at the same time as code was executed via a custom script extension on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running unauthorized tools or infecting the machine with malware. | | High |
-| **Antimalware file exclusion and code execution in your virtual machine (Preview)**<br>(ARM_AmTempFileExclusionAndCodeExecution) | Temporary file exclusion from antimalware extension in parallel to execution of code via custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | | High |
-| **Antimalware file exclusion in your virtual machine (Preview)**<br>(ARM_AmTempFileExclusion) | File excluded from your antimalware scanner on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running unauthorized tools or infecting the machine with malware. | | Medium |
-| **Antimalware real-time protection was disabled in your virtual machine (Preview)**<br>(ARM_AmRealtimeProtectionDisabled) | Real-time protection disablement of the antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | | Medium |
-| **Antimalware real-time protection was disabled temporarily in your virtual machine (Preview)**<br>(ARM_AmTempRealtimeProtectionDisablement) | Real-time protection temporary disablement of the antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | | Medium |
-| **Antimalware real-time protection was disabled temporarily while code was executed in your virtual machine (Preview)**<br>(ARM_AmRealtimeProtectionDisablementAndCodeExec) | Real-time protection temporary disablement of the antimalware extension in parallel to code execution via custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | | High |
-| **Antimalware temporarily disabled in your virtual machine (Preview)**<br>(ARM_AmTemporarilyDisablement) | Antimalware temporarily disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | | Medium |
-| **Antimalware unusual file exclusion in your virtual machine (Preview)**<br>(ARM_UnusualAmFileExclusion) | Unusual file exclusion from antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | | Medium |
-| **Custom script extension with suspicious command in your virtual machine (Preview)**<br>(ARM_CustomScriptExtensionSuspiciousCmd) | Custom script extension with suspicious command was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extension to execute a malicious code on your virtual machine via the Azure Resource Manager. | Execution | Medium |
-| **Custom script extension with suspicious entry-point in your virtual machine (Preview)**<br>(ARM_CustomScriptExtensionSuspiciousEntryPoint) | Custom script extension with a suspicious entry-point was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. The entry-point refers to a suspicious GitHub repository.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium |
-| **Custom script extension with suspicious payload in your virtual machine (Preview)**<br>(ARM_CustomScriptExtensionSuspiciousPayload) | Custom script extension with a payload from a suspicious GitHub repository was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium |
-| **MicroBurst exploitation toolkit used to enumerate resources in your subscriptions (Preview)**<br>(ARM_MicroBurst.AzDomainInfo) | MicroBurst's Information Gathering module was run on your subscription. This tool can be used to discover resources, permissions and network structures. This was detected by analyzing the Azure Activity logs and resource management operations in your subscription | | High |
-| **MicroBurst exploitation toolkit used to enumerate resources in your subscriptions (Preview)**<br>(ARM_MicroBurst.AzureDomainInfo) | MicroBurst's Information Gathering module was run on your subscription. This tool can be used to discover resources, permissions and network structures. This was detected by analyzing the Azure Activity logs and resource management operations in your subscription | | High |
-| **MicroBurst exploitation toolkit used to execute code on your virtual machine (Preview)**<br>(ARM_MicroBurst.AzVMBulkCMD) | MicroBurst's exploitation toolkit was used to execute code on your virtual machines. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
-| **MicroBurst exploitation toolkit used to execute code on your virtual machine (Preview)**<br>(RM_MicroBurst.AzureRmVMBulkCMD) | MicroBurst's exploitation toolkit was used to execute code on your virtual machines. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
-| **MicroBurst exploitation toolkit used to extract keys from your Azure key vaults (Preview)**<br>(ARM_MicroBurst.AzKeyVaultKeysREST) | MicroBurst's exploitation toolkit was used to extract keys from your Azure key vaults. This was detected by analyzing Azure Activity logs and resource management operations in your subscription. | | High |
-| **MicroBurst exploitation toolkit used to extract keys to your storage accounts (Preview)**<br>(ARM_MicroBurst.AZStorageKeysREST) | MicroBurst's exploitation toolkit was used to extract keys to your storage accounts. This was detected by analyzing Azure Activity logs and resource management operations in your subscription. | | High |
-| **MicroBurst exploitation toolkit used to extract secrets from your Azure key vaults (Preview)**<br>(ARM_MicroBurst.AzKeyVaultSecretsREST) | MicroBurst's exploitation toolkit was used to extract secrets from your Azure key vaults. This was detected by analyzing Azure Activity logs and resource management operations in your subscription. | | High |
-| **PowerZure exploitation toolkit used to elevate access from Azure AD to Azure (Preview)**<br>(ARM_PowerZure.AzureElevatedPrivileges) | PowerZure exploitation toolkit was used to elevate access from AzureAD to Azure. This was detected by analyzing Azure Resource Manager operations in your tenant. | | High |
-| **PowerZure exploitation toolkit used to enumerate resources (Preview)**<br>(ARM_PowerZure.GetAzureTargets) | PowerZure exploitation toolkit was used to enumerate resources on behalf of a legitimate user account in your organization. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
-| **PowerZure exploitation toolkit used to enumerate storage containers, shares, and tables (Preview)**<br>(ARM_PowerZure.ShowStorageContent) | PowerZure exploitation toolkit was used to enumerate storage shares, tables, and containers. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
-| **PowerZure exploitation toolkit used to execute a Runbook in your subscription (Preview)**<br>(ARM_PowerZure.StartRunbook) | PowerZure exploitation toolkit was used to execute a Runbook. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
-| **PowerZure exploitation toolkit used to extract Runbooks content (Preview)**<br>(ARM_PowerZure.AzureRunbookContent) | PowerZure exploitation toolkit was used to extract Runbook content. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
-| **Suspicious failed execution of custom script extension in your virtual machine (Preview)**<br>(ARM_CustomScriptExtensionSuspiciousFailure) | Suspicious failure of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Such failures may be associated with malicious scripts run by this extension. | Execution | Medium |
-| **Unusual config reset in your virtual machine (Preview)**<br>(ARM_VMAccessUnusualConfigReset) | An unusual config reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing VM Access extension to reset the configuration in your virtual machine and compromise it. | Credential Access | Medium |
-| **Unusual deletion of custom script extension in your virtual machine (Preview)**<br>(ARM_CustomScriptExtensionUnusualDeletion) | Unusual deletion of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium |
-| **Unusual execution of custom script extension in your virtual machine (Preview)**<br>(ARM_CustomScriptExtensionUnusualExecution) | Unusual execution of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium |
-| **Unusual user password reset in your virtual machine (Preview)**<br>(ARM_VMAccessUnusualPasswordReset) | An unusual user password reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing the VM Access extension to reset the credentials of a local user in your virtual machine and compromise it. | Credential Access | Medium |
-| **Unusual user SSH key reset in your virtual machine (Preview)**<br>(ARM_VMAccessUnusualSSHReset) | An unusual user SSH key reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing VM Access extension to reset SSH key of a user account in your virtual machine and compromise it. | Credential Access | Medium |
-| **Usage of MicroBurst exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials (Preview)**<br>(ARM_MicroBurst.RunCodeOnBehalf) | Usage of MicroBurst exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
-| **Usage of NetSPI techniques to maintain persistence in your Azure environment (Preview)**<br>(ARM_NetSPI.MaintainPersistence) | Usage of NetSPI persistence technique to create a webhook backdoor and maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
-| **Usage of PowerZure exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials (Preview)**<br>(ARM_PowerZure.RunCodeOnBehalf) | PowerZure exploitation toolkit detected attempting to run code or exfiltrate Azure Automation account credentials. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
-| **Usage of PowerZure function to maintain persistence in your Azure environment (Preview)**<br>(ARM_PowerZure.MaintainPersistence) | PowerZure exploitation toolkit detected creating a webhook backdoor to maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
-| **PREVIEW - Activity from a risky IP address**<br>(ARM.MCAS_ActivityFromAnonymousIPAddresses) | Users activity from an IP address that has been identified as an anonymous proxy IP address has been detected.<br>These proxies are used by people who want to hide their device's IP address, and can be used for malicious intent. This detection uses a machine learning algorithm that reduces false positives, such as mis-tagged IP addresses that are widely used by users in the organization.<br>Requires an active Microsoft Cloud App Security license. | - | Medium |
-| **PREVIEW - Activity from infrequent country**<br>(ARM.MCAS_ActivityFromInfrequentCountry) | Activity from a location that wasn't recently or ever visited by any user in the organization has occurred.<br>This detection considers past activity locations to determine new and infrequent locations. The anomaly detection engine stores information about previous locations used by users in the organization.<br>Requires an active Microsoft Cloud App Security license. | - | Medium |
-| **PREVIEW - Impossible travel activity**<br>(ARM.MCAS_ImpossibleTravelActivity) | Two user activities (in a single or multiple sessions) have occurred, originating from geographically distant locations. This occurs within a time period shorter than the time it would have taken the user to travel from the first location to the second. This indicates that a different user is using the same credentials.<br>This detection uses a machine learning algorithm that ignores obvious false positives contributing to the impossible travel conditions, such as VPNs and locations regularly used by other users in the organization. The detection has an initial learning period of seven days, during which it learns a new user's activity pattern.<br>Requires an active Microsoft Cloud App Security license. | - | Medium |
-| **PREVIEW - Azurite toolkit run detected**<br>(ARM_Azurite) | A known cloud-environment reconnaissance toolkit run has been detected in your environment. The tool [Azurite](https://github.com/mwrlabs/Azurite) can be used by an attacker (or penetration tester) to map your subscriptions' resources and identify insecure configurations. | - | High |
-| **PREVIEW - Suspicious management session using PowerShell detected**<br>(ARM_UnusedAppPowershellPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal that doesn't regularly use PowerShell to manage the subscription environment is now using PowerShell, and performing actions that can secure persistence for an attacker. | Persistence | Medium |
-| **PREVIEW - Suspicious management session using an inactive account detected**<br>(ARM_UnusedAccountPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal not in use for a long period of time is now performing actions that can secure persistence for an attacker. | Persistence | Medium |
-| **PREVIEW – Suspicious management session using Azure portal detected**<br>(ARM_UnusedAppIbizaPersistence) | Analysis of your subscription activity logs has detected a suspicious behavior. A principal that doesn't regularly use the Azure portal (Ibiza) to manage the subscription environment (hasn't used Azure portal to manage for the last 45 days, or a subscription that it is actively managing), is now using the Azure portal and performing actions that can secure persistence for an attacker. | - | Medium |
+| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
+|---|---|:---:|---|
+| **Antimalware broad files exclusion in your virtual machine**<br>(ARM_AmBroadFilesExclusion) | Files exclusion from antimalware extension with broad exclusion rule was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Such exclusion practically disabling the Antimalware protection.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | | Medium |
+| **Antimalware disabled and code execution in your virtual machine**<br>(ARM_AmDisablementAndCodeExecution) | Antimalware disabled at the same time as code execution on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers disable antimalware scanners to prevent detection while running unauthorized tools or infecting the machine with malware. | | High |
+| **Antimalware disabled in your virtual machine**<br>(ARM_AmDisablement) | Antimalware disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | | Medium |
+| **Antimalware file exclusion and code execution in your virtual machine**<br>(ARM_AmFileExclusionAndCodeExecution) | File excluded from your antimalware scanner at the same time as code was executed via a custom script extension on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running unauthorized tools or infecting the machine with malware. | | High |
+| **Antimalware file exclusion and code execution in your virtual machine**<br>(ARM_AmTempFileExclusionAndCodeExecution) | Temporary file exclusion from antimalware extension in parallel to execution of code via custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | | High |
+| **Antimalware file exclusion in your virtual machine**<br>(ARM_AmTempFileExclusion) | File excluded from your antimalware scanner on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running unauthorized tools or infecting the machine with malware. | | Medium |
+| **Antimalware real-time protection was disabled in your virtual machine**<br>(ARM_AmRealtimeProtectionDisabled) | Real-time protection disablement of the antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | | Medium |
+| **Antimalware real-time protection was disabled temporarily in your virtual machine**<br>(ARM_AmTempRealtimeProtectionDisablement) | Real-time protection temporary disablement of the antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | | Medium |
+| **Antimalware real-time protection was disabled temporarily while code was executed in your virtual machine**<br>(ARM_AmRealtimeProtectionDisablementAndCodeExec) | Real-time protection temporary disablement of the antimalware extension in parallel to code execution via custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | | High |
+| **Antimalware temporarily disabled in your virtual machine**<br>(ARM_AmTemporarilyDisablement) | Antimalware temporarily disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | | Medium |
+| **Antimalware unusual file exclusion in your virtual machine**<br>(ARM_UnusualAmFileExclusion) | Unusual file exclusion from antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | | Medium |
+| **Custom script extension with suspicious command in your virtual machine**<br>(ARM_CustomScriptExtensionSuspiciousCmd) | Custom script extension with suspicious command was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium |
+| **Custom script extension with suspicious entry-point in your virtual machine**<br>(ARM_CustomScriptExtensionSuspiciousEntryPoint) | Custom script extension with a suspicious entry-point was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. The entry-point refers to a suspicious GitHub repository.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium |
+| **Custom script extension with suspicious payload in your virtual machine**<br>(ARM_CustomScriptExtensionSuspiciousPayload) | Custom script extension with a payload from a suspicious GitHub repository was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium |
+| **MicroBurst exploitation toolkit used to enumerate resources in your subscriptions**<br>(ARM_MicroBurst.AzDomainInfo) | MicroBurst's Information Gathering module was run on your subscription. This tool can be used to discover resources, permissions and network structures. This was detected by analyzing the Azure Activity logs and resource management operations in your subscription | | High |
+| **MicroBurst exploitation toolkit used to enumerate resources in your subscriptions**<br>(ARM_MicroBurst.AzureDomainInfo) | MicroBurst's Information Gathering module was run on your subscription. This tool can be used to discover resources, permissions and network structures. This was detected by analyzing the Azure Activity logs and resource management operations in your subscription | | High |
+| **MicroBurst exploitation toolkit used to execute code on your virtual machine**<br>(ARM_MicroBurst.AzVMBulkCMD) | MicroBurst's exploitation toolkit was used to execute code on your virtual machines. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
+| **MicroBurst exploitation toolkit used to execute code on your virtual machine**<br>(RM_MicroBurst.AzureRmVMBulkCMD) | MicroBurst's exploitation toolkit was used to execute code on your virtual machines. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
+| **MicroBurst exploitation toolkit used to extract keys from your Azure key vaults**<br>(ARM_MicroBurst.AzKeyVaultKeysREST) | MicroBurst's exploitation toolkit was used to extract keys from your Azure key vaults. This was detected by analyzing Azure Activity logs and resource management operations in your subscription. | | High |
+| **MicroBurst exploitation toolkit used to extract keys to your storage accounts**<br>(ARM_MicroBurst.AZStorageKeysREST) | MicroBurst's exploitation toolkit was used to extract keys to your storage accounts. This was detected by analyzing Azure Activity logs and resource management operations in your subscription. | | High |
+| **MicroBurst exploitation toolkit used to extract secrets from your Azure key vaults**<br>(ARM_MicroBurst.AzKeyVaultSecretsREST) | MicroBurst's exploitation toolkit was used to extract secrets from your Azure key vaults. This was detected by analyzing Azure Activity logs and resource management operations in your subscription. | | High |
+| **PowerZure exploitation toolkit used to elevate access from Azure AD to Azure**<br>(ARM_PowerZure.AzureElevatedPrivileges) | PowerZure exploitation toolkit was used to elevate access from Azure AD to Azure. This was detected by analyzing Azure Resource Manager operations in your tenant. | | High |
+| **PowerZure exploitation toolkit used to enumerate resources**<br>(ARM_PowerZure.GetAzureTargets) | PowerZure exploitation toolkit was used to enumerate resources on behalf of a legitimate user account in your organization. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
+| **PowerZure exploitation toolkit used to enumerate storage containers, shares, and tables**<br>(ARM_PowerZure.ShowStorageContent) | PowerZure exploitation toolkit was used to enumerate storage shares, tables, and containers. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
+| **PowerZure exploitation toolkit used to execute a Runbook in your subscription**<br>(ARM_PowerZure.StartRunbook) | PowerZure exploitation toolkit was used to execute a Runbook. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
+| **PowerZure exploitation toolkit used to extract Runbooks content**<br>(ARM_PowerZure.AzureRunbookContent) | PowerZure exploitation toolkit was used to extract Runbook content. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
+| **Suspicious failed execution of custom script extension in your virtual machine**<br>(ARM_CustomScriptExtensionSuspiciousFailure) | Suspicious failure of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Such failures may be associated with malicious scripts run by this extension. | Execution | Medium |
+| **Unusual config reset in your virtual machine**<br>(ARM_VMAccessUnusualConfigReset) | An unusual config reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing VM Access extension to reset the configuration in your virtual machine and compromise it. | Credential Access | Medium |
+| **Unusual deletion of custom script extension in your virtual machine**<br>(ARM_CustomScriptExtensionUnusualDeletion) | Unusual deletion of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium |
+| **Unusual execution of custom script extension in your virtual machine**<br>(ARM_CustomScriptExtensionUnusualExecution) | Unusual execution of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium |
+| **Unusual user password reset in your virtual machine**<br>(ARM_VMAccessUnusualPasswordReset) | An unusual user password reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing the VM Access extension to reset the credentials of a local user in your virtual machine and compromise it. | Credential Access | Medium |
+| **Unusual user SSH key reset in your virtual machine**<br>(ARM_VMAccessUnusualSSHReset) | An unusual user SSH key reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing VM Access extension to reset SSH key of a user account in your virtual machine and compromise it. | Credential Access | Medium |
+| **Usage of MicroBurst exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials**<br>(ARM_MicroBurst.RunCodeOnBehalf) | Usage of MicroBurst exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
+| **Usage of NetSPI techniques to maintain persistence in your Azure environment**<br>(ARM_NetSPI.MaintainPersistence) | Usage of NetSPI persistence technique to create a webhook backdoor and maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
+| **Usage of PowerZure exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials**<br>(ARM_PowerZure.RunCodeOnBehalf) | PowerZure exploitation toolkit detected attempting to run code or exfiltrate Azure Automation account credentials. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
+| **Usage of PowerZure function to maintain persistence in your Azure environment**<br>(ARM_PowerZure.MaintainPersistence) | PowerZure exploitation toolkit detected creating a webhook backdoor to maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | | High |
+| **PREVIEW - Activity from a risky IP address**<br>(ARM.MCAS_ActivityFromAnonymousIPAddresses) | User activity from an IP address that has been identified as an anonymous proxy has been detected.<br>These proxies are used by people who want to hide their device's IP address, and can be used for malicious intent. This detection uses a machine learning algorithm that reduces false positives, such as mis-tagged IP addresses that are widely used by users in the organization.<br>Requires an active Microsoft Cloud App Security license. | - | Medium |
+| **PREVIEW - Activity from infrequent country**<br>(ARM.MCAS_ActivityFromInfrequentCountry) | Activity from a location that wasn't recently or ever visited by any user in the organization has occurred.<br>This detection considers past activity locations to determine new and infrequent locations. The anomaly detection engine stores information about previous locations used by users in the organization.<br>Requires an active Microsoft Cloud App Security license. | - | Medium |
+| **PREVIEW - Impossible travel activity**<br>(ARM.MCAS_ImpossibleTravelActivity) | Two user activities (in a single or multiple sessions) have occurred, originating from geographically distant locations. This occurs within a time period shorter than the time it would have taken the user to travel from the first location to the second. This indicates that a different user is using the same credentials.<br>This detection uses a machine learning algorithm that ignores obvious false positives contributing to the impossible travel conditions, such as VPNs and locations regularly used by other users in the organization. The detection has an initial learning period of seven days, during which it learns a new user's activity pattern.<br>Requires an active Microsoft Cloud App Security license. | - | Medium |
+| **PREVIEW - Azurite toolkit run detected**<br>(ARM_Azurite) | A known cloud-environment reconnaissance toolkit run has been detected in your environment. The tool [Azurite](https://github.com/mwrlabs/Azurite) can be used by an attacker (or penetration tester) to map your subscriptions' resources and identify insecure configurations. | - | High |
+| **PREVIEW - Suspicious management session using PowerShell detected**<br>(ARM_UnusedAppPowershellPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal that doesn't regularly use PowerShell to manage the subscription environment is now using PowerShell, and performing actions that can secure persistence for an attacker. | Persistence | Medium |
+| **PREVIEW - Suspicious management session using an inactive account detected**<br>(ARM_UnusedAccountPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal not in use for a long period of time is now performing actions that can secure persistence for an attacker. | Persistence | Medium |
+| **PREVIEW - Suspicious management session using Azure portal detected**<br>(ARM_UnusedAppIbizaPersistence) | Analysis of your subscription activity logs has detected suspicious behavior. A principal that doesn't regularly use the Azure portal (Ibiza) to manage the subscription environment (hasn't used the Azure portal to manage it for the last 45 days, or a subscription that it is actively managing) is now using the Azure portal and performing actions that can secure persistence for an attacker. | - | Medium |
| | | | |
Azure Defender alerts for container hosts aren't limited to the alerts below. Ma
| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
|--|-|:--:|-|
-| **Anomalous network protocol usage (Preview)**<br>(AzureDNS_ProtocolAnomaly) | Analysis of DNS transactions from %{CompromisedEntity} detected anomalous protocol usage. Such traffic, while possibly benign, may indicate abuse of this common protocol to bypass network traffic filtering. Typical related attacker activity includes copying remote administration tools to a compromised host and exfiltrating user data from it. | Exfiltration | - |
-| **Anonymity network activity (Preview)**<br>(AzureDNS_DarkWeb) | Analysis of DNS transactions from %{CompromisedEntity} detected anonymity network activity. Such activity, while possibly legitimate user behavior, is frequently employed by attackers to evade tracking and fingerprinting of network communications. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
-| **Anonymity network activity using web proxy (Preview)**<br>(AzureDNS_DarkWebProxy) | Analysis of DNS transactions from %{CompromisedEntity} detected anonymity network activity. Such activity, while possibly legitimate user behavior, is frequently employed by attackers to evade tracking and fingerprinting of network communications. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
-| **Attempted communication with suspicious sinkholed domain (Preview)**<br>(AzureDNS_SinkholedDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected request for sinkholed domain. Such activity, while possibly legitimate user behavior, is frequently an indication of the download or execution of malicious software. Typical related attacker activity is likely to include the download and execution of further malicious software or remote administration tools. | Exfiltration | - |
-| **Communication with possible phishing domain (Preview)**<br>(AzureDNS_PhishingDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected a request for a possible phishing domain. Such activity, while possibly benign, is frequently performed by attackers to harvest credentials to remote services. Typical related attacker activity is likely to include the exploitation of any credentials on the legitimate service. | Exfiltration | - |
-| **Communication with suspicious algorithmically generated domain (Preview)**<br>(AzureDNS_DomainGenerationAlgorithm) | Analysis of DNS transactions from %{CompromisedEntity} detected possible usage of a domain generation algorithm. Such activity, while possibly benign, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
-| **Communication with suspicious domain identified by threat intelligence (Preview)**<br>(AzureDNS_ThreatIntelSuspectDomain) | Communication with suspicious domain was detected by analyzing DNS transactions from your resource and comparing against known malicious domains identified by threat intelligence feeds. Communication to malicious domains is frequently performed by attackers and could imply that your resource is compromised." | Initial Access | Medium |
-| **Communication with suspicious random domain name (Preview)**<br>(AzureDNS_RandomizedDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected usage of a suspicious randomly generated domain name. Such activity, while possibly benign, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
-| **Digital currency mining activity (Preview)**<br>(AzureDNS_CurrencyMining) | Analysis of DNS transactions from %{CompromisedEntity} detected digital currency mining activity. Such activity, while possibly legitimate user behavior, is frequently performed by attackers following compromise of resources. Typical related attacker activity is likely to include the download and execution of common mining tools. | Exfiltration | - |
-| **Network intrusion detection signature activation (Preview)**<br>(AzureDNS_SuspiciousDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected a known malicious network signature. Such activity, while possibly legitimate user behavior, is frequently an indication of the download or execution of malicious software. Typical related attacker activity is likely to include the download and execution of further malicious software or remote administration tools. | Exfiltration | - |
-| **Possible data download via DNS tunnel (Preview)**<br>(AzureDNS_DataInfiltration) | Analysis of DNS transactions from %{CompromisedEntity} detected a possible DNS tunnel. Such activity, while possibly legitimate user behavior, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
-| **Possible data exfiltration via DNS tunnel (Preview)**<br>(AzureDNS_DataExfiltration) | Analysis of DNS transactions from %{CompromisedEntity} detected a possible DNS tunnel. Such activity, while possibly legitimate user behavior, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
-| **Possible data transfer via DNS tunnel (Preview)**<br>(AzureDNS_DataObfuscation) | Analysis of DNS transactions from %{CompromisedEntity} detected a possible DNS tunnel. Such activity, while possibly legitimate user behavior, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
+| **Anomalous network protocol usage**<br>(AzureDNS_ProtocolAnomaly) | Analysis of DNS transactions from %{CompromisedEntity} detected anomalous protocol usage. Such traffic, while possibly benign, may indicate abuse of this common protocol to bypass network traffic filtering. Typical related attacker activity includes copying remote administration tools to a compromised host and exfiltrating user data from it. | Exfiltration | - |
+| **Anonymity network activity**<br>(AzureDNS_DarkWeb) | Analysis of DNS transactions from %{CompromisedEntity} detected anonymity network activity. Such activity, while possibly legitimate user behavior, is frequently employed by attackers to evade tracking and fingerprinting of network communications. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
+| **Anonymity network activity using web proxy**<br>(AzureDNS_DarkWebProxy) | Analysis of DNS transactions from %{CompromisedEntity} detected anonymity network activity. Such activity, while possibly legitimate user behavior, is frequently employed by attackers to evade tracking and fingerprinting of network communications. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
+| **Attempted communication with suspicious sinkholed domain**<br>(AzureDNS_SinkholedDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected request for sinkholed domain. Such activity, while possibly legitimate user behavior, is frequently an indication of the download or execution of malicious software. Typical related attacker activity is likely to include the download and execution of further malicious software or remote administration tools. | Exfiltration | - |
+| **Communication with possible phishing domain**<br>(AzureDNS_PhishingDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected a request for a possible phishing domain. Such activity, while possibly benign, is frequently performed by attackers to harvest credentials to remote services. Typical related attacker activity is likely to include the exploitation of any credentials on the legitimate service. | Exfiltration | - |
+| **Communication with suspicious algorithmically generated domain**<br>(AzureDNS_DomainGenerationAlgorithm) | Analysis of DNS transactions from %{CompromisedEntity} detected possible usage of a domain generation algorithm. Such activity, while possibly benign, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
+| **Communication with suspicious domain identified by threat intelligence**<br>(AzureDNS_ThreatIntelSuspectDomain) | Communication with a suspicious domain was detected by analyzing DNS transactions from your resource and comparing them against known malicious domains identified by threat intelligence feeds. Communication to malicious domains is frequently performed by attackers and could imply that your resource is compromised. | Initial Access | Medium |
+| **Communication with suspicious random domain name**<br>(AzureDNS_RandomizedDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected usage of a suspicious randomly generated domain name. Such activity, while possibly benign, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
+| **Digital currency mining activity**<br>(AzureDNS_CurrencyMining) | Analysis of DNS transactions from %{CompromisedEntity} detected digital currency mining activity. Such activity, while possibly legitimate user behavior, is frequently performed by attackers following compromise of resources. Typical related attacker activity is likely to include the download and execution of common mining tools. | Exfiltration | - |
+| **Network intrusion detection signature activation**<br>(AzureDNS_SuspiciousDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected a known malicious network signature. Such activity, while possibly legitimate user behavior, is frequently an indication of the download or execution of malicious software. Typical related attacker activity is likely to include the download and execution of further malicious software or remote administration tools. | Exfiltration | - |
+| **Possible data download via DNS tunnel**<br>(AzureDNS_DataInfiltration) | Analysis of DNS transactions from %{CompromisedEntity} detected a possible DNS tunnel. Such activity, while possibly legitimate user behavior, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
+| **Possible data exfiltration via DNS tunnel**<br>(AzureDNS_DataExfiltration) | Analysis of DNS transactions from %{CompromisedEntity} detected a possible DNS tunnel. Such activity, while possibly legitimate user behavior, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
+| **Possible data transfer via DNS tunnel**<br>(AzureDNS_DataObfuscation) | Analysis of DNS transactions from %{CompromisedEntity} detected a possible DNS tunnel. Such activity, while possibly legitimate user behavior, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
| | | |

## <a name="alerts-azurestorage"></a>Alerts for Azure Storage
security-center Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/release-notes.md
Previously updated : 05/03/2021 Last updated : 05/13/2021
To access this information, you can use any of the methods in the table below.
|-|-|
| REST API call | GET https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/providers/Microsoft.Security/assessments?api-version=2019-01-01-preview&$expand=statusEvaluationDates |
| Azure Resource Graph | `securityresources`<br>`where type == "microsoft.security/assessments"` |
-| Workflow automation | The two dedicated fields will be availbel the Log Analytics workspace data |
+| Workflow automation | The two dedicated fields will be available in the Log Analytics workspace data |
| [CSV export](continuous-export.md#manual-one-time-export-of-alerts-and-recommendations) | The two fields are included in the CSV files |
| | |
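For example, the REST API row above can be exercised with a short script. The following is a minimal sketch, assuming the `azure-identity` and `requests` Python packages are installed and that `<SUBSCRIPTION_ID>` is replaced with your own subscription ID; verify the exact response shape against the API version you call.

```python
# Minimal sketch: call the Microsoft.Security/assessments REST API shown in the
# table above and print each assessment's status (including the expanded
# statusEvaluationDates fields, where present).
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<SUBSCRIPTION_ID>"  # replace with your subscription ID

# Acquire an Azure Resource Manager token (requires an authenticated context,
# for example `az login` or environment credentials).
credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    "/providers/Microsoft.Security/assessments"
)
params = {"api-version": "2019-01-01-preview", "$expand": "statusEvaluationDates"}
headers = {"Authorization": f"Bearer {token}"}

response = requests.get(url, params=params, headers=headers)
response.raise_for_status()

for assessment in response.json().get("value", []):
    properties = assessment.get("properties", {})
    print(assessment.get("name"), properties.get("status", {}))
```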
These tools have been enhanced and expanded in the following ways:
- **Regulatory compliance assessment data added (in preview).** You can now continuously export updates to regulatory compliance assessments, including for any custom initiatives, to a Log Analytics workspace or Event Hub. This feature is unavailable on national/sovereign clouds.
- :::image type="content" source="media/release-notes/continuous-export-regulatory-compliance-option.png" alt-text="The options for including regulatory compliant assessment information with your continuous export data.":::
+ :::image type="content" source="media/release-notes/continuous-export-regulatory-compliance-option.png" alt-text="The options for including regulatory compliance assessment information with your continuous export data.":::
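Once continuous export to a Log Analytics workspace is configured, the exported records can also be queried programmatically. The following is a minimal sketch using the `azure-monitor-query` package; the workspace ID placeholder and the `SecurityRegulatoryCompliance` table and column names are assumptions to verify against the tables that continuous export actually creates in your workspace.

```python
# Minimal sketch: query regulatory compliance records exported to a Log
# Analytics workspace. The table and column names below are assumptions --
# check your workspace schema for the tables created by continuous export.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<LOG_ANALYTICS_WORKSPACE_ID>"  # replace with your workspace ID

client = LogsQueryClient(DefaultAzureCredential())

# Hypothetical table/column names, for illustration only.
query = """
SecurityRegulatoryCompliance
| summarize count() by ComplianceStandard, State
"""

response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=7))
for table in response.tables:
    for row in table.rows:
        print(list(row))
```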
service-fabric Service Fabric Get Started Mac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-get-started-mac.md
You can build Azure Service Fabric applications to run on Linux clusters by using Mac OS X. This document covers how to set up your Mac for development.

## Prerequisites
-Azure Service Fabric doesn't run natively on Mac OS X. To run a local Service Fabric cluster, a pre-configured Docker container image is provided. Before you get started, you need:
+Azure Service Fabric doesn't run natively on Mac OS X. T