Updates from: 01/09/2021 04:04:35
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/customize-ui-with-html https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/customize-ui-with-html.md
@@ -70,7 +70,7 @@ When using your own HTML and CSS files to customize the UI, host your UI content
## Guidelines for using custom page content - Use an absolute URL when you include external resources like media, CSS, and JavaScript files in your HTML file.-- Using [page layout version](page-layout.md) 1.2.0 and above, you can add the `data-preload="true"` attribute in your HTML tags to control the load order for CSS and JavaScript. With `data-preload=true`, the page is constructed before being shown to the user. This attribute helps prevent the page from "flickering" by preloading the CSS file, without the un-styled HTML being shown to the user. The following HTML code snippet shows the use of the `data-preload` tag.
+- Using [page layout version](page-layout.md) 1.2.0 and above, you can add the `data-preload="true"` attribute in your HTML tags to control the load order for CSS and JavaScript. With `data-preload="true"`, the page is constructed before being shown to the user. This attribute helps prevent the page from "flickering" by preloading the CSS file, without the un-styled HTML being shown to the user. The following HTML code snippet shows the use of the `data-preload` tag.
```HTML <link href="https://path-to-your-file/sample.css" rel="stylesheet" type="text/css" data-preload="true"/> ```
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/oauth2-technical-profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/oauth2-technical-profile.md
@@ -83,7 +83,7 @@ The technical profile also returns claims that aren't returned by the identity p
| end_session_endpoint | Yes | The URL of the end session endpoint as per RFC 6749. | | AccessTokenResponseFormat | No | The format of the access token endpoint call. For example, Facebook requires an HTTP GET method, but the access token response is in JSON format. | | AdditionalRequestQueryParameters | No | Additional request query parameters. For example, you may want to send additional parameters to your identity provider. You can include multiple parameters using comma delimiter. |
-| ClaimsEndpointAccessTokenName | No | The name of the access token query string parameter. Some identity providers' claims endpoints support GET HTTP request. In this case, the bearer token is sent by using a query string parameter instead of the authorization header. |
+| ClaimsEndpointAccessTokenName | No | The name of the access token query string parameter. Some identity providers' claims endpoints support HTTP GET requests. In this case, the bearer token is sent by using a query string parameter instead of the authorization header. Default value: `access_token`. |
| ClaimsEndpointFormatName | No | The name of the format query string parameter. For example, you can set the name as `format` in this LinkedIn claims endpoint `https://api.linkedin.com/v1/people/~?format=json`. | | ClaimsEndpointFormat | No | The value of the format query string parameter. For example, you can set the value as `json` in this LinkedIn claims endpoint `https://api.linkedin.com/v1/people/~?format=json`. | | ProviderName | No | The name of the identity provider. |
@@ -96,7 +96,8 @@ The technical profile also returns claims that aren't returned by the identity p
| IncludeClaimResolvingInClaimsHandling  | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. | | ResolveJsonPathsInJsonTokens | No | Indicates whether the technical profile resolves JSON paths. Possible values: `true`, or `false` (default). Use this metadata to read data from a nested JSON element. In an [OutputClaim](technicalprofiles.md#output-claims), set the `PartnerClaimType` to the JSON path element you want to output. For example: `firstName.localized`, or `data.0.to.0.email`.| |token_endpoint_auth_method| No| Specifies how Azure AD B2C sends the authentication header to the token endpoint. Possible values: `client_secret_post` (default), and `client_secret_basic` (public preview). For more information, see [OpenID Connect client authentication section](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication). |
-|SingleLogoutEnabled| No| Indicates whether during sign-in the technical profile attempts to sign out from federated identity providers. For more information, see [Azure AD B2C session sign-out](session-behavior.md#sign-out). Possible values: `true` (default), or `false`.|
+|SingleLogoutEnabled| No| Indicates whether during sign-in the technical profile attempts to sign out from federated identity providers. For more information, see [Azure AD B2C session sign-out](session-behavior.md#sign-out). Possible values: `true` (default), or `false`.|
+| UsePolicyInRedirectUri | No | Indicates whether to use a policy when constructing the redirect URI. When you configure your application in the identity provider, you need to specify the redirect URI. The redirect URI points to Azure AD B2C, `https://{your-tenant-name}.b2clogin.com/{your-tenant-name}.onmicrosoft.com/oauth2/authresp`. If you specify `true`, you need to add a redirect URI for each policy you use. For example: `https://{your-tenant-name}.b2clogin.com/{your-tenant-name}.onmicrosoft.com/{policy-name}/oauth2/authresp`. |
## Cryptographic keys
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/openid-connect-technical-profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/openid-connect-technical-profile.md
@@ -84,12 +84,13 @@ The technical profile also returns claims that aren't returned by the identity p
| scope | No | The scope of the request that is defined according to the OpenID Connect Core 1.0 specification. Such as `openid`, `profile`, and `email`. | | HttpBinding | No | The expected HTTP binding to the access token and claims token endpoints. Possible values: `GET` or `POST`. | | ValidTokenIssuerPrefixes | No | A key that can be used to sign in to each of the tenants when using a multi-tenant identity provider such as Azure Active Directory. |
-| UsePolicyInRedirectUri | No | Indicates whether to use a policy when constructing the redirect URI. When you configure your application in the identity provider, you need to specify the redirect URI. The redirect URI points to Azure AD B2C, `https://{your-tenant-name}.b2clogin.com/{your-tenant-name}.onmicrosoft.com/oauth2/authresp`. If you specify `false`, you need to add a redirect URI for each policy you use. For example: `https://{your-tenant-name}.b2clogin.com/{your-tenant-name}.onmicrosoft.com/{policy-name}/oauth2/authresp`. |
+| UsePolicyInRedirectUri | No | Indicates whether to use a policy when constructing the redirect URI. When you configure your application in the identity provider, you need to specify the redirect URI. The redirect URI points to Azure AD B2C, `https://{your-tenant-name}.b2clogin.com/{your-tenant-name}.onmicrosoft.com/oauth2/authresp`. If you specify `true`, you need to add a redirect URI for each policy you use. For example: `https://{your-tenant-name}.b2clogin.com/{your-tenant-name}.onmicrosoft.com/{policy-name}/oauth2/authresp`. |
| MarkAsFailureOnStatusCode5xx | No | Indicates whether a request to an external service should be marked as a failure if the Http status code is in the 5xx range. The default is `false`. | | DiscoverMetadataByTokenIssuer | No | Indicates whether the OIDC metadata should be discovered by using the issuer in the JWT token. | | IncludeClaimResolvingInClaimsHandling  | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |
-|token_endpoint_auth_method| No| Specifies how Azure AD B2C sends the authentication header to the token endpoint. Possible values: `client_secret_post` (default), `private_key_jwt` (public preview), and `client_secret_basic` (public preview). For more information, see [OpenID Connect client authentication section](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication). |
-|SingleLogoutEnabled| No| Indicates whether during sign-in the technical profile attempts to sign out from federated identity providers. For more information, see [Azure AD B2C session sign-out](session-behavior.md#sign-out). Possible values: `true` (default), or `false`.|
+| token_endpoint_auth_method | No | Specifies how Azure AD B2C sends the authentication header to the token endpoint. Possible values: `client_secret_post` (default), and `client_secret_basic` (public preview). For more information, see [OpenID Connect client authentication section](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication). |
+| token_signing_algorithm | No | The signing algorithm used for client assertions when the **token_endpoint_auth_method** metadata is set to `private_key_jwt`. Possible values: `RS256` (default). |
+| SingleLogoutEnabled | No | Indicates whether during sign-in the technical profile attempts to sign out from federated identity providers. For more information, see [Azure AD B2C session sign-out](session-overview.md#sign-out). Possible values: `true` (default), or `false`. |
```xml <Metadata>
@@ -120,7 +121,8 @@ The **CryptographicKeys** element contains the following attribute:
| Attribute | Required | Description | | --------- | -------- | ----------- |
-| client_secret | Yes | The client secret of the identity provider application. The cryptographic key is required only if the **response_types** metadata is set to `code`. In this case, Azure AD B2C makes another call to exchange the authorization code for an access token. If the metadata is set to `id_token` you can omit the cryptographic key. |
+| client_secret | Yes | The client secret of the identity provider application. This cryptographic key is required only if the **response_types** metadata is set to `code` and **token_endpoint_auth_method** is set to `client_secret_post` or `client_secret_basic`. In this case, Azure AD B2C makes another call to exchange the authorization code for an access token. If the metadata is set to `id_token` you can omit the cryptographic key. |
+| assertion_signing_key | Yes | The RSA private key that is used to sign the client assertion. This cryptographic key is required only if the **token_endpoint_auth_method** metadata is set to `private_key_jwt`. |
## Redirect Uri
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/password-complexity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/password-complexity.md
@@ -72,7 +72,7 @@ Allows you to accept digits only (pins) or the full character set.
Allows you to control the length requirements of the password. - **Minimum Length** must be at least 4.-- **Maximum Length** must be greater or equal to minimum length and at most can be 64 characters.
+- **Maximum Length** must be greater than or equal to the minimum length and can be at most 256 characters.
### Character classes
@@ -218,4 +218,4 @@ Save the policy file.
- Learn more about the [Predicates](predicates.md) and [PredicateValidations](predicates.md#predicatevalidations) elements in the IEF reference.
-::: zone-end
\ No newline at end of file
+::: zone-end
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/self-asserted-technical-profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/self-asserted-technical-profile.md
@@ -205,8 +205,8 @@ You can also call a REST API technical profile with your business logic, overwri
| setting.showCancelButton | No | Displays the cancel button. Possible values: `true` (default), or `false` | | setting.showContinueButton | No | Displays the continue button. Possible values: `true` (default), or `false` | | setting.showSignupLink <sup>2</sup>| No | Displays the sign-up button. Possible values: `true` (default), or `false` |
-| setting.forgotPasswordLinkLocation <sup>2</sup>| No| Displays the forgot password link. Possible values: `AfterInput` (default) the link is displayed at the bottom of the page, or `None` removes the forgot password link.|
-| setting.enableRememberMe <sup>2</sup>| No| Displays the [Keep me signed in](session-behavior.md?pivots=b2c-custom-policy#enable-keep-me-signed-in-kmsi) checkbox. Possible values: `true`, or `false` (default). |
+| setting.forgotPasswordLinkLocation <sup>2</sup>| No| Displays the forgot password link. Possible values: `AfterLabel` (default) displays the link directly after the label or after the password input field when there is no label, `AfterInput` displays the link after the password input field, `AfterButtons` displays the link on the bottom of the form after the buttons, or `None` removes the forgot password link.|
+| setting.enableRememberMe <sup>2</sup>| No| Displays the [Keep me signed in](session-behavior.md?pivots=b2c-custom-policy#enable-keep-me-signed-in-kmsi) checkbox. Possible values: `true`, or `false` (default). |
| setting.inputVerificationDelayTimeInMilliseconds <sup>3</sup>| No| Improves user experience, by waiting for the user to stop typing, and then validate the value. Default value 2000 milliseconds. | | IncludeClaimResolvingInClaimsHandling  | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-optional-claims https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-optional-claims.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: develop ms.topic: how-to ms.workload: identity
-ms.date: 1/05/2021
+ms.date: 1/06/2021
ms.author: ryanwi ms.reviewer: paulgarn, hirsin, keyam ms.custom: aaddev
@@ -83,10 +83,12 @@ These claims are always included in v1.0 Azure AD tokens, but not included in v2
| `given_name` | First name | Provides the first or "given" name of the user, as set on the user object.<br>"given_name": "Frank" | Supported in MSA and Azure AD. Requires the `profile` scope. | | `upn` | User Principal Name | An identifer for the user that can be used with the username_hint parameter. Not a durable identifier for the user and should not be used to uniquely identity user information (for example, as a database key). Instead, use the user object ID (`oid`) as a database key. Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) should not be shown their User Principal Name (UPN). Instead, use the following `preferred_username` claim for displaying sign-in state to the user. | See [additional properties](#additional-properties-of-optional-claims) below for configuration of the claim. Requires the `profile` scope.|
+## v1.0-specific optional claims set
+
+Some of the improvements in the v2 token format are available to apps that use the v1 token format, because they help improve security and reliability. These improvements don't take effect for ID tokens requested from the v2 endpoint, or for access tokens for APIs that use the v2 token format. They apply only to JWTs, not SAML tokens.
**Table 4: v1.0-only optional claims**
-Some of the improvements of the v2 token format are available to apps that use the v1 token format, as they help improve security and reliability. These will not take effect for ID tokens requested from the v2 endpoint, nor access tokens for APIs that use the v2 token format.
| JWT Claim | Name | Description | Notes | |---------------|---------------------------------|-------------|-------|
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-health-diagnose-sync-errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-diagnose-sync-errors.md
@@ -136,6 +136,9 @@ The user with conflicting attribute in Azure AD should be cleaned before you can
**Updating source anchor to cloud-based user in your tenant is not supported.** Cloud-based user in Azure AD should not have source anchor. Updating source anchor is not supported in this case. Manual fix is required from on premises.
+**The fix process failed to update the values.**
+Specific settings, such as [UserWriteback in Azure AD Connect](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-preview#user-writeback), aren't supported. Disable these settings, and then retry the fix.
+ ## FAQ **Q.** What happens if execution of the **Apply Fix** fails? **A.** If execution fails, it's possible that Azure AD Connect is running an export error. Refresh the portal page and retry after the next sync. The default sync cycle is 30 minutes.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/troubleshooting-identity-protection-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/troubleshooting-identity-protection-faq.md
@@ -6,7 +6,7 @@ services: active-directory
ms.service: active-directory ms.subservice: identity-protection ms.topic: troubleshooting
-ms.date: 10/07/2020
+ms.date: 01/07/2021
ms.author: joflore author: MicrosoftGuyJFlo
@@ -32,7 +32,7 @@ There is a current known issue causing latency in the user risk dismissal flow.
If you are an Azure AD Identity Protection customer, go to the [risky users](howto-identity-protection-investigate-risk.md#risky-users) view and click on an at-risk user. In the drawer at the bottom, the 'Risk history' tab will show all the events that led to a user risk change. To see all risky sign-ins for the user, click on 'User's risky sign-ins'. To see all risk detections for this user, click on 'User's risk detections'.
-## Why was my sign-in blocked but Identity Protection didn't generate a risk detection?
+### Why was my sign-in blocked but Identity Protection didn't generate a risk detection?
Sign-ins can be blocked for several reasons. It is important to note that Identity Protection only generates risk detections when correct credentials are used in the authentication request. If a user uses incorrect credentials, it will not be flagged by Identity Protection since there is no risk of credential compromise unless a bad actor uses the correct credentials. Some reasons a user can be blocked from signing in that will not generate an Identity Protection detection include: * The **IP can be blocked** due to malicious activity from the IP address. The IP blocked message does not differentiate whether the credentials were correct or not. If the IP is blocked and correct credentials are not used, it will not generate an Identity Protection detection * **[Smart Lockout](../authentication/howto-password-smart-lockout.md)** can block the account from signing-in after multiple failed attempts
@@ -93,3 +93,7 @@ Given the user risk is cumulative in nature and does not expire, a user may have
### Why does a sign-in have a "sign-in risk (aggregate)" score of High when the detections associated with it are of low or medium risk? The high aggregate risk score could be based on other features of the sign-in, or the fact that more than one detection fired for that sign-in. And conversely, a sign-in may have a sign-in risk (aggregate) of Medium even if the detections associated with the sign-in are of High risk.+
+### What is the difference between the "Activity from anonymous IP address" and "Anonymous IP address" detections?
+
+The "Anonymous IP address" detection's source is Azure AD Identity Protection, while the "Activity from anonymous IP address" detection is integrated from MCAS (Microsoft Cloud App Security). While they have very similar names and it is possible that you may see overlap in these signals, they have distinct back-end detections.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-cloud-backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-cloud-backup.md
@@ -67,6 +67,23 @@ Notice the `yield context.df.Task.all(tasks);` line. All the individual calls to
After yielding from `context.df.Task.all`, we know that all function calls have completed and have returned values back to us. Each call to `E2_CopyFileToBlob` returns the number of bytes uploaded, so calculating the sum total byte count is a matter of adding all those return values together.
+# [Python](#tab/python)
+
+The function uses the standard *function.json* for orchestrator functions.
+
+[!code-json[Main](~/samples-durable-functions-python/samples/fan_in_fan_out/E2_BackupSiteContent/function.json)]
+
+Here is the code that implements the orchestrator function:
+
+[!code-python[Main](~/samples-durable-functions-python/samples/fan_in_fan_out/E2_BackupSiteContent/\_\_init\_\_.py)]
+
+Notice the `yield context.task_all(tasks)` line. All the individual calls to the `E2_CopyFileToBlob` function were *not* yielded, which allows them to run in parallel. When we pass this list of tasks to `context.task_all`, we get back a task that won't complete *until all the copy operations have completed*. If you're familiar with [`asyncio.gather`](https://docs.python.org/3/library/asyncio-task.html#asyncio.gather) in Python, this pattern will look familiar. The difference is that these tasks could be running on multiple virtual machines concurrently, and the Durable Functions extension ensures that the end-to-end execution is resilient to process recycling.
+
+> [!NOTE]
+> Although tasks are conceptually similar to Python awaitables, orchestrator functions should use `yield` as well as the `context.task_all` and `context.task_any` APIs to manage task parallelization.
+
+After yielding from `context.task_all`, we know that all function calls have completed and have returned values back to us. Each call to `E2_CopyFileToBlob` returns the number of bytes uploaded, so we can calculate the sum total byte count by adding all the return values together.
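+
+As a rough, minimal sketch of this fan-out/fan-in pattern (not the exact sample file — the activity names `E2_GetFileList` and `E2_CopyFileToBlob` come from this article, the rest is illustrative), the orchestrator might look like this:
+
+```python
+import azure.durable_functions as df
+
+def orchestrator_function(context: df.DurableOrchestrationContext):
+    # Input is the root directory to back up (see the HTTP request below).
+    root_directory: str = context.get_input()
+
+    # Single activity call to list the files.
+    files = yield context.call_activity("E2_GetFileList", root_directory)
+
+    # Fan out: schedule one copy task per file without yielding each one.
+    tasks = [context.call_activity("E2_CopyFileToBlob", f) for f in files]
+
+    # Fan in: wait for every copy to finish, then sum the bytes uploaded.
+    results = yield context.task_all(tasks)
+    return sum(results)
+
+main = df.Orchestrator.create(orchestrator_function)
+```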
+ --- ### Helper activity functions
@@ -91,6 +108,16 @@ And here is the implementation:
The function uses the `readdirp` module (version 2.x) to recursively read the directory structure.
+# [Python](#tab/python)
+
+The *function.json* file for `E2_GetFileList` looks like the following:
+
+[!code-json[Main](~/samples-durable-functions-python/samples/fan_in_fan_out/E2_GetFileList/function.json)]
+
+And here is the implementation:
+
+[!code-python[Main](~/samples-durable-functions-python/samples/fan_in_fan_out/E2_GetFileList/\_\_init\_\_.py)]
+ --- > [!NOTE]
@@ -117,6 +144,16 @@ The JavaScript implementation uses the [Azure Storage SDK for Node](https://gith
[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E2_CopyFileToBlob/index.js)]
+# [Python](#tab/python)
+
+The *function.json* file for `E2_CopyFileToBlob` is similarly simple:
+
+[!code-json[Main](~/samples-durable-functions-python/samples/fan_in_fan_out/E2_CopyFileToBlob/function.json)]
+
+The Python implementation uses the [Azure Storage SDK for Python](https://github.com/Azure/azure-storage-python) to upload the files to Azure Blob Storage.
+
+[!code-python[Main](~/samples-durable-functions-python/samples/fan_in_fan_out/E2_CopyFileToBlob/\_\_init\_\_.py)]
+ --- The implementation loads the file from disk and asynchronously streams the contents into a blob of the same name in the "backups" container. The return value is the number of bytes copied to storage, which is then used by the orchestrator function to compute the aggregate sum.
@@ -126,7 +163,7 @@ The implementation loads the file from disk and asynchronously streams the conte
## Run the sample
-You can start the orchestration by sending the following HTTP POST request.
+On Windows, you can start the orchestration by sending the following HTTP POST request.
``` POST http://{host}/orchestrators/E2_BackupSiteContent
@@ -136,6 +173,16 @@ Content-Length: 20
"D:\\home\\LogFiles" ```
+Alternatively, on a Linux function app (Python currently runs only on Linux for App Service), you can start the orchestration as follows:
+
+```
+POST http://{host}/orchestrators/E2_BackupSiteContent
+Content-Type: application/json
+Content-Length: 20
+
+"/home/site/wwwroot"
+```
> [!NOTE] > The `HttpStart` function that you are invoking only works with JSON-formatted content. For this reason, the `Content-Type: application/json` header is required and the directory path is encoded as a JSON string. Moreover, the HTTP snippet assumes there is an entry in the `host.json` file that removes the default `api/` prefix from all HTTP trigger function URLs. You can find the markup for this configuration in the `host.json` file in the samples.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-monitor-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-monitor-python.md new file mode 100644
@@ -0,0 +1,153 @@
+---
+title: Monitors in Durable Functions (Python) - Azure
+description: Learn how to implement a status monitor using the Durable Functions extension for Azure Functions (Python).
+author: davidmrdavid
+ms.topic: conceptual
+ms.date: 12/02/2020
+ms.author: azfuncdf
+---
+
+# Monitor scenario in Durable Functions - GitHub Issue monitoring sample
+
+The monitor pattern refers to a flexible recurring process in a workflow - for example, polling until certain conditions are met. This article explains a sample that uses Durable Functions to implement monitoring.
+
+[!INCLUDE [durable-functions-prerequisites](../../../includes/durable-functions-prerequisites.md)]
+
+## Scenario overview
+
+This sample monitors the count of issues in a GitHub repo and alerts the user if there are more than 3 open issues. You could use a regular timer-triggered function to request the open issue count at regular intervals. However, one problem with this approach is **lifetime management**. If only one alert should be sent, the monitor needs to disable itself after it detects too many open issues. The monitoring pattern can end its own execution, and it has other benefits:
+
+* Monitors run on intervals, not schedules: a timer trigger *runs* every hour; a monitor *waits* one hour between actions. A monitor's actions will not overlap unless specified, which can be important for long-running tasks.
+* Monitors can have dynamic intervals: the wait time can change based on some condition.
+* Monitors can terminate when some condition is met or be terminated by another process.
+* Monitors can take parameters. The sample shows how the same repo-monitoring process can be applied to any requested public GitHub repo and phone number.
+* Monitors are scalable. Because each monitor is an orchestration instance, multiple monitors can be created without having to create new functions or define more code.
+* Monitors integrate easily into larger workflows. A monitor can be one section of a more complex orchestration function, or a [sub-orchestration](durable-functions-sub-orchestrations.md).
+
+## Configuration
+
+### Configuring Twilio integration
+
+[!INCLUDE [functions-twilio-integration](../../../includes/functions-twilio-integration.md)]
+
+## The functions
+
+This article explains the following functions in the sample app:
+
+* `E3_Monitor`: An [orchestrator function](durable-functions-bindings.md#orchestration-trigger) that calls `E3_TooManyOpenIssues` periodically. It calls `E3_SendAlert` if the return value of `E3_TooManyOpenIssues` is `True`.
+* `E3_TooManyOpenIssues`: An [activity function](durable-functions-bindings.md#activity-trigger) that checks if a repository has too many open issues. For demoing purposes, we consider having more than 3 open issues to be too many.
+* `E3_SendAlert`: An activity function that sends an SMS message via Twilio.
+
+### E3_Monitor orchestrator function
+
+# [Python](#tab/python)
+
+The **E3_Monitor** function uses the standard *function.json* for orchestrator functions.
+
+[!code-json[Main](~/samples-durable-functions-python/samples/monitor/E3_Monitor/function.json)]
+
+Here is the code that implements the function:
+
+[!code-python[Main](~/samples-durable-functions-python/samples/monitor/E3_Monitor/\_\_init\_\_.py)]
+
+---
+
+This orchestrator function performs the following actions:
+
+1. Gets the *repo* to monitor and the *phone number* to which it will send an SMS notification.
+2. Determines the expiration time of the monitor. The sample uses a hard-coded value for brevity.
+3. Calls **E3_TooManyOpenIssues** to determine whether there are too many open issues at the requested repo.
+4. If there are too many issues, calls **E3_SendAlert** to send an SMS notification to the requested phone number.
+5. Creates a durable timer to resume the orchestration at the next polling interval. The sample uses a hard-coded value for brevity.
+6. Continues running until the current UTC time passes the monitor's expiration time, or an SMS alert is sent.
+
+Multiple orchestrator instances can run simultaneously by calling the orchestrator function multiple times. You can specify the repo to monitor and the phone number to send an SMS alert to. Finally, note that the orchestrator function is *not* running while it waits for the timer, so you are not charged for that time.
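+
+The loop described in the steps above looks roughly like the following sketch. It is an illustration only, not the sample file itself; the 6-hour lifetime and 15-minute polling interval are placeholders for the sample's hard-coded values.
+
+```python
+from datetime import timedelta
+import azure.durable_functions as df
+
+def orchestrator_function(context: df.DurableOrchestrationContext):
+    payload = context.get_input()          # {"repo": "...", "phone": "..."}
+
+    # Hard-coded monitor lifetime (placeholder value).
+    expiry_time = context.current_utc_datetime + timedelta(hours=6)
+
+    while context.current_utc_datetime < expiry_time:
+        too_many = yield context.call_activity("E3_TooManyOpenIssues", payload["repo"])
+        if too_many:
+            yield context.call_activity("E3_SendAlert", payload["phone"])
+            break
+
+        # Sleep until the next polling interval (placeholder value);
+        # the orchestrator then resumes and re-checks the repo.
+        next_check = context.current_utc_datetime + timedelta(minutes=15)
+        yield context.create_timer(next_check)
+
+main = df.Orchestrator.create(orchestrator_function)
+```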
++
+### E3_TooManyOpenIssues activity function
+
+As with other samples, the helper activity functions are regular functions that use the `activityTrigger` trigger binding. The **E3_TooManyOpenIssues** function gets a list of currently open issues on the repo and determines if there are "too many" of them: more than 3 as per our sample.
+
+# [Python](#tab/python)
+
+The *function.json* is defined as follows:
+
+[!code-json[Main](~/samples-durable-functions-python/samples/monitor/E3_TooManyOpenIssues/function.json)]
+
+And here is the implementation.
+
+[!code-python[Main](~/samples-durable-functions-python/samples/monitor/E3_TooManyOpenIssues/\_\_init\_\_.py)]
+
+---
+
+### E3_SendAlert activity function
+
+The **E3_SendAlert** function uses the Twilio binding to send an SMS message notifying the end user that there are at least 3 open issues awaiting a resolution.
+
+# [Python](#tab/python)
+
+Its *function.json* is simple:
+
+[!code-json[Main](~/samples-durable-functions-python/samples/monitor/E3_SendAlert/function.json)]
+
+And here is the code that sends the SMS message:
+
+[!code-python[Main](~/samples-durable-functions-python/samples/monitor/E3_SendAlert/\_\_init\_\_.py)]
+
+---
+
+## Run the sample
+
+You will need a [GitHub](https://github.com/) account. With it, create a temporary public repository to which you can open issues.
+
+Using the HTTP-triggered functions included in the sample, you can start the orchestration by sending the following HTTP POST request:
+
+```
+POST https://{host}/orchestrators/E3_Monitor
+Content-Length: 77
+Content-Type: application/json
+
+{ "repo": "<your github handle>/<a new github repo under your user>", "phone": "+1425XXXXXXX" }
+```
+
+For example, if your GitHub username is `foo` and your repository is `bar` then your value for `"repo"` should be `"foo/bar"`.
+
+The request returns an HTTP 202 response similar to the following:
+
+```
+HTTP/1.1 202 Accepted
+Content-Type: application/json; charset=utf-8
+Location: https://{host}/runtime/webhooks/durabletask/instances/f6893f25acf64df2ab53a35c09d52635?taskHub=SampleHubVS&connection=Storage&code={SystemKey}
+RetryAfter: 10
+
+{"id": "f6893f25acf64df2ab53a35c09d52635", "statusQueryGetUri": "https://{host}/runtime/webhooks/durabletask/instances/f6893f25acf64df2ab53a35c09d52635?taskHub=SampleHubVS&connection=Storage&code={systemKey}", "sendEventPostUri": "https://{host}/runtime/webhooks/durabletask/instances/f6893f25acf64df2ab53a35c09d52635/raiseEvent/{eventName}?taskHub=SampleHubVS&connection=Storage&code={systemKey}", "terminatePostUri": "https://{host}/runtime/webhooks/durabletask/instances/f6893f25acf64df2ab53a35c09d52635/terminate?reason={text}&taskHub=SampleHubVS&connection=Storage&code={systemKey}"}
+```
+
+The **E3_Monitor** instance starts and queries the provided repo for open issues. If more than 3 open issues are found, it calls an activity function to send an alert; otherwise, it sets a timer. When the timer expires, the orchestration will resume.
+
+You can see the orchestration's activity by looking at the function logs in the Azure Functions portal.
+
+```
+[2020-12-04T18:24:30.007Z] Executing 'Functions.HttpStart' (Reason='This function was programmatically
+called via the host APIs.', Id=93772f6b-f4f0-405a-9d7b-be9eb7a38aa6)
+[2020-12-04T18:24:30.769Z] Executing 'Functions.E3_Monitor' (Reason='(null)', Id=058e656e-bcb1-418c-95b3-49afcd07bd08)
+[2020-12-04T18:24:30.847Z] Started orchestration with ID = '788420bb31754c50acbbc46e12ef4f9c'.
+[2020-12-04T18:24:30.932Z] Executed 'Functions.E3_Monitor' (Succeeded, Id=058e656e-bcb1-418c-95b3-49afcd07bd08, Duration=174ms)
+[2020-12-04T18:24:30.955Z] Executed 'Functions.HttpStart' (Succeeded, Id=93772f6b-f4f0-405a-9d7b-be9eb7a38aa6, Duration=1028ms)
+[2020-12-04T18:24:31.229Z] Executing 'Functions.E3_TooManyOpenIssues' (Reason='(null)', Id=6fd5be5e-7f26-4b0b-98df-c3ac39125da3)
+[2020-12-04T18:24:31.772Z] Executed 'Functions.E3_TooManyOpenIssues' (Succeeded, Id=6fd5be5e-7f26-4b0b-98df-c3ac39125da3, Duration=555ms)
+[2020-12-04T18:24:40.754Z] Executing 'Functions.E3_Monitor' (Reason='(null)', Id=23915e4c-ddbf-46f9-b3f0-53289ed66082)
+[2020-12-04T18:24:40.789Z] Executed 'Functions.E3_Monitor' (Succeeded, Id=23915e4c-ddbf-46f9-b3f0-53289ed66082, Duration=38ms)
+(...trimmed...)
+```
+
+The orchestration will complete once its timeout is reached or more than 3 open issues are detected. You can also use the `terminate` API inside another function or invoke the **terminatePostUri** HTTP POST webhook referenced in the 202 response above. To use the webhook, replace `{text}` with the reason for the early termination. The HTTP POST URL will look roughly as follows:
+
+```
+POST https://{host}/runtime/webhooks/durabletask/instances/f6893f25acf64df2ab53a35c09d52635/terminate?reason=Because&taskHub=SampleHubVS&connection=Storage&code={systemKey}
+```
+
+## Next steps
+
+This sample has demonstrated how to use Durable Functions to monitor an external source's status using [durable timers](durable-functions-timers.md) and conditional logic. The next sample shows how to use external events and [durable timers](durable-functions-timers.md) to handle human interaction.
+
+> [!div class="nextstepaction"]
+> [Run the human interaction sample](durable-functions-phone-verification.md)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-monitor.md
@@ -67,6 +67,9 @@ Here is the code that implements the function:
[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E3_Monitor/index.js)]
+# [Python](#tab/python)
+The monitoring sample for Python is covered in a [separate article](durable-functions-monitor-python.md).
+ --- This orchestrator function performs the following actions:
@@ -78,8 +81,7 @@ This orchestrator function performs the following actions:
5. Creates a durable timer to resume the orchestration at the next polling interval. The sample uses a hard-coded value for brevity. 6. Continues running until the current UTC time passes the monitor's expiration time, or an SMS alert is sent.
-Multiple orchestrator instances can run simultaneously by calling the orchestrator function multiple times. The location to monitor and the phone number to send an SMS alert to can be specified.
-
+Multiple orchestrator instances can run simultaneously by calling the orchestrator function multiple times. The location to monitor and the phone number to send an SMS alert to can be specified. Finally, do note that the orchestrator function is *not* running while waiting for the timer, so you will not get charged for it.
### E3_GetIsClear activity function As with other samples, the helper activity functions are regular functions that use the `activityTrigger` trigger binding. The **E3_GetIsClear** function gets the current weather conditions using the Weather Underground API and determines whether the sky is clear.
@@ -98,6 +100,9 @@ And here is the implementation.
[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E3_GetIsClear/index.js)]
+# [Python](#tab/python)
+The monitoring sample for Python is covered in a [separate article](durable-functions-monitor-python.md).
+ --- ### E3_SendGoodWeatherAlert activity function
@@ -121,6 +126,9 @@ And here is the code that sends the SMS message:
[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E3_SendGoodWeatherAlert/index.js)]
+# [Python](#tab/python)
+The monitoring sample for Python is covered in a [separate article](durable-functions-monitor-python.md).
+ --- ## Run the sample
@@ -164,7 +172,7 @@ You can see the orchestration's activity by looking at the function logs in the
2018-03-01T01:14:54.030 Function completed (Success, Id=561d0c78-ee6e-46cb-b6db-39ef639c9a2c, Duration=62ms) ```
-The orchestration will [terminate](durable-functions-instance-management.md) once its timeout is reached or clear skies are detected. You can also use `TerminateAsync` (.NET) or `terminate` (JavaScript) inside another function or invoke the **terminatePostUri** HTTP POST webhook referenced in the 202 response above, replacing `{text}` with the reason for termination:
+The orchestration completes once its timeout is reached or clear skies are detected. You can also use the `terminate` API inside another function or invoke the **terminatePostUri** HTTP POST webhook referenced in the 202 response above. To use the webhook, replace `{text}` with the reason for the early termination. The HTTP POST URL will look roughly as follows:
``` POST https://{host}/runtime/webhooks/durabletask/instances/f6893f25acf64df2ab53a35c09d52635/terminate?reason=Because&taskHub=SampleHubVS&connection=Storage&code={systemKey}
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-phone-verification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-phone-verification.md
@@ -40,7 +40,7 @@ This article walks through the following functions in the sample app:
[!code-csharp[Main](~/samples-durable-functions/samples/precompiled/PhoneVerification.cs?range=17-70)] > [!NOTE]
-> It may not be obvious at first, but this orchestrator function is completely deterministic. It is deterministic because the `CurrentUtcDateTime` property is used to calculate the timer expiration time, and it returns the same value on every replay at this point in the orchestrator code. This behavior is important to ensure that the same `winner` results from every repeated call to `Task.WhenAny`.
+> It may not be obvious at first, but this orchestrator does not violate the [deterministic orchestration constraint](durable-functions-code-constraints.md). It is deterministic because the `CurrentUtcDateTime` property is used to calculate the timer expiration time, and it returns the same value on every replay at this point in the orchestrator code. This behavior is important to ensure that the same `winner` results from every repeated call to `Task.WhenAny`.
# [JavaScript](#tab/javascript)
@@ -53,7 +53,20 @@ Here is the code that implements the function:
[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E4_SmsPhoneVerification/index.js)] > [!NOTE]
-> It may not be obvious at first, but this orchestrator function is completely deterministic. It is deterministic because the `currentUtcDateTime` property is used to calculate the timer expiration time, and it returns the same value on every replay at this point in the orchestrator code. This behavior is important to ensure that the same `winner` results from every repeated call to `context.df.Task.any`.
+> It may not be obvious at first, but this orchestrator does not violate the [deterministic orchestration constraint](durable-functions-code-constraints.md). It is deterministic because the `currentUtcDateTime` property is used to calculate the timer expiration time, and it returns the same value on every replay at this point in the orchestrator code. This behavior is important to ensure that the same `winner` results from every repeated call to `context.df.Task.any`.
+
+# [Python](#tab/python)
+
+The **E4_SmsPhoneVerification** function uses the standard *function.json* for orchestrator functions.
+
+[!code-json[Main](~/samples-durable-functions-python/samples/human_interaction/E4_SmsPhoneVerification/function.json)]
+
+Here is the code that implements the function:
+
+[!code-python[Main](~/samples-durable-functions-python/samples/human_interaction/E4_SmsPhoneVerification/\_\_init\_\_.py)]
+
+> [!NOTE]
+> It may not be obvious at first, but this orchestrator does not violate the [deterministic orchestration constraint](durable-functions-code-constraints.md). It is deterministic because the `current_utc_datetime` property is used to calculate the timer expiration time, and it returns the same value on every replay at this point in the orchestrator code. This behavior is important to ensure that the same `winner` results from every repeated call to `context.task_any`.
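+
+As a hedged illustration of the timer-versus-event race the note describes (not the exact sample code — the event name, 90-second timeout, and single-attempt handling here are simplified assumptions, and the real sample adds retry logic for incorrect codes):
+
+```python
+from datetime import timedelta
+import azure.durable_functions as df
+
+def orchestrator_function(context: df.DurableOrchestrationContext):
+    phone_number: str = context.get_input()
+    code = yield context.call_activity("SendSMSChallenge", phone_number)
+
+    # current_utc_datetime is replay-safe, so this deadline is deterministic.
+    expiration = context.current_utc_datetime + timedelta(seconds=90)
+    timeout_task = context.create_timer(expiration)
+    challenge_task = context.wait_for_external_event("SmsChallengeResponse")
+
+    winner = yield context.task_any([challenge_task, timeout_task])
+    if winner == challenge_task:
+        timeout_task.cancel()               # always cancel an unfired durable timer
+        return challenge_task.result == code
+    return False                            # the user didn't respond in time
+
+main = df.Orchestrator.create(orchestrator_function)
+```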
---
@@ -90,6 +103,16 @@ And here is the code that generates the four-digit challenge code and sends the
[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E4_SendSmsChallenge/index.js)]
+# [Python](#tab/python)
+
+The *function.json* is defined as follows:
+
+[!code-json[Main](~/samples-durable-functions-python/samples/human_interaction/SendSMSChallenge/function.json)]
+
+And here is the code that generates the four-digit challenge code and sends the SMS message:
+
+[!code-python[Main](~/samples-durable-functions-python/samples/human_interaction/SendSMSChallenge/\_\_init\_\_.py)]
+ --- ## Run the sample
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-sequence https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-sequence.md
@@ -9,7 +9,7 @@ ms.author: azfuncdf
# Function chaining in Durable Functions - Hello sequence sample
-Function chaining refers to the pattern of executing a sequence of functions in a particular order. Often the output of one function needs to be applied to the input of another function. This article describes the chaining sequence that you create when you complete the Durable Functions quickstart ([C#](durable-functions-create-first-csharp.md) or [JavaScript](quickstart-js-vscode.md)). For more information about Durable Functions, see [Durable Functions overview](durable-functions-overview.md).
+Function chaining refers to the pattern of executing a sequence of functions in a particular order. Often the output of one function needs to be applied to the input of another function. This article describes the chaining sequence that you create when you complete the Durable Functions quickstart ([C#](durable-functions-create-first-csharp.md), [JavaScript](quickstart-js-vscode.md), or [Python](quickstart-python-vscode.md)). For more information about Durable Functions, see [Durable Functions overview](durable-functions-overview.md).
[!INCLUDE [durable-functions-prerequisites](../../../includes/durable-functions-prerequisites.md)]
@@ -19,7 +19,7 @@ This article explains the following functions in the sample app:
* `E1_HelloSequence`: An [orchestrator function](durable-functions-bindings.md#orchestration-trigger) that calls `E1_SayHello` multiple times in a sequence. It stores the outputs from the `E1_SayHello` calls and records the results. * `E1_SayHello`: An [activity function](durable-functions-bindings.md#activity-trigger) that prepends a string with "Hello".
-* `HttpStart`: An HTTP triggered function that starts an instance of the orchestrator.
+* `HttpStart`: An HTTP triggered [durable client](durable-functions-bindings.md#orchestration-client) function that starts an instance of the orchestrator.
### E1_HelloSequence orchestrator function
@@ -34,7 +34,7 @@ The code calls `E1_SayHello` three times in sequence with different parameter va
# [JavaScript](#tab/javascript) > [!NOTE]
-> JavaScript Durable Functions are available for the Functions 2.0 runtime only.
+> JavaScript Durable Functions are available for the Functions 3.0 runtime only.
#### function.json
@@ -49,18 +49,48 @@ The important thing is the `orchestrationTrigger` binding type. All orchestrator
#### index.js
-Here is the function:
+Here is the orchestrator function:
[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E1_HelloSequence/index.js)]
-All JavaScript orchestration functions must include the [`durable-functions` module](https://www.npmjs.com/package/durable-functions). It's a library that enables you to write Durable Functions in JavaScript. There are three significant differences between an orchestration function and other JavaScript functions:
+All JavaScript orchestration functions must include the [`durable-functions` module](https://www.npmjs.com/package/durable-functions). It's a library that enables you to write Durable Functions in JavaScript. There are three significant differences between an orchestrator function and other JavaScript functions:
-1. The function is a [generator function.](/scripting/javascript/advanced/iterators-and-generators-javascript).
+1. The orchestrator function is a [generator function](/scripting/javascript/advanced/iterators-and-generators-javascript).
2. The function is wrapped in a call to the `durable-functions` module's `orchestrator` method (here `df`). 3. The function must be synchronous. Because the 'orchestrator' method handles calling 'context.done', the function should simply 'return'. The `context` object contains a `df` durable orchestration context object that lets you call other *activity* functions and pass input parameters using its `callActivity` method. The code calls `E1_SayHello` three times in sequence with different parameter values, using `yield` to indicate the execution should wait on the async activity function calls to be returned. The return value of each call is added to the `outputs` array, which is returned at the end of the function.
+# [Python](#tab/python)
+
+> [!NOTE]
+> Python Durable Functions are available for the Functions 3.0 runtime only.
++
+#### function.json
+
+If you use Visual Studio Code or the Azure portal for development, here's the content of the *function.json* file for the orchestrator function. Most orchestrator *function.json* files look almost exactly like this.
+
+[!code-json[Main](~/samples-durable-functions-python/samples/function_chaining/E1_HelloSequence/function.json)]
+
+The important thing is the `orchestrationTrigger` binding type. All orchestrator functions must use this trigger type.
+
+> [!WARNING]
+> To abide by the "no I/O" rule of orchestrator functions, don't use any input or output bindings when using the `orchestrationTrigger` trigger binding. If other input or output bindings are needed, they should instead be used in the context of `activityTrigger` functions, which are called by the orchestrator. For more information, see the [orchestrator function code constraints](durable-functions-code-constraints.md) article.
+
+#### \_\_init\_\_.py
+
+Here is the orchestrator function:
+
+[!code-python[Main](~/samples-durable-functions-python/samples/function_chaining/E1_HelloSequence/\_\_init\_\_.py)]
+
+All Python orchestration functions must include the [`azure-functions-durable` package](https://pypi.org/project/azure-functions-durable). It's a library that enables you to write Durable Functions in Python. There are two significant differences between an orchestrator function and other Python functions:
+
+1. The orchestrator function is a [generator function](https://wiki.python.org/moin/Generators).
+2. The _file_ should register the orchestrator function as an orchestrator by adding `main = df.Orchestrator.create(<orchestrator function name>)` at the end of the file. This helps distinguish it from other helper functions declared in the file.
+
+The `context` object lets you call other *activity* functions and pass input parameters using its `call_activity` method. The code calls `E1_SayHello` three times in sequence with different parameter values, using `yield` to indicate that execution should wait for each async activity call to return. The return value of each call is collected and returned at the end of the function.
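+
+For reference, a minimal sketch of such an orchestrator (the city names are example inputs, and the real sample file may differ slightly):
+
+```python
+import azure.durable_functions as df
+
+def orchestrator_function(context: df.DurableOrchestrationContext):
+    # Each yield waits for the previous activity call to finish before the next one starts.
+    result1 = yield context.call_activity("E1_SayHello", "Tokyo")
+    result2 = yield context.call_activity("E1_SayHello", "Seattle")
+    result3 = yield context.call_activity("E1_SayHello", "London")
+    return [result1, result2, result3]
+
+# Registering the generator with the Durable Functions library makes it an orchestrator.
+main = df.Orchestrator.create(orchestrator_function)
+```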
+ --- ### E1_SayHello activity function
@@ -86,7 +116,7 @@ The *function.json* file for the activity function `E1_SayHello` is similar to t
[!code-json[Main](~/samples-durable-functions/samples/javascript/E1_SayHello/function.json)] > [!NOTE]
-> Any function called by an orchestration function must use the `activityTrigger` binding.
+> All activity functions called by an orchestration function must use the `activityTrigger` binding.
The implementation of `E1_SayHello` is a relatively trivial string formatting operation.
@@ -94,7 +124,26 @@ The implementation of `E1_SayHello` is a relatively trivial string formatting op
[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E1_SayHello/index.js)]
-Unlike a JavaScript orchestration function, an activity function needs no special setup. The input passed to it by the orchestrator function is located on the `context.bindings` object under the name of the `activityTrigger` binding - in this case, `context.bindings.name`. The binding name can be set as a parameter of the exported function and accessed directly, which is what the sample code does.
+Unlike the orchestration function, an activity function needs no special setup. The input passed to it by the orchestrator function is located on the `context.bindings` object under the name of the `activityTrigger` binding - in this case, `context.bindings.name`. The binding name can be set as a parameter of the exported function and accessed directly, which is what the sample code does.
+
+# [Python](#tab/python)
+
+#### E1_SayHello/function.json
+
+The *function.json* file for the activity function `E1_SayHello` is similar to that of `E1_HelloSequence` except that it uses an `activityTrigger` binding type instead of an `orchestrationTrigger` binding type.
+
+[!code-json[Main](~/samples-durable-functions-python/samples/function_chaining/E1_SayHello/function.json)]
+
+> [!NOTE]
+> All activity functions called by an orchestration function must use the `activityTrigger` binding.
+
+The implementation of `E1_SayHello` is a relatively trivial string formatting operation.
+
+#### E1_SayHello/\_\_init\_\_.py
+
+[!code-python[Main](~/samples-durable-functions-python/samples/function_chaining/E1_SayHello/\_\_init\_\_.py)]
+
+Unlike the orchestrator function, an activity function needs no special setup. The input passed to it by the orchestrator function is directly accessible as the parameter to the function.
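+
+For example, if the `activityTrigger` binding in *function.json* is named `name`, a minimal activity implementation could look like this (an illustrative sketch, not the exact sample file):
+
+```python
+def main(name: str) -> str:
+    # The activityTrigger input arrives directly as the function parameter.
+    return f"Hello {name}!"
+```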
---
@@ -122,6 +171,20 @@ To interact with orchestrators, the function must include a `durableClient` inpu
Use `df.getClient` to obtain a `DurableOrchestrationClient` object. You use the client to start an orchestration. It can also help you return an HTTP response containing URLs for checking the status of the new orchestration.
+# [Python](#tab/python)
+
+#### HttpStart/function.json
+
+[!code-json[Main](~/samples-durable-functions-python/samples/function_chaining/HttpStart/function.json)]
+
+To interact with orchestrators, the function must include a `durableClient` input binding.
+
+#### HttpStart/\_\_init\_\_.py
+
+[!code-python[Main](~/samples-durable-functions-python/samples/function_chaining/HttpStart/\_\_init\_\_.py)]
+
+Use the `DurableOrchestrationClient` constructor to obtain a Durable Functions client. You use the client to start an orchestration. It can also help you return an HTTP response containing URLs for checking the status of the new orchestration.
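+
+A minimal sketch of such a client function, assuming a route template like `orchestrators/{functionName}` in *function.json* (illustrative only, not the exact sample file):
+
+```python
+import azure.functions as func
+import azure.durable_functions as df
+
+async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
+    # 'starter' is the durableClient input binding declared in function.json.
+    client = df.DurableOrchestrationClient(starter)
+
+    # Start a new instance of the orchestrator named in the route.
+    instance_id = await client.start_new(req.route_params["functionName"])
+
+    # Return a 202 response with the status-query and management URLs.
+    return client.create_check_status_response(req, instance_id)
+```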
+ --- ## Run the sample
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference-node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-node.md
@@ -15,7 +15,7 @@ As an Express.js, Node.js, or JavaScript developer, if you are new to Azure Func
| Getting started | Concepts| Guided learning | | -- | -- | -- |
-| <ul><li>[Node.js function using Visual Studio Code](./create-first-function-vs-code-node.md)</li><li>[Node.js function with terminal/command prompt](./create-first-function-cli-java.md)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[TypeScript functions](#typescript)</li><li>[Performance&nbsp; considerations](functions-best-practices.md)</li></ul> | <ul><li>[Create serverless applications](/learn/paths/create-serverless-applications/)</li><li>[Refactor Node.js and Express APIs to Serverless APIs](/learn/modules/shift-nodejs-express-apis-serverless/)</li></ul> |
+| <ul><li>[Node.js function using Visual Studio Code](./create-first-function-vs-code-node.md)</li><li>[Node.js function with terminal/command prompt](./create-first-function-cli-node.md)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[TypeScript functions](#typescript)</li><li>[Performance&nbsp; considerations](functions-best-practices.md)</li></ul> | <ul><li>[Create serverless applications](/learn/paths/create-serverless-applications/)</li><li>[Refactor Node.js and Express APIs to Serverless APIs](/learn/modules/shift-nodejs-express-apis-serverless/)</li></ul> |
## JavaScript function basics
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/ip-addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/ip-addresses.md
@@ -23,7 +23,7 @@ You need to open some outgoing ports in your server's firewall to allow the Appl
| Purpose | URL | IP | Ports | | --- | --- | --- | --- |
-| Telemetry |dc.applicationinsights.azure.com<br/>dc.applicationinsights.microsoft.com<br/>dc.services.visualstudio.com |40.114.241.141<br/>104.45.136.42<br/>40.84.189.107<br/>168.63.242.221<br/>52.167.221.184<br/>52.169.64.244<br/>40.85.218.175<br/>104.211.92.54<br/>52.175.198.74<br/>51.140.6.23<br/>40.71.12.231<br/>13.69.65.22<br/>13.78.108.165<br/>13.70.72.233<br/>20.44.8.7<br/>13.86.218.248<br/>40.79.138.41<br/>52.231.18.241<br/>13.75.38.7<br/>102.133.155.50<br/>52.162.110.67<br/>191.233.204.248<br/>13.69.66.140<br/>13.77.52.29<br/>51.107.59.180<br/>40.71.12.235<br/>20.44.8.10<br/>40.71.13.169<br/>13.66.141.156<br/>40.71.13.170<br/>13.69.65.23<br/>20.44.17.0<br/>20.36.114.207 <br/>51.116.155.246 <br/>51.107.155.178 <br/>51.140.212.64 <br/>13.86.218.255 <br/>20.37.74.240 <br/>65.52.250.236 <br/>13.69.229.240 <br/>52.236.186.210<br/>52.167.107.65 | 443 |
+| Telemetry |dc.applicationinsights.azure.com<br/>dc.applicationinsights.microsoft.com<br/>dc.services.visualstudio.com |40.114.241.141<br/>104.45.136.42<br/>40.84.189.107<br/>168.63.242.221<br/>52.167.221.184<br/>52.169.64.244<br/>40.85.218.175<br/>104.211.92.54<br/>52.175.198.74<br/>51.140.6.23<br/>40.71.12.231<br/>13.69.65.22<br/>13.78.108.165<br/>13.70.72.233<br/>20.44.8.7<br/>13.86.218.248<br/>40.79.138.41<br/>52.231.18.241<br/>13.75.38.7<br/>102.133.155.50<br/>52.162.110.67<br/>191.233.204.248<br/>13.69.66.140<br/>13.77.52.29<br/>51.107.59.180<br/>40.71.12.235<br/>20.44.8.10<br/>40.71.13.169<br/>13.66.141.156<br/>40.71.13.170<br/>13.69.65.23<br/>20.44.17.0<br/>20.36.114.207 <br/>51.116.155.246 <br/>51.107.155.178 <br/>51.140.212.64 <br/>13.86.218.255 <br/>20.37.74.240 <br/>65.52.250.236 <br/>13.69.229.240 <br/>52.236.186.210<br/>52.167.107.65<br/>40.71.12.237<br/>40.78.229.32<br/>40.78.229.33 | 443 |
| Live Metrics Stream | live.applicationinsights.azure.com<br/>rt.applicationinsights.microsoft.com<br/>rt.services.visualstudio.com|23.96.28.38<br/>13.92.40.198<br/>40.112.49.101<br/>40.117.80.207<br/>157.55.177.6<br/>104.44.140.84<br/>104.215.81.124<br/>23.100.122.113| 443 | ## Status Monitor
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/javascript-react-plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/javascript-react-plugin.md
@@ -65,7 +65,10 @@ class MyComponent extends React.Component {
... }
-export default withAITracking(reactPlugin, appInsights, MyComponent);
+// withAITracking takes 4 parameters ( reactPlugin, Component, ComponentName, className)
+// the first two are required and the other two are optional.
+
+export default withAITracking(reactPlugin, MyComponent);
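+
+// The full signature also accepts the two optional arguments, for example (names here are illustrative):
+// export default withAITracking(reactPlugin, MyComponent, "MyComponent", "my-component-css-class");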
``` ## Configuration
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-log.md
@@ -27,7 +27,7 @@ You can also create log alert rules using Azure Resource Manager templates, whic
Here are the steps to get started writing queries for alerts:
-1. Go to the resource you would like to alert on.
+1. Go to the resource you would like to alert on. Consider setting up alert rules on multiple resources by selecting a subscription or resource group scope whenever possible. Alerting on multiple resources reduces costs and the need to manage multiple alert rules.
1. Under **Monitor**, select **Logs**. 1. Query the log data that can indicate the issue. You can use the [alert query examples topic](../log-query/example-queries.md) to understand what you can discover or [get started on writing your own query](../log-query/log-analytics-tutorial.md). Also, [learn how to create optimized alert queries](alerts-log-query.md). 1. Select the '+ New Alert Rule' button to start the alert creation flow.
@@ -149,7 +149,7 @@ Here the steps to get started writing queries for alerts:
1. Choose [alert splitting by dimensions](alerts-unified-log.md#split-by-alert-dimensions), if needed: - **Resource ID column** is selected automatically, if detected, and changes the context of the fired alert to the record's resource. - **Resource ID column** can be de-selected to fire alerts on subscription or resource groups. De-selecting is useful when query results are based on cross-resources. For example, a query that check if 80% of the resource group's virtual machines are experiencing high CPU usage.
- - Up to six additional splittings can be also selected for any number or text columns types using the dimensions table.
+ - Up to six more splittings can also be selected for any number or text column type using the dimensions table.
- Alerts are fired separately according to splitting based on unique combinations and alert payload includes this information. ![Select aggregation parameters and splitting](media/alerts-log/select-aggregation-parameters-and-splitting.png)
@@ -320,4 +320,4 @@ On success for creation, 201 is returned. On success for update, 200 is returned
* Learn about [log alerts](./alerts-unified-log.md). * Create log alerts using [Azure Resource Manager Templates](./alerts-log-create-templates.md). * Understand [webhook actions for log alerts](./alerts-log-webhook.md).
-* Learn more about [log queries](../log-query/log-query-overview.md).
\ No newline at end of file
+* Learn more about [log queries](../log-query/log-query-overview.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/design-logs-deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/design-logs-deployment.md
@@ -23,7 +23,7 @@ A Log Analytics workspace provides:
* Data isolation by granting different users access rights following one of our recommended design strategies. * Scope for configuration of settings like [pricing tier](./manage-cost-storage.md#changing-pricing-tier), [retention](./manage-cost-storage.md#change-the-data-retention-period), and [data capping](./manage-cost-storage.md#manage-your-maximum-daily-data-volume).
-Workspaces are hosted on a physical clusters. By default, the system is creating and managing these clusters. Customers that ingest more than 4TB/day are expected to create their own dedicated clusters for their workspaces - it enables them better control and higher ingestion rate.
+Workspaces are hosted on physical clusters. By default, the system creates and manages these clusters. Customers that ingest more than 4 TB/day are expected to create their own dedicated clusters for their workspaces, which gives them better control and a higher ingestion rate.
This article provides a detailed overview of the design and migration considerations, access control overview, and an understanding of the design implementations we recommend for your IT organization.
@@ -159,4 +159,4 @@ While planning your migration to this model, consider the following:
## Next steps
-To implement the security permissions and controls recommended in this guide, review [manage access to logs](manage-access.md).
\ No newline at end of file
+To implement the security permissions and controls recommended in this guide, review [manage access to logs](manage-access.md).
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-cli.md
@@ -4,6 +4,7 @@ description: Use Azure Resource Manager and Azure CLI to deploy resources to Azu
ms.topic: conceptual ms.date: 10/22/2020 ---+ # Deploy resources with ARM templates and Azure CLI This article explains how to use Azure CLI with Azure Resource Manager templates (ARM templates) to deploy your resources to Azure. If you aren't familiar with the concepts of deploying and managing your Azure solutions, see [template deployment overview](overview.md).
@@ -12,7 +13,7 @@ The deployment commands changed in Azure CLI version 2.2.0. The examples in this
[!INCLUDE [sample-cli-install](../../../includes/sample-cli-install.md)]
-If you don't have Azure CLI installed, you can use the Cloud Shell. For more information, see [Deploy ARM templates from Cloud Shell](deploy-cloud-shell.md).
+If you don't have Azure CLI installed, you can use Azure Cloud Shell. For more information, see [Deploy ARM templates from Azure Cloud Shell](deploy-cloud-shell.md).
## Deployment scope
@@ -163,7 +164,7 @@ To pass parameter values, you can use either inline parameters or a parameter fi
### Inline parameters
-To pass inline parameters, provide the values in `parameters`. For example, to pass a string and array to a template is a Bash shell, use:
+To pass inline parameters, provide the values in `parameters`. For example, to pass a string and array to a template in a Bash shell, use:
```azurecli-interactive az deployment group create \
@@ -185,7 +186,7 @@ az deployment group create \
Getting a parameter value from a file is helpful when you need to provide configuration values. For example, you can provide [cloud-init values for a Linux virtual machine](../../virtual-machines/linux/using-cloud-init.md).
-The arrayContent.json format is:
+The _arrayContent.json_ format is:
```json [
@@ -222,7 +223,7 @@ Rather than passing parameters as inline values in your script, you may find it
For more information about the parameter file, see [Create Resource Manager parameter file](parameter-files.md).
-To pass a local parameter file, use `@` to specify a local file named storage.parameters.json.
+To pass a local parameter file, use `@` to specify a local file named _storage.parameters.json_.
```azurecli-interactive az deployment group create \
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-cloud-shell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-cloud-shell.md
@@ -1,12 +1,13 @@
--- title: Deploy templates with Cloud Shell
-description: Use Azure Resource Manager and Cloud Shell to deploy resources to Azure. The resources are defined in an Azure Resource Manager template.
+description: Use Azure Resource Manager and Azure Cloud Shell to deploy resources to Azure. The resources are defined in an Azure Resource Manager template (ARM template).
ms.topic: conceptual ms.date: 10/22/2020 ---
-# Deploy ARM templates from Cloud Shell
-You can use [Cloud Shell](../../cloud-shell/overview.md) to deploy an Azure Resource Manager template (ARM template). You can deploy either an ARM template that is stored remotely, or an ARM template that is stored on the local storage account for Cloud Shell.
+# Deploy ARM templates from Azure Cloud Shell
+
+You can use [Azure Cloud Shell](../../cloud-shell/overview.md) to deploy an Azure Resource Manager template (ARM template). You can deploy either an ARM template that is stored remotely, or an ARM template that is stored on the local storage account for Cloud Shell.
You can deploy to any scope. This article shows deploying to a resource group.
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-portal.md
@@ -97,11 +97,11 @@ If you want to execute a deployment but not use any of the templates in the Mark
- **Select template**: deploy the template. - **Edit template**: edit the quickstart template before you deploy it.
-1. Select **Edit template** to explore the portal template editor. The template is loaded in the editor. Notice there are two parameters: **storageAccountType** and **location**.
+1. Select **Edit template** to explore the portal template editor. The template is loaded in the editor. Notice there are two parameters: `storageAccountType` and `location`.
![Create template](./media/deploy-portal/show-json.png)
-1. Make a minor change to the template. For example, update the **storageAccountName** variable to:
+1. Make a minor change to the template. For example, update the `storageAccountName` variable to:
```json "storageAccountName": "[concat('azstore', uniquestring(resourceGroup().id))]"
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-powershell.md
@@ -4,32 +4,33 @@ description: Use Azure Resource Manager and Azure PowerShell to deploy resources
ms.topic: conceptual ms.date: 10/22/2020 ---+ # Deploy resources with ARM templates and Azure PowerShell This article explains how to use Azure PowerShell with Azure Resource Manager templates (ARM templates) to deploy your resources to Azure. If you aren't familiar with the concepts of deploying and managing your Azure solutions, see [template deployment overview](overview.md). ## Prerequisites
-You need a template to deploy. If you don't already have one, download and save an [example template](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json) from the Azure Quickstart templates repo. The local file name used in this article is **c:\MyTemplates\azuredeploy.json**.
+You need a template to deploy. If you don't already have one, download and save an [example template](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json) from the Azure Quickstart templates repo. The local file name used in this article is _C:\MyTemplates\azuredeploy.json_.
You need to install Azure PowerShell and connect to Azure: - **Install Azure PowerShell cmdlets on your local computer.** For more information, see [Get started with Azure PowerShell](/powershell/azure/get-started-azureps). - **Connect to Azure by using [Connect-AZAccount](/powershell/module/az.accounts/connect-azaccount)**. If you have multiple Azure subscriptions, you might also need to run [Set-AzContext](/powershell/module/Az.Accounts/Set-AzContext). For more information, see [Use multiple Azure subscriptions](/powershell/azure/manage-subscriptions-azureps).
-If you don't have PowerShell installed, you can use the Cloud Shell. For more information, see [Deploy ARM templates from Cloud Shell](deploy-cloud-shell.md).
+If you don't have PowerShell installed, you can use Azure Cloud Shell. For more information, see [Deploy ARM templates from Azure Cloud Shell](deploy-cloud-shell.md).
## Deployment scope You can target your deployment to a resource group, subscription, management group, or tenant. Depending on the scope of the deployment, you use different commands.
-* To deploy to a **resource group**, use [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment):
+- To deploy to a **resource group**, use [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment):
```azurepowershell New-AzResourceGroupDeployment -ResourceGroupName <resource-group-name> -TemplateFile <path-to-template> ```
-* To deploy to a **subscription**, use New-AzSubscriptionDeployment:
+- To deploy to a **subscription**, use [New-AzSubscriptionDeployment](/powershell/module/az.resources/new-azdeployment), which is an alias of the `New-AzDeployment` cmdlet:
```azurepowershell New-AzSubscriptionDeployment -Location <location> -TemplateFile <path-to-template>
@@ -37,7 +38,7 @@ You can target your deployment to a resource group, subscription, management gro
For more information about subscription level deployments, see [Create resource groups and resources at the subscription level](deploy-to-subscription.md).
-* To deploy to a **management group**, use [New-AzManagementGroupDeployment](/powershell/module/az.resources/New-AzManagementGroupDeployment).
+- To deploy to a **management group**, use [New-AzManagementGroupDeployment](/powershell/module/az.resources/New-AzManagementGroupDeployment).
```azurepowershell New-AzManagementGroupDeployment -Location <location> -TemplateFile <path-to-template>
@@ -45,7 +46,7 @@ You can target your deployment to a resource group, subscription, management gro
For more information about management group level deployments, see [Create resources at the management group level](deploy-to-management-group.md).
-* To deploy to a **tenant**, use [New-AzTenantDeployment](/powershell/module/az.resources/new-aztenantdeployment).
+- To deploy to a **tenant**, use [New-AzTenantDeployment](/powershell/module/az.resources/new-aztenantdeployment).
```azurepowershell New-AzTenantDeployment -Location <location> -TemplateFile <path-to-template>
@@ -203,7 +204,7 @@ Rather than passing parameters as inline values in your script, you may find it
For more information about the parameter file, see [Create Resource Manager parameter file](parameter-files.md).
-To pass a local parameter file, use the **TemplateParameterFile** parameter:
+To pass a local parameter file, use the `TemplateParameterFile` parameter:
```powershell New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup `
@@ -211,7 +212,7 @@ New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName Example
-TemplateParameterFile c:\MyTemplates\storage.parameters.json ```
-To pass an external parameter file, use the **TemplateParameterUri** parameter:
+To pass an external parameter file, use the `TemplateParameterUri` parameter:
```powershell New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup `
@@ -224,4 +225,4 @@ New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName Example
- To roll back to a successful deployment when you get an error, see [Rollback on error to successful deployment](rollback-on-error.md). - To specify how to handle resources that exist in the resource group but aren't defined in the template, see [Azure Resource Manager deployment modes](deployment-modes.md). - To understand how to define parameters in your template, see [Understand the structure and syntax of ARM templates](template-syntax.md).-- For information about deploying a template that requires a SAS token, see [Deploy private template with SAS token](secure-template-with-sas-token.md).
+- For information about deploying a template that requires a SAS token, see [Deploy private ARM template with SAS token](secure-template-with-sas-token.md).
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-rest.md
@@ -4,6 +4,7 @@ description: Use Azure Resource Manager and Resource Manager REST API to deploy
ms.topic: conceptual ms.date: 10/22/2020 ---+ # Deploy resources with ARM templates and Azure Resource Manager REST API This article explains how to use the Azure Resource Manager REST API with Azure Resource Manager templates (ARM templates) to deploy your resources to Azure.
@@ -14,13 +15,13 @@ You can either include your template in the request body or link to a file. When
You can target your deployment to a resource group, Azure subscription, management group, or tenant. Depending on the scope of the deployment, you use different commands.
-* To deploy to a **resource group**, use [Deployments - Create](/rest/api/resources/deployments/createorupdate). The request is sent to:
+- To deploy to a **resource group**, use [Deployments - Create](/rest/api/resources/deployments/createorupdate). The request is sent to:
```HTTP PUT https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-06-01 ```
-* To deploy to a **subscription**, use [Deployments - Create At Subscription Scope](/rest/api/resources/deployments/createorupdateatsubscriptionscope). The request is sent to:
+- To deploy to a **subscription**, use [Deployments - Create At Subscription Scope](/rest/api/resources/deployments/createorupdateatsubscriptionscope). The request is sent to:
```HTTP PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-06-01
@@ -28,7 +29,7 @@ You can target your deployment to a resource group, Azure subscription, manageme
For more information about subscription level deployments, see [Create resource groups and resources at the subscription level](deploy-to-subscription.md).
-* To deploy to a **management group**, use [Deployments - Create At Management Group Scope](/rest/api/resources/deployments/createorupdateatmanagementgroupscope). The request is sent to:
+- To deploy to a **management group**, use [Deployments - Create At Management Group Scope](/rest/api/resources/deployments/createorupdateatmanagementgroupscope). The request is sent to:
```HTTP PUT https://management.azure.com/providers/Microsoft.Management/managementGroups/{groupId}/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-06-01
@@ -36,7 +37,7 @@ You can target your deployment to a resource group, Azure subscription, manageme
For more information about management group level deployments, see [Create resources at the management group level](deploy-to-management-group.md).
-* To deploy to a **tenant**, use [Deployments - Create Or Update At Tenant Scope](/rest/api/resources/deployments/createorupdateattenantscope). The request is sent to:
+- To deploy to a **tenant**, use [Deployments - Create Or Update At Tenant Scope](/rest/api/resources/deployments/createorupdateattenantscope). The request is sent to:
```HTTP PUT https://management.azure.com/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-06-01
@@ -77,7 +78,7 @@ The examples in this article use resource group deployments.
In the request body, provide a link to your template and parameter file. For more information about the parameter file, see [Create Resource Manager parameter file](parameter-files.md).
- Notice the **mode** is set to **Incremental**. To run a complete deployment, set **mode** to **Complete**. Be careful when using the complete mode as you can inadvertently delete resources that aren't in your template.
+ Notice the `mode` is set to **Incremental**. To run a complete deployment, set `mode` to **Complete**. Be careful when using the complete mode as you can inadvertently delete resources that aren't in your template.
```json {
@@ -116,9 +117,9 @@ The examples in this article use resource group deployments.
} ```
- You can set up your storage account to use a shared access signature (SAS) token. For more information, see [Delegating Access with a Shared Access Signature](/rest/api/storageservices/delegating-access-with-a-shared-access-signature).
+ You can set up your storage account to use a shared access signature (SAS) token. For more information, see [Delegate access with a shared access signature](/rest/api/storageservices/delegate-access-with-shared-access-signature).
- If you need to provide a sensitive value for a parameter (such as a password), add that value to a key vault. Retrieve the key vault during deployment as shown in the previous example. For more information, see [Pass secure values during deployment](key-vault-parameter.md).
+ If you need to provide a sensitive value for a parameter (such as a password), add that value to a key vault. Retrieve the key vault during deployment as shown in the previous example. For more information, see [Use Azure Key Vault to pass secure parameter value during deployment](key-vault-parameter.md).
1. Instead of linking to files for the template and parameters, you can include them in the request body. The following example shows the request body with the template and parameter inline:
@@ -211,4 +212,3 @@ To avoid conflicts with concurrent deployments and to ensure unique entries in t
- To specify how to handle resources that exist in the resource group but aren't defined in the template, see [Azure Resource Manager deployment modes](deployment-modes.md). - To learn about handling asynchronous REST operations, see [Track asynchronous Azure operations](../management/async-operations.md). - To learn more about templates, see [Understand the structure and syntax of ARM templates](template-syntax.md).-
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-to-azure-button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-azure-button.md
@@ -4,16 +4,17 @@ description: Use button to deploy Azure Resource Manager templates from a GitHub
ms.topic: conceptual ms.date: 11/10/2020 ---+ # Use a deployment button to deploy templates from GitHub repository
-This article describes how to use the **Deploy to Azure** button to deploy templates from a GitHub repository. You can add the button directly to the README.md file in your GitHub repository. Or, you can add the button to a web page that references the repository.
+This article describes how to use the **Deploy to Azure** button to deploy templates from a GitHub repository. You can add the button directly to the _README.md_ file in your GitHub repository. Or, you can add the button to a web page that references the repository.
The deployment scope is determined by the template schema. For more information, see:
-* [resource groups](deploy-to-resource-group.md)
-* [subscriptions](deploy-to-subscription.md)
-* [management groups](deploy-to-management-group.md)
-* [tenants](deploy-to-tenant.md)
+- [resource groups](deploy-to-resource-group.md)
+- [subscriptions](deploy-to-subscription.md)
+- [management groups](deploy-to-management-group.md)
+- [tenants](deploy-to-tenant.md)
## Use common image
@@ -72,7 +73,7 @@ You have your full URL for the link.
Typically, you host the template in a public repo. If you use a private repo, you must include a token to access the raw contents of the template. The token generated by GitHub is valid for only a short time. You would need to update the link often.
-If you're using [Git with Azure Repos](/azure/devops/repos/git/) instead of a GitHub repo, you can still use the Deploy to Azure button. Make sure your repo is public. Use the [Items operation](/rest/api/azure/devops/git/items/get) to get the template. Your request should be in the following format:
+If you're using [Git with Azure Repos](/azure/devops/repos/git/) instead of a GitHub repo, you can still use the **Deploy to Azure** button. Make sure your repo is public. Use the [Items operation](/rest/api/azure/devops/git/items/get) to get the template. Your request should be in the following format:
```http https://dev.azure.com/{organization-name}/{project-name}/_apis/git/repositories/{repository-name}/items?scopePath={url-encoded-path}&api-version=6.0
@@ -84,7 +85,7 @@ Encode this request URL.
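To make the encoding step concrete, here is a rough JavaScript sketch (the template URL is illustrative) that URL-encodes a raw template URL and appends it to the portal's template-deployment URI:

```javascript
// Build a "Deploy to Azure" link from a raw template URL (illustrative URL).
const templateUrl =
    "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json";

// encodeURIComponent escapes ':' and '/' as %3A and %2F, producing the encoded form shown in the link below.
const deployLink =
    "https://portal.azure.com/#create/Microsoft.Template/uri/" + encodeURIComponent(templateUrl);

console.log(deployLink);
```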
Finally, put the link and image together.
-To add the button with Markdown in the README.md file in your GitHub repository or a web page, use:
+To add the button with Markdown in the _README.md_ file in your GitHub repository or a web page, use:
```markdown [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-storage-account-create%2Fazuredeploy.json)
@@ -116,4 +117,4 @@ The portal displays a pane that allows you to easily provide parameter values. T
## Next steps -- To learn more about templates, see [Understand the structure and syntax of Azure Resource Manager templates](template-syntax.md).
+- To learn more about templates, see [Understand the structure and syntax of ARM templates](template-syntax.md).
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/database-copy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-copy.md
@@ -129,6 +129,46 @@ CREATE DATABASE Database2 AS COPY OF server1.Database1;
You can use the steps in the [Copy a SQL Database to a different server](#copy-to-a-different-server) section to copy your database to a server in a different subscription using T-SQL. Make sure you use a login that has the same name and password as the database owner of the source database. Additionally, the login must be a member of the `dbmanager` role or a server administrator, on both source and target servers.
+```sql
+-- Step 1: In the master database of the source server, create the login and user.
+CREATE LOGIN loginname WITH PASSWORD = 'xxxxxxxxx'
+GO
+CREATE USER [loginname] FOR LOGIN [loginname] WITH DEFAULT_SCHEMA=[dbo]
+GO
+
+-- Step 2: In the source database, create the user and grant it the db_owner role.
+CREATE USER [loginname] FOR LOGIN [loginname] WITH DEFAULT_SCHEMA=[dbo]
+GO
+EXEC sp_addrolemember 'db_owner', 'loginname'
+GO
+
+-- Step 3: Capture the SID of the user "loginname" from the master database.
+SELECT [sid] FROM sysusers WHERE [name] = 'loginname'
+
+-- Step 4: Connect to the destination server. In its master database, create the same login and user,
+-- using the SID captured in step 3, and add the user to the dbmanager role.
+CREATE LOGIN loginname WITH PASSWORD = 'xxxxxxxxx', SID = [SID of loginname login on source server]
+GO
+CREATE USER [loginname] FOR LOGIN [loginname] WITH DEFAULT_SCHEMA=[dbo]
+GO
+EXEC sp_addrolemember 'dbmanager', 'loginname'
+GO
+
+-- Step 5: From the destination server, run the database copy using the credentials created above.
+CREATE DATABASE new_database_name
+AS COPY OF source_server_name.source_database_name
+```
+ > [!NOTE] > The [Azure portal](https://portal.azure.com), PowerShell, and the Azure CLI do not support database copy to a different subscription.
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/dynamic-data-masking-configure-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/dynamic-data-masking-configure-portal.md
@@ -18,7 +18,7 @@ ms.date: 04/28/2020
This article shows you how to implement [dynamic data masking](dynamic-data-masking-overview.md) with the Azure portal. You can also implement dynamic data masking using [Azure SQL Database cmdlets](/powershell/module/az.sql/) or the [REST API](/rest/api/sql/). > [!NOTE]
-> This feature cannot be set using portal for Azure Synapse (use PowerShell or REST API) or SQL Managed Instance. For more information, see [Dynamic Data Masking](/sql/relational-databases/security/dynamic-data-masking).
+> This feature cannot be set using portal for SQL Managed Instance (use PowerShell or REST API). For more information, see [Dynamic Data Masking](/sql/relational-databases/security/dynamic-data-masking).
## Set up dynamic data masking for your database using the Azure portal
@@ -54,4 +54,4 @@ This article shows you how to implement [dynamic data masking](dynamic-data-mask
## Next steps - For an overview of dynamic data masking, see [dynamic data masking](dynamic-data-masking-overview.md).-- You can also implement dynamic data masking using [Azure SQL Database cmdlets](/powershell/module/az.sql/) or the [REST API](/rest/api/sql/).\ No newline at end of file
+- You can also implement dynamic data masking using [Azure SQL Database cmdlets](/powershell/module/az.sql/) or the [REST API](/rest/api/sql/).
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/elastic-query-getting-started-vertical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-query-getting-started-vertical.md
@@ -71,6 +71,7 @@ INSERT INTO [dbo].[CustomerInformation] ([CustomerID], [CustomerName], [Company]
SECRET = '<password>'; ```
+ The "master_key_password" is a strong password of your choosing used to encrypt the connection credentials.
The "username" and "password" should be the username and password used to log in into the Customers database. Authentication using Azure Active Directory with elastic queries is not currently supported.
@@ -123,4 +124,4 @@ For pricing information, see [SQL Database Pricing](https://azure.microsoft.com/
* For syntax and sample queries for vertically partitioned data, see [Querying vertically partitioned data)](elastic-query-vertical-partitioning.md) * For a horizontal partitioning (sharding) tutorial, see [Getting started with elastic query for horizontal partitioning (sharding)](elastic-query-getting-started.md). * For syntax and sample queries for horizontally partitioned data, see [Querying horizontally partitioned data)](elastic-query-horizontal-partitioning.md)
-* See [sp\_execute \_remote](/sql/relational-databases/system-stored-procedures/sp-execute-remote-azure-sql-database) for a stored procedure that executes a Transact-SQL statement on a single remote Azure SQL Database or set of databases serving as shards in a horizontal partitioning scheme.
\ No newline at end of file
+* See [sp\_execute \_remote](/sql/relational-databases/system-stored-procedures/sp-execute-remote-azure-sql-database) for a stored procedure that executes a Transact-SQL statement on a single remote Azure SQL Database or set of databases serving as shards in a horizontal partitioning scheme.
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/elastic-scale-introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-scale-introduction.md
@@ -17,7 +17,7 @@ You can easily scale out databases in Azure SQL Database using the **Elastic Dat
* [Elastic Database client library](elastic-database-client-library.md): The client library is a feature that allows you to create and maintain sharded databases. See [Get started with Elastic Database tools](elastic-scale-get-started.md). * [Elastic Database split-merge tool](elastic-scale-overview-split-and-merge.md): moves data between sharded databases. This tool is useful for moving data from a multi-tenant database to a single-tenant database (or vice-versa). See [Elastic database Split-Merge tool tutorial](elastic-scale-configure-deploy-split-and-merge.md).
-* [Elastic Database jobs](elastic-jobs-overview.md): Use jobs to manage large numbers of databases in Azure SQL Database. Easily perform administrative operations such as schema changes, credentials management, reference data updates, performance data collection, or tenant (customer) telemetry collection using jobs.
+* [Elastic Database jobs](elastic-jobs-overview.md) (preview): Use jobs to manage large numbers of databases in Azure SQL Database. Easily perform administrative operations such as schema changes, credentials management, reference data updates, performance data collection, or tenant (customer) telemetry collection using jobs.
* [Elastic Database query](elastic-query-overview.md) (preview): Enables you to run a Transact-SQL query that spans multiple databases. This enables connection to reporting tools such as Excel, Power BI, Tableau, etc. * [Elastic transactions](elastic-transactions-overview.md): This feature allows you to run transactions that span several databases. Elastic database transactions are available for .NET applications using ADO .NET and integrate with the familiar programming experience using the [System.Transaction classes](/dotnet/api/system.transactions).
@@ -97,4 +97,4 @@ To see the specifics of the elastic pool, see [Price and performance considerati
[1]:./media/elastic-scale-introduction/tools.png [2]:./media/elastic-scale-introduction/h_versus_vert.png [3]:./media/elastic-scale-introduction/overview.png
-[4]:./media/elastic-scale-introduction/single_v_multi_tenant.png
\ No newline at end of file
+[4]:./media/elastic-scale-introduction/single_v_multi_tenant.png
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/cpp/windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/cpp/windows.md
@@ -71,6 +71,13 @@ Insert this code below your `IntentRecognizer`. Make sure that you replace `"You
This example uses the `AddIntent()` function to individually add intents. If you want to add all intents from a model, use `AddAllIntents(model)` and pass the model.
+> [!NOTE]
+> You can create a LanguageUnderstandingModel by passing an endpoint URL to the FromEndpoint method.
+> Speech SDK only supports LUIS v2.0 endpoints, and
+> LUIS v2.0 endpoints always follow one of these two patterns:
+> * `https://{AzureResourceName}.cognitiveservices.azure.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
+> * `https://{Region}.api.cognitive.microsoft.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
+ ## Recognize an intent From the `IntentRecognizer` object, you're going to call the `RecognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop recognizing speech. For simplicity we'll wait on the future returned to complete.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/csharp/dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/csharp/dotnet.md
@@ -69,6 +69,13 @@ You need to associate a `LanguageUnderstandingModel` with the intent recognizer,
This example uses the `AddIntent()` function to individually add intents. If you want to add all intents from a model, use `AddAllIntents(model)` and pass the model.
+> [!NOTE]
+> You can create a LanguageUnderstandingModel by passing an endpoint URL to the FromEndpoint method.
+> Speech SDK only supports LUIS v2.0 endpoints, and
+> LUIS v2.0 endpoints always follow one of these two patterns:
+> * `https://{AzureResourceName}.cognitiveservices.azure.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
+> * `https://{Region}.api.cognitive.microsoft.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
+ ## Recognize an intent From the `IntentRecognizer` object, you're going to call the `RecognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop recognizing speech.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/header https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/header.md
@@ -18,9 +18,3 @@ After satisfying a few prerequisites, recognizing speech and identifying intents
> * Using the `IntentRecognizer` object, start the recognition process for a single utterance. > * Inspect the `IntentRecognitionResult` returned.
-> [!NOTE]
-> You can create a LanguageUnderstandingModel by passing an endpoint URL to the FromEndpoint method.
-> Speech SDK only supports LUIS v2.0 endpoints, and
-> LUIS v2.0 endpoints always follow one of these two patterns:
-> * `https://{AzureResourceName}.cognitiveservices.azure.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
-> * `https://{Region}.api.cognitive.microsoft.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/java/jre https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/java/jre.md
@@ -66,6 +66,13 @@ Insert this code below your `IntentRecognizer`. Make sure that you replace `"You
This example uses the `addIntent()` function to individually add intents. If you want to add all intents from a model, use `addAllIntents(model)` and pass the model.
+> [!NOTE]
+> You can create a LanguageUnderstandingModel by passing an endpoint URL to the FromEndpoint method.
+> Speech SDK only supports LUIS v2.0 endpoints, and
+> LUIS v2.0 endpoints always follow one of these two patterns:
+> * `https://{AzureResourceName}.cognitiveservices.azure.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
+> * `https://{Region}.api.cognitive.microsoft.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
+ ## Recognize an intent From the `IntentRecognizer` object, you're going to call the `recognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop recognizing speech.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/javascript/browser https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/javascript/browser.md
@@ -182,6 +182,14 @@ Insert this code below your `IntentRecognizer`. Make sure that you replace `"You
recognizer.addAllIntents(lm); } ```+
+> [!NOTE]
+> You can create a LanguageUnderstandingModel by passing an endpoint URL to the FromEndpoint method.
+> Speech SDK only supports LUIS v2.0 endpoints, and
+> LUIS v2.0 endpoints always follow one of these two patterns:
+> * `https://{AzureResourceName}.cognitiveservices.azure.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
+> * `https://{Region}.api.cognitive.microsoft.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
+ ## Recognize an intent From the `IntentRecognizer` object, you're going to call the `recognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop recognizing speech.
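For the JavaScript SDK used in this quickstart, a minimal sketch of building the model from an endpoint might look like the following. The endpoint value is a placeholder, and the exact `fromEndpoint` parameter type (string versus `URL` object) is an assumption to verify against the SDK reference.

```javascript
// Sketch only: create the model from a LUIS v2.0 endpoint URL instead of an app ID,
// then register all of its intents with the recognizer (variable names match the quickstart).
const lm = SpeechSDK.LanguageUnderstandingModel.fromEndpoint(
    "https://{AzureResourceName}.cognitiveservices.azure.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q="
);
recognizer.addAllIntents(lm);
```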
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/python/python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/python/python.md
@@ -66,6 +66,13 @@ Insert this code below your `IntentRecognizer`. Make sure that you replace `"You
This example uses the `add_intents()` function to add a list of explicitly-defined intents. If you want to add all intents from a model, use `add_all_intents(model)` and pass the model.
+> [!NOTE]
+> You can create a LanguageUnderstandingModel by passing an endpoint URL to the FromEndpoint method.
+> Speech SDK only supports LUIS v2.0 endpoints, and
+> LUIS v2.0 endpoints always follow one of these two patterns:
+> * `https://{AzureResourceName}.cognitiveservices.azure.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
+> * `https://{Region}.api.cognitive.microsoft.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
+ ## Recognize an intent From the `IntentRecognizer` object, you're going to call the `recognize_once()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop recognizing speech.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Translator/migrate-to-v3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/migrate-to-v3.md
@@ -15,9 +15,7 @@ ms.author: swmachan
# Translator V2 to V3 Migration > [!NOTE]
-> V2 was deprecated on April 30, 2018. Please migrate your applications to V3 in order to take advantage of new functionality available exclusively in V3.
->
-> The Microsoft Translator Hub will be retired on May 17, 2019. [View important migration information and dates](https://www.microsoft.com/translator/business/hub/).
+> V2 was deprecated on April 30, 2018. Please migrate your applications to V3 in order to take advantage of new functionality available exclusively in V3. V2 will be retired on May 24, 2021.
The Microsoft Translator team has released Version 3 (V3) of the Translator. This release includes new features, deprecated methods, and a new format for sending data to, and receiving data from, the Microsoft Translator Service. This document provides information for changing applications to use V3.
@@ -141,4 +139,4 @@ No version of the Translator creates a record of your translations. Your transla
## Next steps > [!div class="nextstepaction"]
-> [View V3.0 Documentation](reference/v3-0-reference.md)
\ No newline at end of file
+> [View V3.0 Documentation](reference/v3-0-reference.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/quickstarts/client-library.md
@@ -18,7 +18,7 @@ keywords: forms processing, automated data processing
# Quickstart: Use the Form Recognizer client library
-Get started with the Form Recognizer using the language of your choice. Azure Form Recognizer is a cognitive service that lets you build automated data processing software using machine learning technology. Identify and extract text, key/value pairs and table data from your form documents&mdash;the service outputs structured data that includes the relationships in the original file. Follow these steps to install the SDK package and try out the example code for basic tasks. The Form Recognizer client library currently targets v2.0 of the From Recognizer service.
+Get started with the Form Recognizer using the language of your choice. Azure Form Recognizer is a cognitive service that lets you build automated data processing software using machine learning technology. Identify and extract text, key/value pairs and table data from your form documents&mdash;the service outputs structured data that includes the relationships in the original file. Follow these steps to install the SDK package and try out the example code for basic tasks. The Form Recognizer client library currently targets v2.0 of the Form Recognizer service.
Use the Form Recognizer client library to:
@@ -58,4 +58,4 @@ Use the Form Recognizer client library to:
[!INCLUDE [REST API quickstart](../includes/quickstarts/rest-api.md)]
-::: zone-end
\ No newline at end of file
+::: zone-end
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/personalizer/what-is-personalizer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/what-is-personalizer.md
@@ -11,7 +11,7 @@ keywords: personalizer, Azure personalizer, machine learning
# What is Personalizer?
-Azure Personalizer is a cloud-based service that helps your applications choose the best content item to show your users. You can use the Personalizer service to determine what product to suggest to shoppers or to figure out the optimal position for an advertisement. After the content is shown to the user, the system monitors real-time user behavior and reports a reward score back to the Personalizer service. This ensures continuous improvement of the machine learning model, and Personalizer's ability to select the best content item based on the contextual information it receives.
+Azure Personalizer is a cloud-based service that helps your applications choose the best content item to show your users. You can use the Personalizer service to determine what product to suggest to shoppers or to figure out the optimal position for an advertisement. After the content is shown to the user, your application monitors the user's reaction and reports a reward score back to the Personalizer service. This ensures continuous improvement of the machine learning model, and Personalizer's ability to select the best content item based on the contextual information it receives.
> [!TIP] > Content is any unit of information, such as text, images, URL, emails, or anything else that you want to select from and show to your users.
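As a rough illustration of that Rank-then-Reward loop against the REST API (resource name, key, event ID, and action payloads are placeholders, not the service's exact contract), a single personalization event might look like this:

```javascript
// Sketch: rank candidate actions, show the chosen one, then report a reward score.
const endpoint = "https://<your-personalizer-resource>.cognitiveservices.azure.com";
const headers = { "Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json" };

async function personalizeOnce() {
    // Ask Personalizer which action to show for this context.
    const rankResponse = await fetch(`${endpoint}/personalizer/v1.0/rank`, {
        method: "POST",
        headers,
        body: JSON.stringify({
            eventId: "event-001",
            contextFeatures: [{ timeOfDay: "morning", device: "mobile" }],
            actions: [
                { id: "article-a", features: [{ topic: "sports" }] },
                { id: "article-b", features: [{ topic: "cooking" }] }
            ]
        })
    });
    const { rewardActionId } = await rankResponse.json();

    // ...render the content for rewardActionId, observe the user's reaction, then report the outcome:
    await fetch(`${endpoint}/personalizer/v1.0/events/event-001/reward`, {
        method: "POST",
        headers,
        body: JSON.stringify({ value: 1.0 }) // reward score, typically between 0 and 1
    });
}
```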
@@ -60,7 +60,7 @@ Personalizer's **Reward** [API](https://westus2.dev.cognitive.microsoft.com/docs
Use Personalizer when your content:
-* Has a limited set of items (max of ~50) to select from. If you have a larger list, [use a recommendation engine](where-can-you-use-personalizer.md#how-to-use-personalizer-with-a-recommendation-solution) to reduce the list down to 50 items.
+* Has a limited set of actions or items (max of ~50) to select from in each personalization event. If you have a larger list, [use a recommendation engine](where-can-you-use-personalizer.md#how-to-use-personalizer-with-a-recommendation-solution) to reduce the list down to 50 items for each time you call Rank on the Personalizer service.
* Has information describing the content you want ranked: _actions with features_ and _context features_. * Has a minimum of ~1k/day content-related events for Personalizer to be effective. If Personalizer doesn't receive the minimum traffic required, the service takes longer to determine the single best content item.
@@ -120,4 +120,4 @@ After you've had a chance to get started with the Personalizer service, try our
> [!div class="nextstepaction"] > [How Personalizer works](how-personalizer-works.md)
-> [What is Reinforcement Learning?](concepts-reinforcement-learning.md)
\ No newline at end of file
+> [What is Reinforcement Learning?](concepts-reinforcement-learning.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/concepts/data-limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/concepts/data-limits.md
@@ -31,7 +31,15 @@ Use this article to find the limits for the size, and rates that you can send da
| Maximum size of a single document (`/analyze` endpoint) | 125K characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). Does not apply to Text Analytics for health. | | Maximum size of entire request | 1 MB. Also applies to Text Analytics for health. |
-The maximum number of documents you can send in a single request will depend on the API version and feature you're using. The `/analyze` endpoint will reject the entire request if any document exceeds the max size (125K characters)
+
+If a document exceeds the character limit, the API will behave differently depending on the endpoint you're using:
+
+* `/analyze` endpoint:
+ * The API will reject the entire request and return a `400 bad request` error if any document within it exceeds the maximum size.
+* All other endpoints:
+ * The API won't process a document that exceeds the maximum size, and will return an invalid document error for it. If an API request has multiple documents, the API will continue processing them if they are within the character limit.
+
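As a small, SDK-agnostic sketch of guarding against the per-document limit before calling the service, the helper below uses the `/analyze` endpoint's 125K-character limit from the table above and approximates the text element count with plain string length, which can differ slightly for some Unicode content:

```javascript
// Split documents into those safe to send and those that exceed the per-document limit.
const MAX_CHARS_PER_DOCUMENT = 125000; // /analyze endpoint limit; other endpoints have smaller limits

function partitionBySize(documents) {
    const ok = [];
    const tooLarge = [];
    for (const doc of documents) {
        (doc.text.length <= MAX_CHARS_PER_DOCUMENT ? ok : tooLarge).push(doc);
    }
    return { ok, tooLarge };
}

// Example: the second document would cause a 400 error on /analyze, so filter it out first.
const { ok, tooLarge } = partitionBySize([
    { id: "1", text: "short document" },
    { id: "2", text: "x".repeat(200000) }
]);
console.log(`${ok.length} documents within the limit, ${tooLarge.length} over it.`);
```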
+The maximum number of documents you can send in a single request will depend on the API version and feature you're using, which is described in the table below.
#### [Version 3](#tab/version-3)
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/voice-video-calling/about-call-types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/about-call-types.md
@@ -44,8 +44,8 @@ We support H.264 (MPEG-4)
We support up to Full HD 1080p on the native (iOS, Android) SDKs. For Web (JS) SDK we support Standard HD 720p. The quality depends on the available bandwidth. ### Rooms concept
-Rooms are are a set of APIs and SDKs that allow you to easily add audio, video, screen sharing, PSTN and SMS interactions to your website or native application.
-During the preview you can use the group ID to join the same conversation. You can create as many group ID as you need and separate the users by the "rooms". Moving forward will introduce more controls around "rooms"
+Rooms are a set of APIs and SDKs that allow you to easily add audio, video, screen sharing, PSTN and SMS interactions to your website or native application.
+During the preview you can use the group ID to join the same conversation. You can create as many group IDs as you need and separate the users by the "rooms". Moving forward, we will introduce more controls around "rooms".
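As a hedged sketch of that preview pattern, the snippet below joins a group conversation by a shared group ID. It assumes you already have a Communication Services user access token; the package and method names follow the JavaScript calling SDK, but verify the exact setup against its reference, and treat the token and group ID as placeholders.

```javascript
// Sketch: join a group conversation ("room") by a shared group ID with the JavaScript calling SDK.
import { CallClient } from "@azure/communication-calling";
import { AzureCommunicationTokenCredential } from "@azure/communication-common";

async function joinRoom() {
    const callClient = new CallClient();
    const callAgent = await callClient.createCallAgent(
        new AzureCommunicationTokenCredential("<user-access-token>")
    );

    // Everyone who joins with the same group ID (a GUID) lands in the same conversation.
    return callAgent.join({ groupId: "<group-id-guid>" });
}
```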
## Next steps
container-instances https://docs.microsoft.com/en-us/azure/container-instances/container-instances-vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-vnet.md
@@ -15,7 +15,7 @@ This article shows how to use the [az container create][az-container-create] com
For networking scenarios and limitations, see [Virtual network scenarios and resources for Azure Container Instances](container-instances-virtual-network-concepts.md). > [!IMPORTANT]
-> Container group deployment to a virtual network is generally available for Linux containers, in most regions where Azure Container Instances is available. For details, see [Regions and resource availability](container-instances-virtual-network-concepts.md#where-to-deploy).
+> Container group deployment to a virtual network is generally available for Linux containers, in most regions where Azure Container Instances is available. For details, see [Regions and resource availability][container-regions].
Examples in this article are formatted for the Bash shell. If you prefer another shell such as PowerShell or Command Prompt, adjust the line continuation characters accordingly.
@@ -233,3 +233,4 @@ To deploy a new virtual network, subnet, network profile, and container group us
[az-container-show]: /cli/azure/container#az-container-show [az-network-vnet-create]: /cli/azure/network/vnet#az-network-vnet-create [az-network-profile-list]: /cli/azure/network/profile#az-network-profile-list
+[container-regions]: container-instances-region-availability.md
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/change-feed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/change-feed.md
@@ -10,7 +10,7 @@ ms.reviewer: sngun
ms.custom: seodec18, "seo-nov-2020" --- # Change feed in Azure Cosmos DB
-[!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
+[!INCLUDE[appliesto-all-apis-except-table](includes/appliesto-all-apis-except-table.md)]
Change feed in Azure Cosmos DB is a persistent record of changes to a container in the order they occur. Change feed support in Azure Cosmos DB works by listening to an Azure Cosmos container for any changes. It then outputs the sorted list of documents that were changed in the order in which they were modified. The persisted changes can be processed asynchronously and incrementally, and the output can be distributed across one or more consumers for parallel processing.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-write-stored-procedures-triggers-udfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-write-stored-procedures-triggers-udfs.md
@@ -279,7 +279,7 @@ function async_sample() {
## <a id="triggers"></a>How to write triggers
-Azure Cosmos DB supports pre-triggers and post-triggers. Pre-triggers are executed before modifying a database item and post-triggers are executed after modifying a database item.Triggers are not automatic. They must be specified for each database operation where you want them executed.
+Azure Cosmos DB supports pre-triggers and post-triggers. Pre-triggers are executed before modifying a database item, and post-triggers are executed after modifying a database item. Triggers are not automatically executed; they must be specified for each database operation where you want them to execute. After you define a trigger, you should [register and call a pre-trigger](how-to-use-stored-procedures-triggers-udfs.md#pre-triggers) by using the Azure Cosmos DB SDKs.
### <a id="pre-triggers"></a>Pre-triggers
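For example, a minimal pre-trigger written in the server-side JavaScript that Cosmos DB executes might stamp a timestamp onto the item being created (the function and property names here are illustrative):

```javascript
// Pre-trigger sketch: add a creation timestamp to the incoming item before it is written.
function addTimestamp() {
    var context = getContext();
    var request = context.getRequest();
    var itemToCreate = request.getBody();

    // Stamp the item only if the caller didn't supply a timestamp.
    if (!("timestamp" in itemToCreate)) {
        itemToCreate["timestamp"] = new Date().getTime();
    }

    // Replace the request body with the modified item.
    request.setBody(itemToCreate);
}
```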
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/includes/appliesto-all-apis-except-table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/includes/appliesto-all-apis-except-table.md new file mode 100644
@@ -0,0 +1 @@
+APPLIES TO: :::image type="icon" source="../media/applies-to/yes.png" border="false":::SQL API :::image type="icon" source="../media/applies-to/yes.png" border="false":::Cassandra API :::image type="icon" source="../media/applies-to/yes.png" border="false"::: Gremlin API :::image type="icon" source="../media/applies-to/yes.png" border="false"::: Azure Cosmos DB API for MongoDB
\ No newline at end of file
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-feature-support-36 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-feature-support-36.md
@@ -18,11 +18,14 @@ By using the Azure Cosmos DB's API for MongoDB, you can enjoy the benefits of th
## Protocol Support
-The Azure Cosmos DB's API for MongoDB is compatible with MongoDB server version **3.6** by default for new accounts. The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB's API for MongoDB. Note that when using Azure Cosmos DB's API for MongoDB accounts, the 3.6 version of accounts have the endpoint in the format `*.mongo.cosmos.azure.com` whereas the 3.2 version of accounts have the endpoint in the format `*.documents.azure.com`.
+The Azure Cosmos DB's API for MongoDB is compatible with MongoDB server version **3.6** by default for new accounts. The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB's API for MongoDB. Note that accounts created with the 3.6 version have endpoints in the format `*.mongo.cosmos.azure.com`, whereas accounts created with the 3.2 version have endpoints in the format `*.documents.azure.com`.
## Query language support
-Azure Cosmos DB's API for MongoDB provides comprehensive support for MongoDB query language constructs. Below you can find the detailed list of currently supported operations, operators, stages, commands, and options.
+Azure Cosmos DB's API for MongoDB provides comprehensive support for MongoDB query language constructs. The following sections show the detailed list of server operations, operators, stages, commands, and options currently supported by Azure Cosmos DB.
+
+> [!NOTE]
+> This article only lists the supported server commands and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with Azure Cosmos DB's API for MongoDB.
## Database commands
@@ -587,7 +590,7 @@ Azure Cosmos DB supports automatic, server-side sharding. It manages shard creat
## Sessions
-Azure Cosmos DB does not yet support server side sessions commands.
+Azure Cosmos DB does not yet support server-side session commands.
## Next steps
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-feature-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-feature-support.md
@@ -12,7 +12,7 @@ ms.author: sivethe
# Azure Cosmos DB's API for MongoDB (3.2 version): supported features and syntax [!INCLUDE[appliesto-mongodb-api](includes/appliesto-mongodb-api.md)]
-Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB's API for MongoDB using any of the open source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB's API for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB's API for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB's API for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
By using the Azure Cosmos DB's API for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Cosmos DB provides: [global distribution](distribute-data-globally.md), [automatic sharding](partitioning-overview.md), availability and latency guarantees, automatic indexing of every field, encryption at rest, backups, and much more.
@@ -27,12 +27,15 @@ Azure Cosmos DB's API for MongoDB also offers a seamless upgrade experience for
## Query language support
-Azure Cosmos DB's API for MongoDB provides comprehensive support for MongoDB query language constructs. Below you can find the detailed list of currently supported operations, operators, stages, commands and options.
+Azure Cosmos DB's API for MongoDB provides comprehensive support for MongoDB query language constructs. Below you can find the detailed list of currently supported operations, operators, stages, commands, and options.
## Database commands Azure Cosmos DB's API for MongoDB supports the following database commands:
+> [!NOTE]
+> This article only lists the supported server commands and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with Azure Cosmos DB's API for MongoDB.
+ ### Query and write operation commands - delete
@@ -305,7 +308,7 @@ $polygon | ```{ "Location.coordinates": { $near: { $geometry: { type: "Polygon",
When using the `findOneAndUpdate` operation, sort operations on a single field are supported but sort operations on multiple fields are not supported.
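For example, a single-field sort with `findOneAndUpdate` works, while multiple sort keys would not. A minimal PyMongo sketch, with a placeholder connection string, collection, and field names:

```python
from pymongo import MongoClient

# Placeholder connection string for a 3.2 account (*.documents.azure.com endpoint).
client = MongoClient("mongodb://<account>:<key>@<account>.documents.azure.com:10255/?ssl=true")
tasks = client["store"]["tasks"]

# Supported: one sort key while updating the first match.
doc = tasks.find_one_and_update(
    {"status": "open"},
    {"$set": {"status": "claimed"}},
    sort=[("priority", -1)],  # a single field only; multiple sort fields aren't supported
)
print(doc)  # returns the matched document as it was before the update
```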
-## Additional operators
+## Other operators
Operator | Example | Notes --- | --- | --- |
@@ -349,7 +352,7 @@ Cosmos DB supports automatic, native replication at the lowest layers. This logi
## Write Concern
-Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/) which specifies the number of responses required during a write operation. Due to how Cosmos DB handles replication in the background all writes are all automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](consistency-levels.md).
+Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/) that specifies the number of responses required during a write operation. Because of how Cosmos DB handles replication in the background, all writes are automatically written with quorum durability by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](consistency-levels.md).
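A minimal PyMongo sketch of that behavior (connection string and names are placeholders): the driver accepts the requested write concern, but the service applies quorum durability regardless.

```python
from pymongo import MongoClient, WriteConcern

# Placeholder connection string for a 3.2 account (*.documents.azure.com endpoint).
client = MongoClient("mongodb://<account>:<key>@<account>.documents.azure.com:10255/?ssl=true")
db = client["store"]

# The requested write concern is ignored by the service; writes are quorum-durable anyway.
orders = db.get_collection("orders", write_concern=WriteConcern(w="majority"))
orders.insert_one({"sku": "A-100", "qty": 2})
```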
## Sharding
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/monitor-normalized-request-units https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/monitor-normalized-request-units.md
@@ -5,7 +5,7 @@ ms.service: cosmos-db
ms.topic: how-to author: kanshiG ms.author: govindk
-ms.date: 06/25/2020
+ms.date: 01/07/2021
---
@@ -23,13 +23,11 @@ When the normalized RU/s consumption reaches 100% for given partition key range,
Azure Monitor metrics help you find the operations per status code for the SQL API by using the **Total Requests** metric. You can then filter these requests by the 429 status code and split them by **Operation Type**.
-To find the requests which are rate limited, the recommended way is to get this information through diagnostic logs.
-
-If there is continuous peak of 100% normalized RU/s consumption or close to 100% across multiple partition key ranges it's recommended to increase the throughput. You can find out which operations are heavy and their peak usage by utilizing the Azure monitor metrics and Azure monitor diagnostic logs.
-
-In summary, the **Normalized RU Consumption** metric is used to see which partition key range is more warm in terms of usage. So it gives you the skew of throughput towards a partition key range. You can later follow up to see the **PartitionKeyRUConsumption** log in Azure Monitor logs to get information about which logical partition keys are hot in terms of usage. This will point to change in either the partition key choice, or the change in application logic. To resolve the rate limiting, distribute the load of data say across multiple partitions or just increase in the throughput as it is really required.
+To find the requests that are rate limited, the recommended way is to get this information through diagnostic logs.
+If there is a continuous peak of 100% (or close to 100%) normalized RU/s consumption across multiple partition key ranges, it's recommended to increase the throughput. You can find out which operations are heavy and their peak usage by using Azure Monitor metrics and Azure Monitor diagnostic logs.
+In summary, the **Normalized RU Consumption** metric shows which partition key range is warmer in terms of usage, giving you the skew of throughput toward a partition key range. You can then follow up with the **PartitionKeyRUConsumption** log in Azure Monitor logs to see which logical partition keys are hot in terms of usage. That insight points to a change in either the partition key choice or the application logic. To resolve the rate limiting, distribute the data load across more partitions or increase the throughput as required.
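If you prefer to pull the metric programmatically rather than through the portal, a sketch along these lines with the `azure-monitor-query` Python library may help. The resource ID is a placeholder, and the exact client method names can differ between library versions.

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

client = MetricsQueryClient(DefaultAzureCredential())

# Placeholder resource ID of the Azure Cosmos DB account.
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>"
)

# Query the Normalized RU Consumption metric at 5-minute granularity for the last hour.
response = client.query_resource(
    resource_id,
    metric_names=["NormalizedRUConsumption"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.MAXIMUM],
)
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.maximum)
```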
## View the normalized request unit consumption metric
@@ -53,9 +51,9 @@ In summary, the **Normalized RU Consumption** metric is used to see which parti
You can also filter metrics and the chart displayed by a specific **CollectionName**, **DatabaseName**, **PartitionKeyRangeID**, and **Region**. To filter the metrics, select **Add filter** and choose the required property such as **CollectionName** and corresponding value you are interested in. The graph then displays the Normalized RU Consumption units consumed for the container for the selected period.
-You can group metrics by using the **Apply splitting** option.
+You can group metrics by using the **Apply splitting** option. For shared throughput databases, the normalized RU metric shows data at the database granularity only; it doesn't show any data per collection. So for shared throughput databases, you won't see any data when you apply splitting by collection name.
-The normalized request unit consumption metric for each container are displayed as shown in the following image:
+The normalized request unit consumption metric for each container is displayed as shown in the following image:
:::image type="content" source="./media/monitor-normalized-request-units/normalized-request-unit-usage-filters.png" alt-text="Apply filters to normalized request unit consumption metric":::
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/synapse-link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/synapse-link.md
@@ -112,14 +112,16 @@ Synapse Link is not recommended if you are looking for traditional data warehous
## Limitations
-* Today Azure Synapse Link for Azure Cosmos DB is supported for SQL API and Azure Cosmos DB API for MongoDB. It is not supported for Gremlin API and Table API. Support for Cassandra API is in private preview, for more information please contact the [Azure Synapse Link team](mailto:cosmosdbsynapselink@microsoft.com).
+* Azure Synapse Link for Azure Cosmos DB is supported for SQL API and Azure Cosmos DB API for MongoDB. It is not supported for Gremlin API, Cassandra API, and Table API.
-* Currently, the analytical store can only be enabled for new containers. To use analytical store for existing containers, migrate data from your existing containers to new containers using [Azure Cosmos DB migration tools](cosmosdb-migrationchoices.md). You can enable Synapse Link on new and existing Azure Cosmos DB accounts.
+* Analytical store can only be enabled for new containers. To use analytical store for existing containers, migrate data from your existing containers to new containers using [Azure Cosmos DB migration tools](cosmosdb-migrationchoices.md). You can enable Synapse Link on new and existing Azure Cosmos DB accounts.
* For the containers with analytical store turned on, automatic backup and restore of your data in the analytical store is not supported at this time. When Synapse Link is enabled on a database account, Azure Cosmos DB will continue to automatically [take backups](./online-backup-and-restore.md) of your data in the transactional store (only) of containers at the scheduled backup interval, as always. It is important to note that when a container with analytical store turned on is restored to a new account, the container will be restored with only the transactional store and no analytical store enabled. * Accessing the Azure Cosmos DB analytical store with Synapse SQL provisioned is currently not available.
+* Network isolation for Azure Cosmos DB analytical store, using managed private endpoints in Azure Synapse Analytics, is currently not supported.
+ ## Pricing The billing model of Azure Synapse Link includes the costs incurred by using the Azure Cosmos DB analytical store and the Synapse runtime. To learn more, see the [Azure Cosmos DB analytical store pricing](analytical-store-introduction.md#analytical-store-pricing) and [Azure Synapse Analytics pricing](https://azure.microsoft.com/pricing/details/synapse-analytics/) articles.
@@ -136,4 +138,4 @@ To learn more, see the following docs:
* [Frequently asked questions about Azure Synapse Link for Azure Cosmos DB](synapse-link-frequently-asked-questions.md)
-* [Azure Synapse Link for Azure Cosmos DB Use cases](synapse-link-use-cases.md)
\ No newline at end of file
+* [Azure Synapse Link for Azure Cosmos DB Use cases](synapse-link-use-cases.md)
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/table-storage-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-storage-overview.md deleted file mode 100644
@@ -1,35 +0,0 @@
-title: Overview of Azure Table storage
-description: Learn how to use Azure Table storage to store flexible datasets like user data for web applications, address books, device information, or other types of metadata.
-ms.service: cosmos-db
-ms.subservice: cosmosdb-table
-ms.devlang: dotnet
-ms.topic: overview
-ms.date: 05/20/2019
-author: sakash279
-ms.author: akshanka
-ms.reviewer: sngun
-
-# Azure Table storage overview
-[!INCLUDE[appliesto-table-api](includes/appliesto-table-api.md)]
-
-[!INCLUDE [storage-table-cosmos-db-tip-include](../../includes/storage-table-cosmos-db-tip-include.md)]
-
-Azure Table storage is a service that stores structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design. Because Table storage is schemaless, it's easy to adapt your data as the needs of your application evolve. Access to Table storage data is fast and cost-effective for many types of applications, and is typically lower in cost than traditional SQL for similar volumes of data.
-
-You can use Table storage to store flexible datasets like user data for web applications, address books, device information, or other types of metadata your service requires. You can store any number of entities in a table, and a storage account may contain any number of tables, up to the capacity limit of the storage account.
-
-[!INCLUDE [storage-table-concepts-include](../../includes/storage-table-concepts-include.md)]
-
-## Next steps
-
-* [Microsoft Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) is a free, standalone app from Microsoft that enables you to work visually with Azure Storage data on Windows, macOS, and Linux.
-
-* [Get started with Azure Cosmos DB Table API and Azure Table storage using the .NET SDK](./tutorial-develop-table-dotnet.md)
-
-* View the Table service reference documentation for complete details about available APIs:
-
- * [Storage Client Library for .NET reference](/dotnet/api/overview/azure/storage)
-
- * [REST API reference](/rest/api/storageservices/)
\ No newline at end of file
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-blob-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-blob-storage.md
@@ -429,6 +429,9 @@ The following properties are supported for Azure Blob storage under `storeSettin
] ```
+> [!NOTE]
+> The `$logs` container, which is automatically created when Storage Analytics is enabled for a storage account, isn't shown when a container listing operation is performed via the Data Factory UI. The file path must be provided directly for Data Factory to consume files from the `$logs` container.
+ ### Blob storage as a sink type [!INCLUDE [data-factory-v2-file-sink-formats](../../includes/data-factory-v2-file-sink-formats.md)]
@@ -747,4 +750,4 @@ To learn details about the properties, check [Delete activity](delete-activity.m
## Next steps
-For a list of data stores that the Copy activity in Data Factory supports as sources and sinks, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
\ No newline at end of file
+For a list of data stores that the Copy activity in Data Factory supports as sources and sinks, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mongodb.md
@@ -11,7 +11,7 @@ ms.service: data-factory
ms.workload: data-services ms.topic: conceptual ms.custom: seo-lt-2019; seo-dt-2019
-ms.date: 09/28/2020
+ms.date: 01/08/2021
--- # Copy data from MongoDB using Azure Data Factory
@@ -23,22 +23,26 @@ This article outlines how to use the Copy Activity in Azure Data Factory to copy
>[!IMPORTANT] >ADF release this new version of MongoDB connector which provides better native MongoDB support. If you are using the previous MongoDB connector in your solution which is supported as-is for backward compatibility, refer to [MongoDB connector (legacy)](connector-mongodb-legacy.md) article. + ## Supported capabilities You can copy data from MongoDB database to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
-Specifically, this MongoDB connector supports **versions up to 3.4**.
+Specifically, this MongoDB connector supports **versions up to 4.2**.
+ ## Prerequisites [!INCLUDE [data-factory-v2-integration-runtime-requirements](../../includes/data-factory-v2-integration-runtime-requirements.md)] + ## Getting started [!INCLUDE [data-factory-v2-connector-get-started](../../includes/data-factory-v2-connector-get-started.md)] The following sections provide details about properties that are used to define Data Factory entities specific to MongoDB connector. + ## Linked service properties The following properties are supported for MongoDB linked service:
@@ -97,6 +101,7 @@ For a full list of sections and properties that are available for defining datas
} ``` + ## Copy activity properties For a full list of sections and properties available for defining activities, see the [Pipelines](concepts-pipelines-activities.md) article. This section provides a list of properties supported by MongoDB source.
@@ -113,7 +118,7 @@ The following properties are supported in the copy activity **source** section:
| cursorMethods.sort | Specifies the order in which the query returns matching documents. Refer to [cursor.sort()](https://docs.mongodb.com/manual/reference/method/cursor.sort/#cursor.sort). | No | | cursorMethods.limit | Specifies the maximum number of documents the server returns. Refer to [cursor.limit()](https://docs.mongodb.com/manual/reference/method/cursor.limit/#cursor.limit). | No | | cursorMethods.skip | Specifies the number of documents to skip and from where MongoDB begins to return results. Refer to [cursor.skip()](https://docs.mongodb.com/manual/reference/method/cursor.skip/#cursor.skip). | No |
-| batchSize | Specifies the number of documents to return in each batch of the response from MongoDB instance. In most cases, modifying the batch size will not affect the user or the application. Cosmos DB limits each batch cannot exceed 40MB in size, which is the sum of the batchSize number of documents' size, so decrease this value if your document size being large. | No<br/>(the default is **100**) |
+| batchSize | Specifies the number of documents to return in each batch of the response from the MongoDB instance. In most cases, modifying the batch size will not affect the user or the application. Cosmos DB limits each batch to no more than 40 MB in size, which is the sum of the sizes of the batchSize number of documents, so decrease this value if your documents are large. | No<br/>(the default is **100**) |
>[!TIP] >ADF supports consuming BSON documents in **Strict mode**. Make sure your filter query is in Strict mode instead of Shell mode. More details can be found in the [MongoDB manual](https://docs.mongodb.com/manual/reference/mongodb-extended-json/https://docsupdatetracker.net/index.html).
@@ -156,13 +161,16 @@ The following properties are supported in the copy activity **source** section:
] ``` + ## Export JSON documents as-is You can use this MongoDB connector to export JSON documents as-is from a MongoDB collection to various file-based stores or to Azure Cosmos DB. To achieve such schema-agnostic copy, skip the "structure" (also called *schema*) section in dataset and schema mapping in copy activity. + ## Schema mapping To copy data from MongoDB to tabular sink, refer to [schema mapping](copy-activity-schema-and-type-mapping.md#schema-mapping). + ## Next steps For a list of data stores supported as sources and sinks by the copy activity in Azure Data Factory, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/apache-kafka-configurations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/apache-kafka-configurations.md
@@ -2,7 +2,7 @@
title: Recommended configurations for Apache Kafka clients - Azure Event Hubs description: This article provides recommended Apache Kafka configurations for clients interacting with Azure Event Hubs for Apache Kafka. ms.topic: reference
-ms.date: 07/20/2020
+ms.date: 01/07/2021
--- # Recommended configurations for Apache Kafka clients
@@ -74,7 +74,7 @@ Check the following table of common configuration-related error scenarios.
Symptoms | Problem | Solution ----|---|-----
-Offset commit failures because of rebalancing | Your consumer is waiting too long in between calls to poll() and the service is kicking the consumer out of the group. | You have several options: <ul><li>increase session timeout</li><li>decrease message batch size to speed up processing</li><li>improve processing parallelization to avoid blocking consumer.poll()</li></ul> Applying some combination of the three is likely wisest.
+Offset commit failures because of rebalancing | Your consumer is waiting too long in between calls to poll() and the service is kicking the consumer out of the group. | You have several options: <ul><li>Increase poll processing timeout (`max.poll.interval.ms`)</li><li>Decrease message batch size to speed up processing</li><li>Improve processing parallelization to avoid blocking consumer.poll()</li></ul> Applying some combination of the three is likely wisest.
Network exceptions at high produce throughput | Are you using Java client + default max.request.size? Your requests may be too large. | See Java configs above. ## Next steps
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-howto-set-global-reach-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-set-global-reach-cli.md
@@ -6,7 +6,7 @@ author: duongau
ms.service: expressroute ms.topic: how-to
-ms.date: 12/12/2018
+ms.date: 01/07/2021
ms.author: duau ms.custom: devx-track-azurecli
@@ -46,7 +46,7 @@ az account set --subscription <your subscription ID>
### Identify your ExpressRoute circuits for configuration
-You can enable ExpressRoute Global Reach between any two ExpressRoute circuits, as long as they're located in supported countries/regions and were created at different peering locations. If your subscription owns both circuits, you can choose either circuit to run the configuration as explained later in this article. If the two circuits are in different Azure subscriptions, you must have authorization from one Azure subscription and must pass in its authorization key when you run the configuration command in the other Azure subscription.
+You can enable ExpressRoute Global Reach between any two ExpressRoute circuits. The circuits must be in supported countries/regions and must have been created at different peering locations. If your subscription owns both circuits, you can select either circuit to run the configuration. However, if the two circuits are in different Azure subscriptions, you must create an authorization key from one of the circuits. Using the authorization key generated from the first circuit, you can enable Global Reach on the second circuit.
## Enable connectivity between your on-premises networks
@@ -56,7 +56,7 @@ When running the command to enable connectivity, note the following requirements
> /subscriptions/{your_subscription_id}/resourceGroups/{your_resource_group}/providers/Microsoft.Network/expressRouteCircuits/{your_circuit_name}
-* *address-prefix* must be a "/29" IPv4 subnet (for example, "10.0.0.0/29"). We use IP addresses in this subnet to establish connectivity between the two ExpressRoute circuits. You must not use addresses in this subnet in your Azure virtual networks or in your on-premises networks.
+* *address-prefix* must be a "/29" IPv4 subnet (for example, "10.0.0.0/29"). We use IP addresses in this subnet to establish connectivity between the two ExpressRoute circuits. You can't use addresses in this subnet in your Azure virtual networks or in your on-premises networks.
Run the following CLI command to connect two ExpressRoute circuits:
@@ -92,7 +92,7 @@ When this operation is complete, you'll have connectivity between your on-premis
## Enable connectivity between ExpressRoute circuits in different Azure subscriptions
-If the two circuits aren't in the same Azure subscription, you need authorization. In the following configuration, you generate authorization in circuit 2's subscription and pass the authorization key to circuit 1.
+If the two circuits aren't in the same Azure subscription, you need authorization. In the following configuration, you generate authorization in circuit 2's subscription. Then you pass the authorization key to circuit 1.
1. Generate an authorization key:
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-locations-providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
@@ -85,7 +85,7 @@ The following table shows connectivity locations and the service providers for e
| **Cape Town** | [Teraco CT1](https://www.teraco.co.za/data-centre-locations/cape-town/) | 3 | South Africa West | 10G | BCX, Internet Solutions - Cloud Connect, Liquid Telecom, Teraco | | **Chennai** | Tata Communications | 2 | South India | 10G | Global CloudXchange (GCX), SIFY, Tata Communications | | **Chennai2** | Airtel | 2 | South India | 10G | Airtel |
-| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | 1 | North Central US | 10G, 100G | Aryaka Networks, AT&T NetBond, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Equinix, InterCloud, Internet2, Level 3 Communications, Megaport, PacketFabric, PCCW Global Limited, Sprint, Telia Carrier, Verizon, Zayo |
+| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | 1 | North Central US | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Equinix, InterCloud, Internet2, Level 3 Communications, Megaport, PacketFabric, PCCW Global Limited, Sprint, Telia Carrier, Verizon, Zayo |
| **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | n/a | 10G | Interxion | | **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | 1 | n/a | 10G, 100G | Aryaka Networks, AT&T NetBond, Cologix, Equinix, Internet2, Level 3 Communications, Megaport, Neutrona Networks, Telmex Uninet, Telia Carrier, Transtelco, Verizon, Zayo| | **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | n/a | CoreSite, Megaport, Zayo |
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-troubleshooting-network-performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-troubleshooting-network-performance.md
@@ -6,16 +6,16 @@ author: duongau
ms.service: expressroute ms.topic: troubleshooting
-ms.date: 12/20/2017
+ms.date: 01/07/2021
ms.author: duau ms.custom: seodec18 --- # Troubleshooting network performance ## Overview
-Azure provides stable and fast ways to connect from your on-premises network to Azure. Methods like Site-to-Site VPN and ExpressRoute are successfully used by customers large and small to run their businesses in Azure. But what happens when performance doesn't meet your expectation or previous experience? This document can help standardize the way you test and baseline your specific environment.
+Azure provides a stable and fast way to connect from your on-premises network to Azure. Methods like Site-to-Site VPN and ExpressRoute are successfully used by customers large and small to run their businesses in Azure. But what happens when performance doesn't meet your expectation or previous experience? This article can help standardize the way you test and baseline your specific environment.
-This document shows how you can easily and consistently test network latency and bandwidth between two hosts. This document also provides some advice on ways to look at the Azure network and help to isolate problem points. The PowerShell script and tools discussed require two hosts on the network (at either end of the link being tested). One host must be a Windows Server or Desktop, the other can be either Windows or Linux.
+You'll learn how to easily and consistently test network latency and bandwidth between two hosts. You'll also get some advice on ways to look at the Azure network to help isolate problem points. The PowerShell script and tools discussed require two hosts on the network (at either end of the link being tested). One host must be a Windows Server or Desktop; the other can be either Windows or Linux.
>[!NOTE] >The approach to troubleshooting, the tools, and methods used are personal preferences. This document describes the approach and tools I often take. Your approach will probably differ, there's nothing wrong with different approaches to problem solving. However, if you don't have an established approach, this document can get you started on the path to building your own methods, tools, and preferences to troubleshooting network issues.
@@ -33,26 +33,28 @@ At the highest level, I describe three major network routing domains;
- the Corporate Network (peach cloud on the left) Looking at the diagram from right to left, let's discuss briefly each component:
- - **Virtual Machine** - The server may have multiple NICs, ensure any static routes, default routes, and Operating System settings are sending and receiving traffic the way you think it is. Also, each VM SKU has a bandwidth restriction. If you're using a smaller VM SKU, your traffic is limited by the bandwidth available to the NIC. I usually use a DS5v2 for testing (and then delete once done with testing to save money) to ensure adequate bandwidth at the VM.
+ - **Virtual Machine** - The server may have multiple NICs. Ensure any static routes, default routes, and Operating System settings are sending and receiving traffic the way you think it is. Also, each VM SKU has a bandwidth restriction. If you're using a smaller VM SKU, your traffic is limited by the bandwidth available to the NIC. I usually use a DS5v2 for testing (and then delete once done with testing to save money) to ensure adequate bandwidth at the VM.
- **NIC** - Ensure you know the private IP that is assigned to the NIC in question. - **NIC NSG** - There may be specific NSGs applied at the NIC level, ensure the NSG rule-set is appropriate for the traffic you're trying to pass. For example, ensure ports 5201 for iPerf, 3389 for RDP, or 22 for SSH are open to allow test traffic to pass. - **VNet Subnet** - The NIC is assigned to a specific subnet, ensure you know which one and the rules associated with that subnet.
- - **Subnet NSG** - Just like the NIC, NSGs can be applied at the subnet as well. Ensure the NSG rule-set is appropriate for the traffic you're trying to pass. (for traffic inbound to the NIC the subnet NSG applies first, then the NIC NSG, conversely for traffic outbound from the VM the NIC NSG applies first then the Subnet NSG comes into play).
- - **Subnet UDR** - User Defined Routes can direct traffic to an intermediate hop (like a firewall or load-balancer). Ensure you know if there is a UDR in place for your traffic and if so where it goes and what that next hop will do to your traffic. (for example, a firewall could pass some traffic and deny other traffic between the same two hosts).
- - **Gateway subnet / NSG / UDR** - Just like the VM subnet, the gateway subnet can have NSGs and UDRs. Make sure you know if they are there and what effects they have on your traffic.
- - **VNet Gateway (ExpressRoute)** - Once peering (ExpressRoute) or VPN is enabled, there aren't many settings that can affect how or if traffic routes. If you have multiple ExpressRoute circuits or VPN tunnels connected to the same VNet Gateway, you should be aware of the connection weight settings as this setting affects connection preference and affects the path your traffic takes.
- - **Route Filter** (Not shown) - A route filter only applies to Microsoft Peering on ExpressRoute, but is critical to check if you're not seeing the routes you expect on Microsoft Peering.
+ - **Subnet NSG** - Just like the NIC, NSGs can be applied at the subnet as well. Ensure the NSG rule-set is appropriate for the traffic you're trying to pass. (For traffic inbound to the NIC the subnet NSG applies first, then the NIC NSG. When traffic is going outbound from the VM, the NIC NSG applies first then the Subnet NSG is applied).
+ - **Subnet UDR** - User-Defined Routes can direct traffic to an intermediate hop (like a firewall or load-balancer). Ensure you know if there's a UDR in place for your traffic. If so, understand where it goes and what that next hop will do to your traffic. For example, a firewall could pass some traffic and deny other traffic between the same two hosts.
+ - **Gateway subnet / NSG / UDR** - Just like the VM subnet, the gateway subnet can have NSGs and UDRs. Make sure you know if they're there and what effects they have on your traffic.
+ - **VNet Gateway (ExpressRoute)** - Once peering (ExpressRoute) or VPN is enabled, there aren't many settings that can affect how or if traffic routes. If you have a VNet Gateway connected to multiple ExpressRoute circuits or VPN tunnels, you should be aware of the connection weight settings. The connection weight affects connection preference and determines the path your traffic takes.
+ - **Route Filter** (Not shown) - A route filter is necessary when using Microsoft Peering through ExpressRoute. If you're not receiving any routes, check if the route filter is configured and applied correctly to the circuit.
-At this point, you're on the WAN portion of the link. This routing domain can be your service provider, your corporate WAN, or the Internet. Many hops, technologies, and companies involved with these links can make it somewhat difficult to troubleshoot. Often, you work to rule out both Azure and your Corporate Networks first before jumping into this collection of companies and hops.
+At this point, you're on the WAN portion of the link. This routing domain can be your service provider, your corporate WAN, or the Internet. There are many hops, devices, and companies involved with these links, which could make it difficult to troubleshoot. You need to first rule out both Azure and your corporate networks before you can investigate the hops in between.
In the preceding diagram, on the far left is your corporate network. Depending on the size of your company, this routing domain can be a few network devices between you and the WAN or multiple layers of devices in a campus/enterprise network.
-Given the complexities of these three different high-level network environments, it's often optimal to start at the edges and try to show where performance is good, and where it degrades. This approach can help identify the problem routing domain of the three and then focus your troubleshooting on that specific environment.
+Given the complexity of these three different high-level network environments, it's often optimal to start at the edges and try to show where performance is good and where it degrades. This approach can help identify which of the three routing domains contains the problem. Then you can focus your troubleshooting on that specific environment.
## Tools
-Most network issues can be analyzed and isolated using basic tools like ping and traceroute. It's rare that you need to go as deep as a packet analysis like Wireshark. To help with troubleshooting, the Azure Connectivity Toolkit (AzureCT) was developed to put some of these tools in an easy package. For performance testing, I like to use iPerf and PSPing. iPerf is a commonly used tool and runs on most operating systems. iPerf is good for basic performances tests and is fairly easy to use. PSPing is a ping tool developed by SysInternals. PSPing is an easy way to perform ICMP and TCP pings in one also easy to use command. Both of these tools are lightweight and are "installed" simply by coping the files to a directory on the host.
+Most network issues can be analyzed and isolated using basic tools like ping and traceroute. It's rare that you need to go as deep as packet analysis using tools like Wireshark.
-I've wrapped all of these tools and methods into a PowerShell module (AzureCT) that you can install and use.
+To help with troubleshooting, the Azure Connectivity Toolkit (AzureCT) was developed to put some of these tools in an easy package. For performance testing, tools like iPerf and PSPing can provide you with information about your network. iPerf is a commonly used tool for basic performance tests and is fairly easy to use. PSPing is a ping tool developed by SysInternals. PSPing can do both ICMP and TCP pings to reach a remote host. Both of these tools are lightweight and are "installed" simply by copying the files to a directory on the host.
+
+These tools and methods are wrapped into a PowerShell module (AzureCT) that you can install and use.
### AzureCT - the Azure Connectivity Toolkit The AzureCT PowerShell module has two components: [Availability Testing][Availability Doc] and [Performance Testing][Performance Doc]. This document is only concerned with performance testing, so let's focus on the two Link Performance commands in this PowerShell module.
@@ -76,7 +78,7 @@ There are three basic steps to use this toolkit for Performance testing. 1) Inst
3. Run the performance test
- First, on the remote host you must install and run iPerf in server mode. Also ensure the remote host is listening on either 3389 (RDP for Windows) or 22 (SSH for Linux) and allowing traffic on port 5201 for iPerf. If the remote host is windows, install the AzureCT and run the Install-LinkPerformance command to set up iPerf and the firewall rules needed to start iPerf in server mode successfully.
+ First, on the remote host you must install and run iPerf in server mode. Also ensure the remote host is listening on either 3389 (RDP for Windows) or 22 (SSH for Linux) and allowing traffic on port 5201 for iPerf. If the remote host is Windows, install the AzureCT and run the Install-LinkPerformance command. The command will set up iPerf and the firewall rules needed to start iPerf in server mode successfully.
Once the remote machine is ready, open PowerShell on the local machine and start the test: ```powershell
@@ -94,25 +96,25 @@ There are three basic steps to use this toolkit for Performance testing. 1) Inst
The detailed results of all the iPerf and PSPing tests are in individual text files in the AzureCT tools directory at "C:\ACTTools." ## Troubleshooting
-If the performance test is not giving you expected results, figuring out why should be a progressive step-by-step process. Given the number of components in the path, a systematic approach generally provides a faster path to resolution than jumping around and potentially needlessly doing the same testing multiple times.
+If the performance test isn't giving you the expected results, figuring out why should be a progressive, step-by-step process. Given the number of components in the path, a systematic approach provides a faster path to resolution than jumping around. By troubleshooting systematically, you can avoid repeating the same unnecessary testing multiple times.
>[!NOTE] >The scenario here is a performance issue, not a connectivity issue. The steps would be different if traffic wasn't passing at all. > >
-First, challenge your assumptions. Is your expectation reasonable? For instance, if you have a 1-Gbps ExpressRoute circuit and 100 ms of latency it's unreasonable to expect the full 1 Gbps of traffic given the performance characteristics of TCP over high latency links. See the [References section](#references) for more on performance assumptions.
+First, challenge your assumptions. Is your expectation reasonable? For instance, if you have a 1-Gbps ExpressRoute circuit and 100 ms of latency, it's not reasonable to expect the full 1 Gbps of traffic given the performance characteristics of TCP over high-latency links. See the [References section](#references) for more on performance assumptions.
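One way to sanity-check that expectation: a single TCP stream is bounded by roughly the receive window divided by the round-trip time. A quick back-of-the-envelope calculation (the window sizes below are illustrative, not measured values):

```python
def tcp_throughput_ceiling_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Rough single-stream TCP ceiling: window size divided by round-trip time."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

# A default 64 KB receive window over a 100 ms path:
print(tcp_throughput_ceiling_mbps(64 * 1024, 100))    # ~5.2 Mbps
# Window scaling to 1 MB helps, but a single stream still won't reach 1 Gbps:
print(tcp_throughput_ceiling_mbps(1024 * 1024, 100))  # ~83.9 Mbps
```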
-Next, I recommend starting at the edges between routing domains and try to isolate the problem to a single major routing domain; the Corporate Network, the WAN, or the Azure Network. People often blame the "black box" in the path, while blaming the black box is easy to do, it may significantly delay resolution especially if the problem is actually in an area that you have the ability to make changes. Make sure you do your due diligence before handing off to your service provider or ISP.
+Next, I recommend starting at the edges between routing domains and trying to isolate the problem to a single major routing domain. You can start at the Corporate Network, the WAN, or the Azure Network. People often blame the "black box" in the path. While blaming the black box is easy to do, it may significantly delay resolution, especially if the problem is in an area where you can make changes to fix the issue. Make sure you do your due diligence before handing off to your service provider or ISP.
-Once you've identified the major routing domain that appears to contain the problem, you should create a diagram of the area in question. Either on a whiteboard, notepad, or Visio as a diagram provides a concrete "battle map" to allow a methodical approach to further isolate the problem. You can plan testing points, and update the map as you clear areas or dig deeper as the testing progresses.
+Once you've identified the major routing domain that appears to contain the problem, you should create a diagram of the area in question. By drawing a diagram on a whiteboard, in a notepad, or in Visio, you can methodically work through and isolate the problem. You can plan testing points and update the map as you clear areas or dig deeper as the testing progresses.
Now that you have a diagram, start to divide the network into segments and narrow the problem down. Find out where it works and where it doesn't. Keep moving your testing points to isolate down to the offending component. Also, don't forget to look at other layers of the OSI model. It's easy to focus on the network and layers 1 - 3 (Physical, Data, and Network layers) but the problems can also be up at Layer 7 in the application layer. Keep an open mind and verify assumptions. ## Advanced ExpressRoute troubleshooting
-If you're not sure where the edge of the cloud actually is, isolating the Azure components can be a challenge. When ExpressRoute is used, the edge is a network component called the Microsoft Enterprise Edge (MSEE). **When using ExpressRoute**, the MSEE is the first point of contact into Microsoft's network, and the last hop leaving the Microsoft network. When you create a connection object between your VNet gateway and the ExpressRoute circuit, you're actually making a connection to the MSEE. Recognizing the MSEE as the first or last hop (depending on which direction you're going) is crucial to isolating Azure Network problems to either prove the issue is in Azure or further downstream in the WAN or the Corporate Network.
+If you're not sure where the edge of the cloud actually is, isolating the Azure components can be a challenge. When ExpressRoute is used, the edge is a network component called the Microsoft Enterprise Edge (MSEE). **When using ExpressRoute**, the MSEE is the first point of contact into Microsoft's network, and the last hop when leaving the Microsoft network. When you create a connection object between your VNet gateway and the ExpressRoute circuit, you're actually making a connection to the MSEE. Recognizing the MSEE as the first or last hop, depending on which direction the traffic is going, is crucial to isolating an Azure networking problem. Knowing which side of the MSEE the problem is on proves whether the issue is in Azure or further downstream in the WAN or the corporate network.
![2][2]
@@ -121,12 +123,12 @@ If you're not sure where the edge of the cloud actually is, isolating the Azure
> >
-If two VNets (VNets A and B in the diagram) are connected to the **same** ExpressRoute circuit, you can perform a series of tests to isolate the problem in Azure (or prove it's not in Azure)
+If two VNets are connected to the **same** ExpressRoute circuit, you can do a series of tests to isolate the problem in Azure.
### Test plan 1. Run the Get-LinkPerformance test between VM1 and VM2. This test provides insight to if the problem is local or not. If this test produces acceptable latency and bandwidth results, you can mark the local VNet network as good. 2. Assuming the local VNet traffic is good, run the Get-LinkPerformance test between VM1 and VM3. This test exercises the connection through the Microsoft network down to the MSEE and back into Azure. If this test produces acceptable latency and bandwidth results, you can mark the Azure network as good.
-3. If Azure is ruled out, you can perform a similar sequence of tests on your Corporate Network. If that also tests well, it's time to work with your service provider or ISP to diagnose your WAN connection. Example: Run this test between two branch offices, or between your desk and a data center server. Depending on what you're testing, find endpoints (servers, PCs, etc.) that can exercise that path.
+3. If Azure is ruled out, you can do a similar sequence of tests on your corporate network. If that also tests well, it's time to work with your service provider or ISP to diagnose your WAN connection. Example: Run this test between two branch offices, or between your desk and a data center server. Depending on what you're testing, find endpoints such as servers and client PCs that can exercise that path.
>[!IMPORTANT] > It's critical that for each test you mark the time of day you run the test and record the results in a common location (I like OneNote or Excel). Each test run should have identical output so you can compare the resultant data across test runs and not have "holes" in the data. Consistency across multiple tests is the primary reason I use the AzureCT for troubleshooting. The magic isn't in the exact load scenarios I run; it's the fact that I get *consistent test and data output* from each and every test. Recording the time and having consistent data every single time is especially helpful if you later find that the issue is sporadic. Be diligent with your data collection up front and you'll avoid hours of retesting the same scenarios (I learned this the hard way many years ago).
@@ -134,11 +136,11 @@ If two VNets (VNets A and B in the diagram) are connected to the **same** Expres
> ## The problem is isolated, now what?
-The more you can isolate the problem the easier it is to fix, however often you reach the point where you can't go deeper or further with your troubleshooting. Example: you see the link across your service provider taking hops through Europe, but your expected path is all in Asia. This point is when you should reach out for help. Who you ask is dependent on the routing domain you isolated the issue to, or even better if you are able to narrow it down to a specific component.
+The more you isolate the problem, the faster the solution can be found. Sometimes you reach a point where you can't go any further with your troubleshooting. For example, you see the link across your service provider taking hops through Europe, but you expect the path to remain all in Asia. At this point, you should engage someone for help. Who you ask depends on the routing domain you isolated the issue to. If you can narrow it down to a specific component, that's even better.
-For corporate network issues, your internal IT department or service provider supporting your network (which may be the hardware manufacturer) may be able to help with device configuration or hardware repair.
+For corporate network issues, your internal IT department or service provider can help with device configuration or hardware repair.
-For the WAN, sharing your testing results with your Service Provider or ISP may help get them started and avoid covering some of the same ground you've tested already. However, don't be offended if they want to verify your results themselves. "Trust but verify" is a good motto when troubleshooting based on other people's reported results.
+For the WAN, sharing your testing results with your Service Provider or ISP will help get them started. Doing so will also avoid duplicating the same work that you've already done. Don't be offended if they want to verify your results themselves. "Trust but verify" is a good motto when troubleshooting based on other people's reported results.
With Azure, once you isolate the issue in as much detail as you're able, it's time to review the [Azure Network Documentation][Network Docs] and then if still needed [open a support ticket][Ticket Link].
@@ -156,7 +158,7 @@ Test setup:
- A 10Gbps Premium ExpressRoute circuit in the location identified with Private Peering enabled. - An Azure VNet with an UltraPerformance gateway in the specified region. - A DS5v2 VM running Windows Server 2016 on the VNet. The VM was non-domain joined, built from the default Azure image (no optimization or customization) with AzureCT installed.
- - All testing was using the AzureCT Get-LinkPerformance command with a 5-minute load test for each of the six test runs. For example:
+ - All tests use the AzureCT Get-LinkPerformance command with a 5-minute load test for each of the six test runs. For example:
```powershell Get-LinkPerformance -RemoteHost 10.0.0.1 -TestSeconds 300
@@ -190,7 +192,7 @@ Test setup:
| Seattle | Brazil South * | 10,930 km | 189 ms | 8.2 Mbits/sec | 699 Mbits/sec | | Seattle | South India | 12,918 km | 202 ms | 7.7 Mbits/sec | 634 Mbits/sec |
-\* The latency to Brazil is a good example where the straight-line distance significantly differs from the fiber run distance. I would expect that the latency would be in the neighborhood of 160 ms, but is actually 189 ms. This difference against my expectation could indicate a network issue somewhere, but most likely that the fiber run does not go to Brazil in a straight line and has an extra 1,000 km or so of travel to get to Brazil from Seattle.
+\* The latency to Brazil is a good example where the straight-line distance significantly differs from the fiber run distance. The expected latency would be in the neighborhood of 160 ms, but it's actually 189 ms. The difference in latency might seem to indicate a network issue somewhere, but the reality is that the fiber line doesn't run to Brazil in a straight line, so you should expect an extra 1,000 km or so of travel to get to Brazil from Seattle.
## Next steps 1. Download the Azure Connectivity Toolkit from GitHub at [https://aka.ms/AzCT][ACT]
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/developers-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/developers-guide.md
@@ -35,7 +35,7 @@ Access to management layer is controlled by [Azure role-based access control](..
| Azure CLI | PowerShell | REST API | Resource Manager | .NET | Python | Java | JavaScript | |--|--|--|--|--|--|--|--|
-|[Reference](/cli/azure/keyvault)<br>[Quickstart](quick-create-cli.md)|[Reference](/powershell/module/az.keyvault)<br>[Quickstart](quick-create-powershell.md)|[Reference](/rest/api/keyvault/)|[Reference](/azure/templates/microsoft.keyvault/vaults)|[Reference](/dotnet/api/microsoft.azure.management.keyvault)|[Reference](/python/api/azure-mgmt-keyvault/azure.mgmt.keyvault)|[Reference](/java/api/com.microsoft.azure.management.keyvault)|[Reference](/javascript/api/@azure/arm-keyvault)|
+|[Reference](/cli/azure/keyvault)<br>[Quickstart](quick-create-cli.md)|[Reference](/powershell/module/az.keyvault)<br>[Quickstart](quick-create-powershell.md)|[Reference](/rest/api/keyvault/)|[Reference](/azure/templates/microsoft.keyvault/vaults)|[Reference](/dotnet/api/microsoft.azure.management.keyvault)<br>[Quickstart](https://docs.microsoft.com/azure/key-vault/general/vault-create-template)|[Reference](/python/api/azure-mgmt-keyvault/azure.mgmt.keyvault)|[Reference](/java/api/com.microsoft.azure.management.keyvault)|[Reference](/javascript/api/@azure/arm-keyvault)|
See [Client Libraries](client-libraries.md) for installation packages and source code.
@@ -62,10 +62,14 @@ The above authentication scenarios are supported by the **Azure Identity client library**. For more information about the Azure Identity client library, see: ### Azure Identity client libraries+ | .NET | Python | Java | JavaScript | |--|--|--|--| |[Azure Identity SDK .NET](/dotnet/api/overview/azure/identity-readme)|[Azure Identity SDK Python](/python/api/overview/azure/identity-readme)|[Azure Identity SDK Java](/java/api/overview/azure/identity-readme)|[Azure Identity SDK JavaScript](/javascript/api/overview/azure/identity-readme)|
For more information about Azure Identity client libarary, see: ### Azure Identity client libraries+ | .NET | Python | Java | JavaScript | |--|--|--|--| |[Azure Identity SDK .NET](/dotnet/api/overview/azure/identity-readme)|[Azure Identity SDK Python](/python/api/overview/azure/identity-readme)|[Azure Identity SDK Java](/java/api/overview/azure/identity-readme)|[Azure Identity SDK JavaScript](/javascript/api/overview/azure/identity-readme)|
+>[!Note]
+> The [App Authentication library](https://docs.microsoft.com/dotnet/api/overview/azure/service-to-service-authentication), which was recommended for Key Vault .NET SDK version 3, is now deprecated. Follow the [AppAuthentication to Azure.Identity Migration Guidance](https://docs.microsoft.com/dotnet/api/overview/azure/app-auth-migration) to migrate to Key Vault .NET SDK version 4.
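For reference, a minimal sketch of the Azure Identity pattern the libraries in the table share, shown here in Python; the vault URL and secret name are placeholders, and the same chained-credential approach is what Azure.Identity offers in .NET.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential chains environment, managed identity, and developer
# credentials, so the same code works locally and when deployed.
credential = DefaultAzureCredential()
client = SecretClient(vault_url="https://<your-vault>.vault.azure.net/", credential=credential)

secret = client.get_secret("example-secret")  # placeholder secret name
print(secret.name)
```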
+ For tutorials on how to authenticate to Key Vault in applications, see: - [Authenticate to Key Vault in application hosted in VM in .NET](./tutorial-net-virtual-machine.md) - [Authenticate to Key Vault in application hosted in VM in Python](./tutorial-python-virtual-machine.md)
@@ -80,14 +84,14 @@ Access to keys, secrets, and certificates is controlled by data plane. Data plan
| Azure CLI | PowerShell | REST API | Resource Manager | .NET | Python | Java | JavaScript | |--|--|--|--|--|--|--|--|
-|[Reference](/cli/azure/keyvault/key)<br>[Quickstart](../keys/quick-create-cli.md)|[Reference](/powershell/module/az.keyvault/)<br>[Quickstart](../keys/quick-create-powershell.md)|[Reference](/rest/api/keyvault/#key-operations)|N/A|[Reference](/dotnet/api/azure.security.keyvault.keys)|[Reference](/python/api/azure-mgmt-keyvault/azure.mgmt.keyvault)<br>[Quickstart](../keys/quick-create-python.md)|[Reference](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-security-keyvault-keys/4.2.0/https://docsupdatetracker.net/index.html)|[Reference](/javascript/api/@azure/keyvault-keys/)|
+|[Reference](/cli/azure/keyvault/key)<br>[Quickstart](../keys/quick-create-cli.md)|[Reference](/powershell/module/az.keyvault/)<br>[Quickstart](../keys/quick-create-powershell.md)|[Reference](/rest/api/keyvault/#key-operations)|[Reference](https://docs.microsoft.com/azure/templates/microsoft.keyvault/vaults/keys)<br>[Quickstart](../keys/quick-create-template.md)|[Reference](/dotnet/api/azure.security.keyvault.keys)<br>[Quickstart](../keys/quick-create-net.md)|[Reference](/python/api/azure-mgmt-keyvault/azure.mgmt.keyvault)<br>[Quickstart](../keys/quick-create-python.md)|[Reference](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-security-keyvault-keys/4.2.0/https://docsupdatetracker.net/index.html)<br>[Quickstart](../keys/quick-create-java.md)|[Reference](/javascript/api/@azure/keyvault-keys/)<br>[Quickstart](../keys/quick-create-node.md)|
**Certificates APIs and SDKs** | Azure CLI | PowerShell | REST API | Resource Manager | .NET | Python | Java | JavaScript | |--|--|--|--|--|--|--|--|
-|[Reference](/cli/azure/keyvault/certificate)<br>[Quickstart](../certificates/quick-create-cli.md)|[Reference](/powershell/module/az.keyvault)<br>[Quickstart](../certificates/quick-create-powershell.md)|[Reference](/rest/api/keyvault/#certificate-operations)|N/A|[Reference](/dotnet/api/azure.security.keyvault.certificates)|[Reference](/python/api/overview/azure/keyvault-certificates-readme)<br>[Quickstart](../certificates/quick-create-python.md)|[Reference](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-security-keyvault-certificates/4.1.0/https://docsupdatetracker.net/index.html)|[Reference](/javascript/api/@azure/keyvault-certificates/)|
+|[Reference](/cli/azure/keyvault/certificate)<br>[Quickstart](../certificates/quick-create-cli.md)|[Reference](/powershell/module/az.keyvault)<br>[Quickstart](../certificates/quick-create-powershell.md)|[Reference](/rest/api/keyvault/#certificate-operations)|N/A|[Reference](/dotnet/api/azure.security.keyvault.certificates)<br>[Quickstart](../certificates/quick-create-net.md)|[Reference](/python/api/overview/azure/keyvault-certificates-readme)<br>[Quickstart](../certificates/quick-create-python.md)|[Reference](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-security-keyvault-certificates/4.1.0/https://docsupdatetracker.net/index.html)<br>[Quickstart](../certificates/quick-create-java.md)|[Reference](/javascript/api/@azure/keyvault-certificates/)<br>[Quickstart](../certificates/quick-create-node.md)|
**Secrets APIs and SDKs**
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-troubleshoot-backend-traffic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-troubleshoot-backend-traffic.md new file mode 100644
@@ -0,0 +1,84 @@
+---
+title: Troubleshoot Azure Load Balancer
+description: Learn how to troubleshoot known issues with Azure Load Balancer.
+services: load-balancer
+documentationcenter: na
+author: asudbring
+manager: dcscontentpm
+ms.custom: seodoc18
+ms.service: load-balancer
+ms.devlang: na
+ms.topic: troubleshooting
+ms.tgt_pltfrm: na
+ms.workload: infrastructure-services
+ms.date: 01/28/2020
+ms.author: allensu
+---
+
+# Troubleshoot Azure Load Balancer backend traffic responses
+
+This page provides troubleshooting information for common Azure Load Balancer backend traffic questions.
+
+## VMs behind Load Balancer are not responding to traffic on the configured data port
+
+If a backend pool VM is listed as healthy and responds to the health probes, but is still not participating in load balancing or is not responding to the data traffic, it may be due to any of the following reasons:
+* Load Balancer Backend pool VM is not listening on the data port
+* Network security group is blocking the port on the Load Balancer backend pool VM 
+* Accessing the Load Balancer from the same VM and NIC
+* Accessing the Internet Load Balancer frontend from the participating Load Balancer backend pool VM
+
+## Cause 1: Load Balancer backend pool VM is not listening on the data port
+
+If a VM does not respond to the data traffic, it may be because either the target port is not open on the participating VM, or the VM is not listening on that port.
+
+**Validation and resolution**
+
+1. Log in to the backend VM.
+2. Open a command prompt and run the following command to validate there is an application listening on the data port: 
+ netstat -an
+3. If the port is not listed with the state **LISTENING**, configure the proper listener port (see the sketch after this list).
+4. If the port is marked as **LISTENING**, then check the target application on that port for any possible issues.
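The following PowerShell sketch shows one way to run this check on a Windows backend VM. The port number is only an example; substitute the data port configured on your load-balancing rule.

```powershell
# Check whether any process is listening on the data port (80 is an example value).
$dataPort = 80

Get-NetTCPConnection -State Listen -LocalPort $dataPort -ErrorAction SilentlyContinue |
    ForEach-Object {
        # Map the owning process ID to a process name for easier troubleshooting.
        $proc = Get-Process -Id $_.OwningProcess
        [pscustomobject]@{
            LocalAddress = $_.LocalAddress
            LocalPort    = $_.LocalPort
            ProcessName  = $proc.ProcessName
            ProcessId    = $proc.Id
        }
    }
```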
+
+## Cause 2: Network security group is blocking the port on the Load Balancer backend pool VM 
+
+If one or more network security groups configured on the subnet or on the VM are blocking the source IP or port, the VM is unable to respond.
+
+For the public load balancer, the IP addresses of the Internet clients are used for communication between the clients and the load balancer backend VMs. Make sure the IP addresses of the clients are allowed in the backend VM's network security group.
+
+1. List the network security groups configured on the backend VM (see the sketch after this list). For more information, see [Manage network security groups](../virtual-network/manage-network-security-group.md).
+1. From the list of network security groups, check if:
+ - the incoming or outgoing traffic on the data port is being blocked.
+ - a **Deny All** network security group rule on the NIC of the VM or the subnet has a higher priority than the default rule that allows Load Balancer probes and traffic (network security groups must allow the Load Balancer IP of 168.63.129.16, which is the source of the health probes)
+1. If any of the rules are blocking the traffic, remove and reconfigure those rules to allow the data traffic. 
+1. Test if the VM has now started to respond to the data traffic.
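If you prefer to check this with Azure PowerShell rather than in the portal, the sketch below lists the effective security rules applied to the backend VM's NIC. The resource group and NIC names are placeholders, and the VM must be running for effective rules to be returned.

```powershell
# Placeholder names - replace with your own resource group and NIC name.
$rgName  = "myResourceGroup"
$nicName = "myBackendVmNic"

# Effective rules combine the NIC-level and subnet-level network security groups.
$effective = Get-AzEffectiveNetworkSecurityGroup -NetworkInterfaceName $nicName -ResourceGroupName $rgName

$effective.EffectiveSecurityRules |
    Sort-Object Priority |
    Select-Object Name, Direction, Access, Priority, DestinationPortRange
```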
+
+## Cause 3: Accessing the Load Balancer from the same VM and Network interface
+
+If your application hosted in the backend VM of a Load Balancer is trying to access another application hosted in the same backend VM over the same network interface, this is an unsupported scenario and the connection will fail.
+
+**Resolution**
+You can resolve this issue via one of the following methods:
+* Configure separate backend pool VMs per application.
+* Configure the application in dual-NIC VMs so that each application uses its own network interface and IP address.
+
+## Cause 4: Accessing the internal Load Balancer frontend from the participating Load Balancer backend pool VM
+
+If an internal Load Balancer is configured inside a VNet, and one of the participating backend VMs is trying to access the internal Load Balancer frontend, failures can occur when the flow is mapped to the originating VM. This scenario is not supported.
+
+**Resolution**
+There are several ways to unblock this scenario, including using a proxy. Evaluate Application Gateway or other third-party proxies (for example, NGINX or HAProxy). For more information about Application Gateway, see [Overview of Application Gateway](../application-gateway/overview.md).
+
+**Details**
+Internal Load Balancers don't translate outbound originated connections to the front end of an internal Load Balancer because both are in private IP address space. Public Load Balancers provide [outbound connections](load-balancer-outbound-connections.md) from private IP addresses inside the virtual network to public IP addresses. For internal Load Balancers, this approach avoids potential SNAT port exhaustion inside a unique internal IP address space, where translation isn't required.
+
+A side effect is that if an outbound flow from a VM in the back-end pool attempts a flow to front end of the internal Load Balancer in its pool _and_ is mapped back to itself, the two legs of the flow don't match. Because they don't match, the flow fails. The flow succeeds if the flow didn't map back to the same VM in the back-end pool that created the flow to the front end.
+
+When the flow maps back to itself, the outbound flow appears to originate from the VM to the front end and the corresponding inbound flow appears to originate from the VM to itself. From the guest operating system's point of view, the inbound and outbound parts of the same flow don't match inside the virtual machine. The TCP stack won't recognize these halves of the same flow as being part of the same flow. The source and destination don't match. When the flow maps to any other VM in the back-end pool, the halves of the flow do match and the VM can respond to the flow.
+
+The symptom for this scenario is intermittent connection timeouts when the flow returns to the same backend that originated the flow. Common workarounds include insertion of a proxy layer behind the internal Load Balancer and using Direct Server Return (DSR) style rules. For more information, see [Multiple Frontends for Azure Load Balancer](load-balancer-multivip-overview.md).
+
+You can combine an internal Load Balancer with any third-party proxy or use internal [Application Gateway](../application-gateway/overview.md) for proxy scenarios with HTTP/HTTPS. While you could use a public Load Balancer to mitigate this issue, the resulting scenario is prone to [SNAT exhaustion](load-balancer-outbound-connections.md). Avoid this second approach unless carefully managed.
+
+## Next steps
+
+If the preceding steps do not resolve the issue, open a [support ticket](https://azure.microsoft.com/support/options/).
\ No newline at end of file
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-troubleshoot-health-probe-status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-troubleshoot-health-probe-status.md new file mode 100644
@@ -0,0 +1,76 @@
+---
+title: Troubleshoot Azure Load Balancer health probe status
+description: Learn how to troubleshoot known issues with Azure Load Balancer health probe status.
+services: load-balancer
+documentationcenter: na
+author: asudbring
+manager: dcscontentpm
+ms.custom: seodoc18
+ms.service: load-balancer
+ms.devlang: na
+ms.topic: troubleshooting
+ms.tgt_pltfrm: na
+ms.workload: infrastructure-services
+ms.date: 12/02/2020
+ms.author: allensu
+---
+
+# Troubleshoot Azure Load Balancer health probe status
+
+This page provides troubleshooting information for common Azure Load Balancer health probe questions.
+
+## Symptom: VMs behind the Load Balancer are not responding to health probes
+For the backend servers to participate in the load balancer set, they must pass the probe check. For more information about health probes, see [Understanding Load Balancer Probes](load-balancer-custom-probe-overview.md). 
+
+The Load Balancer backend pool VMs may not be responding to the probes due to any of the following reasons:
+- Load Balancer backend pool VM is unhealthy
+- Load Balancer backend pool VM is not listening on the probe port
+- Firewall, or a network security group is blocking the port on the Load Balancer backend pool VMs
+- Other misconfigurations in Load Balancer
+
+### Cause 1: Load Balancer backend pool VM is unhealthy
+
+**Validation and resolution**
+
+To resolve this issue, log in to the participating VMs and check whether the VM state is healthy and whether the VM can respond to **PsPing** or **TCPing** from another VM in the pool. If the VM is unhealthy or unable to respond to the probe, you must rectify the issue and return the VM to a healthy state before it can participate in load balancing.
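If PsPing or TCPing isn't available on the test VM, the built-in `Test-NetConnection` cmdlet is an alternative. The IP address and port below are examples only.

```powershell
# Run from another VM in the same virtual network. Replace the IP and port with
# your backend VM's private IP address and the configured probe port.
$result = Test-NetConnection -ComputerName 10.0.0.4 -Port 3389

if ($result.TcpTestSucceeded) {
    "Probe port is reachable - review the probe configuration instead."
} else {
    "Probe port is not reachable - investigate the VM, listener, firewall, and NSGs."
}
```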
+
+### Cause 2: Load Balancer backend pool VM is not listening on the probe port
+If the VM is healthy, but is not responding to the probe, then one possible reason could be that the probe port is not open on the participating VM, or the VM is not listening on that port.
+
+**Validation and resolution**
+
+1. Log in to the backend VM.
+2. Open a command prompt and run the following command to validate that there is an application listening on the probe port (a PowerShell sketch for filtering the output follows this list):
+ netstat -an
+3. If the port state is not listed as **LISTENING**, configure the proper port.
+4. Alternatively, select another port that is listed as **LISTENING**, and update the load balancer configuration accordingly.
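A quick way to narrow the `netstat` output to the probe port is shown below, assuming a Windows backend VM; the port number is an example value.

```powershell
# Show only the netstat lines that mention the probe port (3389 is an example value).
$probePort = 3389
netstat -ano | Select-String ":$probePort\s"
```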
+
+### Cause 3: Firewall, or a network security group is blocking the port on the load balancer backend pool VMs
+If the firewall on the VM is blocking the probe port, or one or more network security groups configured on the subnet or on the VM are not allowing the probe to reach the port, the VM is unable to respond to the health probe.
+
+**Validation and resolution**
+
+1. If the firewall is enabled, check whether it is configured to allow the probe port. If not, configure the firewall to allow traffic on the probe port, and test again (see the sketch after this list).
+2. From the list of network security groups, check if the incoming or outgoing traffic on the probe port has interference.
+3. Also, check whether a **Deny All** network security group rule on the NIC of the VM or the subnet has a higher priority than the default rule that allows Load Balancer probes and traffic (network security groups must allow the Load Balancer IP of 168.63.129.16).
+4. If any of these rules are blocking the probe traffic, remove and reconfigure the rules to allow the probe traffic. 
+5. Test if the VM has now started responding to the health probes.
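On a Windows backend VM, the following sketch checks whether an inbound Windows Firewall rule already covers the probe port and, if nothing does, adds one. The port number and rule name are examples only; adjust them to your probe configuration.

```powershell
$probePort = 3389  # example value - use your configured probe port

# List any enabled inbound rules whose port filter includes the probe port.
Get-NetFirewallPortFilter -Protocol TCP |
    Where-Object { $_.LocalPort -eq $probePort } |
    Get-NetFirewallRule |
    Where-Object { $_.Enabled -eq "True" -and $_.Direction -eq "Inbound" } |
    Select-Object DisplayName, Action, Profile

# If nothing allows the port, create an inbound allow rule for it.
New-NetFirewallRule -DisplayName "Allow Load Balancer probe port" `
    -Direction Inbound -Protocol TCP -LocalPort $probePort -Action Allow
```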
+
+### Cause 4: Other misconfigurations in Load Balancer
+If all the preceding causes seem to be validated and resolved correctly, and the backend VM still does not respond to the health probe, manually test for connectivity and collect network traces to investigate further.
+
+**Validation and resolution**
+
+1. Use **PsPing** from one of the other VMs within the VNet to test the probe port response (example: .\psping.exe -t 10.0.0.4:3389) and record the results.
+2. Use **TCPing** from one of the other VMs within the VNet to test the probe port response (example: .\tcping.exe 10.0.0.4 3389) and record results.
+3. If no response is received in these ping tests, then
+ - Run a simultaneous Netsh trace on the target backend pool VM and another test VM from the same VNet. Now, run a PsPing test for some time, collect some network traces, and then stop the test (a sketch of these commands follows this list).
+ - Analyze the network capture and see if there are both incoming and outgoing packets related to the ping query.
+ - If no incoming packets are observed on the backend pool VM, there is potentially a network security group or UDR misconfiguration blocking the traffic.
+ - If no outgoing packets are observed on the backend pool VM, the VM needs to be checked for any unrelated issues (for example, Application blocking the probe port).
+ - Verify if the probe packets are being forced to another destination (possibly via UDR settings) before reaching the load balancer. This can cause the traffic to never reach the backend VM.
+4. Change the probe type (for example, HTTP to TCP), and configure the corresponding port in network security groups ACLs and firewall to validate if the issue is with the configuration of probe response. For more information about health probe configuration, see [Endpoint Load Balancing health probe configuration](/archive/blogs/mast/endpoint-load-balancing-heath-probe-configuration-details).
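The commands below sketch one way to run that capture. The trace file path, IP address, and port are placeholders, and the capture commands need an elevated prompt on the VMs involved.

```powershell
# On both the target backend pool VM and the test VM: start a packet capture.
netsh trace start capture=yes tracefile=C:\temp\probe-test.etl maxsize=512

# On the test VM: probe the backend VM's probe port for a while (values are examples).
.\psping.exe -t 10.0.0.4:3389

# On both VMs: stop the capture, then analyze the resulting .etl files
# (for example, by converting them to pcapng and opening them in Wireshark).
netsh trace stop
```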
+
+## Next steps
+
+If the preceding steps do not resolve the issue, open a [support ticket](https://azure.microsoft.com/support/options/).
\ No newline at end of file
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-troubleshoot.md
@@ -1,6 +1,6 @@
---
-title: Troubleshoot Azure Load Balancer
-description: Learn how to troubleshoot known issues with Azure Load Balancer.
+title: Troubleshoot common issues with Azure Load Balancer
+description: Learn how to troubleshoot common issues with Azure Load Balancer.
services: load-balancer documentationcenter: na author: asudbring
@@ -19,155 +19,49 @@ ms.author: allensu
This page provides troubleshooting information for Basic and Standard common Azure Load Balancer questions. For more information about Standard Load Balancer, see [Standard Load Balancer overview](load-balancer-standard-diagnostics.md).
-When the Load Balancer connectivity is unavailable, the most common symptoms are as follows:
+When the Load Balancer connectivity is unavailable, the most common symptoms are as follows:
-- VMs behind the Load Balancer are not responding to health probes -- VMs behind the Load Balancer are not responding to the traffic on the configured port
+- VMs behind the Load Balancer aren't responding to health probes
+- VMs behind the Load Balancer aren't responding to the traffic on the configured port
-When the external clients to the backend VMs go through the load balancer, the IP address of the clients will be used for the communication. Make sure the IP address of the clients are added into the NSG allow list.
+When external clients reach the backend VMs through the load balancer, the IP addresses of the clients are used for the communication. Make sure the IP addresses of the clients are added to the NSG allow list.
-## Symptom: No outbound connectivity from Standard internal Load Balancers (ILB)
+## No outbound connectivity from Standard internal Load Balancers (ILB)
**Validation and resolution**
-Standard ILBs are **secure by default**. Basic ILBs allowed connecting to the internet via a *hidden* Public IP address. This is not recommened for production workloads as the IP address is neither static nor locked down via NSGs that you own. If you recently moved from a Basic ILB to a Standard ILB, you should create a Public IP explicitly via [Outbound only](egress-only.md) configuration which locks down the IP via NSGs. You can also use a [NAT Gateway](../virtual-network/nat-overview.md) on your subnet.
+Standard ILBs are **secure by default**. Basic ILBs allowed connecting to the internet via a *hidden* Public IP address. This isn't recommended for production workloads as the IP address is neither static nor locked down via NSGs that you own. If you recently moved from a Basic ILB to a Standard ILB, you should create a Public IP explicitly via [Outbound only](egress-only.md) configuration, which locks down the IP via NSGs. You can also use a [NAT Gateway](../virtual-network/nat-overview.md) on your subnet.
-## Symptom: VMs behind the Load Balancer are not responding to health probes
-For the backend servers to participate in the load balancer set, they must pass the probe check. For more information about health probes, see [Understanding Load Balancer Probes](load-balancer-custom-probe-overview.md). 
+## Can't change the backend port for an existing LB rule of a load balancer that has a virtual machine scale set deployed in the backend pool
-The Load Balancer backend pool VMs may not be responding to the probes due to any of the following reasons:
-- Load Balancer backend pool VM is unhealthy -- Load Balancer backend pool VM is not listening on the probe port -- Firewall, or a network security group is blocking the port on the Load Balancer backend pool VMs -- Other misconfigurations in Load Balancer-
-### Cause 1: Load Balancer backend pool VM is unhealthy
-
-**Validation and resolution**
-
-To resolve this issue, log in to the participating VMs, and check if the VM state is healthy, and can respond to **PsPing** or **TCPing** from another VM in the pool. If the VM is unhealthy, or is unable to respond to the probe, you must rectify the issue and get the VM back to a healthy state before it can participate in load balancing.
-
-### Cause 2: Load Balancer backend pool VM is not listening on the probe port
-If the VM is healthy, but is not responding to the probe, then one possible reason could be that the probe port is not open on the participating VM, or the VM is not listening on that port.
-
-**Validation and resolution**
-
-1. Log in to the backend VM.
-2. Open a command prompt and run the following command to validate there is an application listening on the probe port: 
- netstat -an
-3. If the port state is not listed as **LISTENING**, configure the proper port.
-4. Alternatively, select another port, that is listed as **LISTENING**, and update load balancer configuration accordingly. 
-
-### Cause 3: Firewall, or a network security group is blocking the port on the load balancer backend pool VMs 
-If the firewall on the VM is blocking the probe port, or one or more network security groups configured on the subnet or on the VM, is not allowing the probe to reach the port, the VM is unable to respond to the health probe.
-
-**Validation and resolution**
-
-* If the firewall is enabled, check if it is configured to allow the probe port. If not, configure the firewall to allow traffic on the probe port, and test again.
-* From the list of network security groups, check if the incoming or outgoing traffic on the probe port has interference.
-* Also, check if a **Deny All** network security groups rule on the NIC of the VM or the subnet that has a higher priority than the default rule that allows LB probes & traffic (network security groups must allow Load Balancer IP of 168.63.129.16).
-* If any of these rules are blocking the probe traffic, remove and reconfigure the rules to allow the probe traffic. 
-* Test if the VM has now started responding to the health probes.
-
-### Cause 4: Other misconfigurations in Load Balancer
-If all the preceding causes seem to be validated and resolved correctly, and the backend VM still does not respond to the health probe, then manually test for connectivity, and collect some traces to understand the connectivity.
-
-**Validation and resolution**
-
-* Use **Psping** from one of the other VMs within the VNet to test the probe port response (example: .\psping.exe -t 10.0.0.4:3389) and record results.
-* Use **TCPing** from one of the other VMs within the VNet to test the probe port response (example: .\tcping.exe 10.0.0.4 3389) and record results.
-* If no response is received in these ping tests, then
- - Run a simultaneous Netsh trace on the target backend pool VM and another test VM from the same VNet. Now, run a PsPing test for some time, collect some network traces, and then stop the test.
- - Analyze the network capture and see if there are both incoming and outgoing packets related to the ping query.
- - If no incoming packets are observed on the backend pool VM, there is potentially a network security groups or UDR mis-configuration blocking the traffic.
- - If no outgoing packets are observed on the backend pool VM, the VM needs to be checked for any unrelated issues (for example, Application blocking the probe port).
- - Verify if the probe packets are being forced to another destination (possibly via UDR settings) before reaching the load balancer. This can cause the traffic to never reach the backend VM.
-* Change the probe type (for example, HTTP to TCP), and configure the corresponding port in network security groups ACLs and firewall to validate if the issue is with the configuration of probe response. For more information about health probe configuration, see [Endpoint Load Balancing health probe configuration](/archive/blogs/mast/endpoint-load-balancing-heath-probe-configuration-details).
-
-## Symptom: VMs behind Load Balancer are not responding to traffic on the configured data port
-
-If a backend pool VM is listed as healthy and responds to the health probes, but is still not participating in the Load Balancing, or is not responding to the data traffic, it may be due to any of the following reasons:
-* Load Balancer Backend pool VM is not listening on the data port
-* Network security group is blocking the port on the Load Balancer backend pool VM 
-* Accessing the Load Balancer from the same VM and NIC
-* Accessing the Internet Load Balancer frontend from the participating Load Balancer backend pool VM
-
-### Cause 1: Load Balancer backend pool VM is not listening on the data port
-If a VM does not respond to the data traffic, it may be because either the target port is not open on the participating VM, or, the VM is not listening on that port.
-
-**Validation and resolution**
-
-1. Log in to the backend VM.
-2. Open a command prompt and run the following command to validate there is an application listening on the data port: 
- netstat -an
-3. If the port is not listed with State "LISTENING", configure the proper listener port
-4. If the port is marked as Listening, then check the target application on that port for any possible issues.
-
-### Cause 2: Network security group is blocking the port on the Load Balancer backend pool VM 
-
-If one or more network security groups configured on the subnet or on the VM, is blocking the source IP or port, then the VM is unable to respond.
-
-For the public load balancer, the IP address of the Internet clients will be used for communication between the clients and the load balancer backend VMs. Make sure the IP address of the clients are allowed in the backend VM's network security group.
-
-1. List the network security groups configured on the backend VM. For more information, see [Manage network security groups](../virtual-network/manage-network-security-group.md)
-1. From the list of network security groups, check if:
- - the incoming or outgoing traffic on the data port has interference.
- - a **Deny All** network security group rule on the NIC of the VM or the subnet that has a higher priority that the default rule that allows Load Balancer probes and traffic (network security groups must allow Load Balancer IP of 168.63.129.16, that is probe port)
-1. If any of the rules are blocking the traffic, remove and reconfigure those rules to allow the data traffic. 
-1. Test if the VM has now started to respond to the health probes.
-
-### Cause 3: Accessing the Load Balancer from the same VM and Network interface
-
-If your application hosted in the backend VM of a Load Balancer is trying to access another application hosted in the same backend VM over the same Network Interface, it is an unsupported scenario and will fail.
+### Cause: The backend port can't be modified for a load balancing rule that's used by a health probe for a load balancer referenced by a virtual machine scale set
**Resolution**
-You can resolve this issue via one of the following methods:
-* Configure separate backend pool VMs per application.
-* Configure the application in dual NIC VMs so each application was using its own Network interface and IP address.
+To change the port, remove the health probe by updating the virtual machine scale set, update the port, and then configure the health probe again.
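As a rough outline only, the Azure PowerShell sketch below shows the first of those steps, clearing the health probe reference from the scale set's network profile. The resource names are placeholders; verify the exact sequence against your own deployment before running it.

```powershell
# Placeholder names - replace with your own values. This is only an outline of the
# sequence described above, not a tested end-to-end script.
$rgName   = "myResourceGroup"
$vmssName = "myScaleSet"

# 1. Clear the health probe reference from the scale set's network profile and apply it.
$vmss = Get-AzVmss -ResourceGroupName $rgName -VMScaleSetName $vmssName
$vmss.VirtualMachineProfile.NetworkProfile.HealthProbe = $null
Update-AzVmss -ResourceGroupName $rgName -VMScaleSetName $vmssName -VirtualMachineScaleSet $vmss

# 2. Update the backend port on the load-balancing rule (for example, with
#    Set-AzLoadBalancerRuleConfig followed by Set-AzLoadBalancer, or in the portal).

# 3. Re-add the health probe reference to the scale set and run Update-AzVmss again.
```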
-### Cause 4: Accessing the internal Load Balancer frontend from the participating Load Balancer backend pool VM
+## A small amount of traffic is still going through the load balancer after removing VMs from the backend pool
-If an internal Load Balancer is configured inside a VNet, and one of the participant backend VMs is trying to access the internal Load Balancer frontend, failures can occur when the flow is mapped to the originating VM. This scenario is not supported.
+### Cause: VMs removed from the backend pool should no longer receive traffic. The small amount of network traffic could be related to storage, DNS, and other functions within Azure.
-**Resolution**
-There are several ways to unblock this scenario, including using a proxy. Evaluate Application Gateway or other 3rd party proxies (for example, nginx or haproxy). For more information about Application Gateway, see [Overview of Application Gateway](../application-gateway/overview.md)
+To verify, you can conduct a network trace. The FQDNs used for your blob storage accounts are listed within the properties of each storage account. From a virtual machine within your Azure subscription, you can perform an nslookup to determine the Azure IP assigned to that storage account.
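For example, from a VM in your subscription you could resolve the blob endpoint like this (the storage account name below is a placeholder):

```powershell
# Resolve the storage account's blob endpoint to see which Azure IP it maps to.
Resolve-DnsName -Name "mystorageaccount.blob.core.windows.net" -Type A

# Classic nslookup works as well.
nslookup mystorageaccount.blob.core.windows.net
```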
-**Details**
-Internal Load Balancers don't translate outbound originated connections to the front end of an internal Load Balancer because both are in private IP address space. Public Load Balancers provide [outbound connections](load-balancer-outbound-connections.md) from private IP addresses inside the virtual network to public IP addresses. For internal Load Balancers, this approach avoids potential SNAT port exhaustion inside a unique internal IP address space, where translation isn't required.
-
-A side effect is that if an outbound flow from a VM in the back-end pool attempts a flow to front end of the internal Load Balancer in its pool _and_ is mapped back to itself, the two legs of the flow don't match. Because they don't match, the flow fails. The flow succeeds if the flow didn't map back to the same VM in the back-end pool that created the flow to the front end.
-
-When the flow maps back to itself, the outbound flow appears to originate from the VM to the front end and the corresponding inbound flow appears to originate from the VM to itself. From the guest operating system's point of view, the inbound and outbound parts of the same flow don't match inside the virtual machine. The TCP stack won't recognize these halves of the same flow as being part of the same flow. The source and destination don't match. When the flow maps to any other VM in the back-end pool, the halves of the flow do match and the VM can respond to the flow.
-
-The symptom for this scenario is intermittent connection timeouts when the flow returns to the same backend that originated the flow. Common workarounds include insertion of a proxy layer behind the internal Load Balancer and using Direct Server Return (DSR) style rules. For more information, see [Multiple Frontends for Azure Load Balancer](load-balancer-multivip-overview.md).
-
-You can combine an internal Load Balancer with any third-party proxy or use internal [Application Gateway](../application-gateway/overview.md) for proxy scenarios with HTTP/HTTPS. While you could use a public Load Balancer to mitigate this issue, the resulting scenario is prone to [SNAT exhaustion](load-balancer-outbound-connections.md). Avoid this second approach unless carefully managed.
+## Additional network captures
-## Symptom: Cannot change backend port for existing LB rule of a load balancer which has VM Scale Set deployed in the backend pool.
-### Cause : The backend port cannot be modified for a load balancing rule that's used by a health probe for load balancer referenced by VM Scale Set.
-**Resolution**
-In order to change the port, you can remove the health probe by updating the VM Scale Set, update the port and then configure the health probe again.
+If you decide to open a support case, collect the following information for a quicker resolution. Choose a single backend VM to perform the following tests:
-## Symptom: Small traffic is still going through load balancer after removing VMs from backend pool of the load balancer.
-### Cause : VMs removed from backend pool should no longer receive traffic. The small amount of network traffic could be related to storage, DNS, and other functions within Azure.
-To verify, you can conduct a network trace. The FQDN used for your blob storage accounts are listed within the properties of each storage account. From a virtual machine within your Azure subscription, you can perform an nslookup to determine the Azure IP assigned to that storage account.
+- Use **PsPing** from one of the backend VMs within the VNet to test the probe port response (example: psping 10.0.0.4:3389) and record the results.
+- If no response is received in these ping tests, run a simultaneous Netsh trace on the backend VM and the VNet test VM while you run PsPing, and then stop the Netsh trace.
-## Additional network captures
-If you decide to open a support case, collect the following information for a quicker resolution. Choose a single backend VM to perform the following tests:
-- Use Psping from one of the backend VMs within the VNet to test the probe port response (example: psping 10.0.0.4:3389) and record results. -- If no response is received in these ping tests, run a simultaneous Netsh trace on the backend VM and the VNet test VM while you run PsPing then stop the Netsh trace.
-
-## Symptom: Load Balancer in failed state
+## Load Balancer in failed state
**Resolution** -- Once you identify the resource that is in a failed state, go to [Azure Resource Explorer](https://resources.azure.com/) and identify the resource in this state. -- Update the toggle on the right hand top corner to Read/Write.
+- Once you identify the resource that is in a failed state, go to [Azure Resource Explorer](https://resources.azure.com/) and identify the resource in this state.
+- Update the toggle on the right-hand top corner to Read/Write.
- Click on Edit for the resource in failed state. - Click on PUT followed by GET to ensure the provisioning state was updated to Succeeded.-- You can then proceed continue with other actions as the resource is out of failed state.-
+- You can then proceed with other actions as the resource is out of failed state.
## Next steps
-If the preceding steps do not resolve the issue, open a [support ticket](https://azure.microsoft.com/support/options/).
\ No newline at end of file
+If the preceding steps don't resolve the issue, open a [support ticket](https://azure.microsoft.com/support/options/).
media-services https://docs.microsoft.com/en-us/azure/media-services/latest/stream-files-tutorial-with-rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/stream-files-tutorial-with-rest.md
@@ -166,11 +166,18 @@ The output [Asset](/rest/api/media/assets) stores the result of your encoding jo
{ "properties": { "description": "My Asset",
- "alternateId" : "some GUID"
+ "alternateId" : "some GUID",
+ "storageAccountName": "<replace from environment file>",
+ "container": "<supply any valid container name of your choosing>"
} } ```
+> [!NOTE]
+> Be sure to replace the storage account and container names either with those from the environment file or supply your own.
+>
+> As you complete the steps described in the rest of this article, make sure that you supply valid parameters in request bodies.
+ ### Create a transform When encoding or processing content in Media Services, it is a common pattern to set up the encoding settings as a recipe. You would then submit a **Job** to apply that recipe to a video. By submitting new jobs for each new video, you are applying that recipe to all the videos in your library. A recipe in Media Services is called a **Transform**. For more information, see [Transforms and Jobs](./transforms-jobs-concept.md). The sample described in this tutorial defines a recipe that encodes the video in order to stream it to a variety of iOS and Android devices.
@@ -351,8 +358,9 @@ In this section, let's build an HLS streaming URL. URLs consist of the following
To get the hostname, you can use the following GET operation: ```
- https://management.azure.com/subscriptions/00000000-0000-0000-0000-0000000000000/resourceGroups/amsResourceGroup/providers/Microsoft.Media/mediaservices/amsaccount/streamingEndpoints/default?api-version={{api-version}}
+ https://management.azure.com/subscriptions/00000000-0000-0000-0000-0000000000000/resourceGroups/:resourceGroupName/providers/Microsoft.Media/mediaservices/:accountName/streamingEndpoints/default?api-version={{api-version}}
```
+ and make sure that you set the `resourceGroupName` and `accountName` parameters to match the environment file.
3. A path that you got in the previous (List paths) section.
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/get-started-detect-motion-emit-events-quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/get-started-detect-motion-emit-events-quickstart.md
@@ -38,7 +38,7 @@ This tutorial requires the following Azure resources:
For this quickstart, we recommend that you use the [Live Video Analytics resources setup script](https://github.com/Azure/live-video-analytics/tree/master/edge/setup) to deploy the required resources in your Azure subscription. To do so, follow these steps:
-1. Go to [Azure Cloud Shell](https://shell.azure.com).
+1. Go to the [Azure portal](https://portal.azure.com) and select the Cloud Shell icon.
1. If you're using Cloud Shell for the first time, you'll be prompted to select a subscription to create a storage account and a Microsoft Azure Files share. Select **Create storage** to create a storage account for your Cloud Shell session information. This storage account is separate from the account that the script will create to use with your Azure Media Services account. 1. In the drop-down menu on the left side of the Cloud Shell window, select **Bash** as your environment.
media-services https://docs.microsoft.com/en-us/azure/media-services/video-indexer/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/faq.md
@@ -193,7 +193,7 @@ Access tokens expire every hour, so you need to generate a new access token ever
### What are the login options to Video Indexer Developer portal?
-You can login using Azure AD, Microsoft account, Google account or Facebook account.
+See a release note regarding [login information](release-notes.md#october-2020).
Once you register your email account using an identity provider, you cannot use this email account with another identity provider.
media-services https://docs.microsoft.com/en-us/azure/media-services/video-indexer/release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/release-notes.md
@@ -11,7 +11,7 @@ ms.service: media-services
ms.subservice: video-indexer ms.workload: na ms.topic: article
-ms.date: 10/30/2020
+ms.date: 01/06/2021
ms.author: juliako ---
@@ -40,12 +40,15 @@ Video Indexer supports detection, grouping, and recognition of characters in ani
### Planned Video Indexer website authentication changes
-Starting January 1st 2021, you no longer will be able to sign up and sign in to the [Video Indexer](https://www.videoindexer.ai/) website (trial offering) using Facebook or LinkedIn.
+Starting March 1st, 2021, you will no longer be able to sign up and sign in to the [Video Indexer](https://www.videoindexer.ai/) website using Facebook or LinkedIn.
You will be able to sign up and sign in using one of these providers: Azure AD, Microsoft, and Google. > [!NOTE]
-> You are advised to export your content before January 1st of 2021, since accounts connected to LinkedIn and Facebook will be deleted and the content will no longer be accessible.
+> The Video Indexer accounts connected to LinkedIn and Facebook will not be accessible after March 1st 2021.
+>
+> You should [invite](invite-users.md) an Azure AD, Microsoft, or Google email you own to the Video Indexer account so you will still have access.<br/>
+> Alternatively, you can create a paid account and migrate the data.
## August 2020
media-services https://docs.microsoft.com/en-us/azure/media-services/video-indexer/video-indexer-use-apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/video-indexer-use-apis.md
@@ -9,7 +9,7 @@ manager: femila
ms.service: media-services ms.subservice: video-indexer ms.topic: article
-ms.date: 11/10/2020
+ms.date: 01/07/2021
ms.author: juliako ms.custom: devx-track-csharp ---
@@ -25,8 +25,10 @@ This article shows how the developers can take advantage of the [Video Indexer A
## Subscribe to the API 1. Sign in to [Video Indexer Developer Portal](https://api-portal.videoindexer.ai/).+
+ Review a release note regarding [login information](release-notes.md#october-2020).
- ![Sign in to Video Indexer Developer Portal](./media/video-indexer-use-apis/video-indexer-api01.png)
+ ![Sign in to Video Indexer Developer Portal](./media/video-indexer-use-apis/sign-in.png)
> [!Important] > * You must use the same provider you used when you signed up for Video Indexer.
@@ -36,14 +38,14 @@ This article shows how the developers can take advantage of the [Video Indexer A
Select the [Products](https://api-portal.videoindexer.ai/products) tab. Then, select Authorization and subscribe.
- ![Products tab in Video Indexer Developer Portal](./media/video-indexer-use-apis/video-indexer-api02.png)
+ ![Products tab in Video Indexer Developer Portal](./media/video-indexer-use-apis/authorization.png)
> [!NOTE] > New users are automatically subscribed to Authorization. After you subscribe, you can find your subscription under **Products** -> **Authorization**. In the subscription page, you will find the primary and secondary keys. The keys should be protected. The keys should only be used by your server code. They shouldn't be available on the client side (.js, .html, and so on).
- ![Subscription and keys in Video Indexer Developer Portal](./media/video-indexer-use-apis/video-indexer-api03.png)
+ ![Subscription and keys in Video Indexer Developer Portal](./media/video-indexer-use-apis/subscriptions.png)
> [!TIP] > Video Indexer user can use a single subscription key to connect to multiple Video Indexer accounts. You can then link these Video Indexer accounts to different Media Services accounts.
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-authentication-and-authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-authentication-and-authorization.md
@@ -20,7 +20,7 @@ For more information about authenticating with Azure AD, see the following artic
> [Service Bus REST API](/rest/api/servicebus/) supports OAuth authentication with Azure AD. > [!IMPORTANT]
-> Authorizing users or applications using OAuth 2.0 token returned by Azure AD provides superior security and ease of use over shared access signatures (SAS). With Azure AD, there is no need to store the tokens in your code and risk potential security vulnerabilities. We recommend that you use using Azure AD with your Azure Service Bus applications when possible.
+> Authorizing users or applications using OAuth 2.0 token returned by Azure AD provides superior security and ease of use over shared access signatures (SAS). With Azure AD, there is no need to store the tokens in your code and risk potential security vulnerabilities. We recommend that you use Azure AD with your Azure Service Bus applications when possible.
## Shared access signature [SAS authentication](service-bus-sas.md) enables you to grant a user access to Service Bus resources, with specific rights. SAS authentication in Service Bus involves the configuration of a cryptographic key with associated rights on a Service Bus resource. Clients can then gain access to that resource by presenting a SAS token, which consists of the resource URI being accessed and an expiry signed with the configured key.
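As a minimal sketch of how such a token is assembled (the namespace, entity path, policy name, and key below are placeholders), the signed string is the URL-encoded resource URI plus the expiry, hashed with HMAC-SHA256 using the policy key:

```powershell
# Placeholder values - replace with your namespace, entity, policy name, and key.
$resourceUri = "https://mynamespace.servicebus.windows.net/myqueue"
$keyName     = "RootManageSharedAccessKey"
$key         = "<shared-access-policy-key>"

# Token expiry as Unix time, one hour from now.
$expiry = [DateTimeOffset]::UtcNow.AddHours(1).ToUnixTimeSeconds()

# Sign "<url-encoded resource URI>`n<expiry>" with HMAC-SHA256 using the policy key.
$stringToSign = [Uri]::EscapeDataString($resourceUri) + "`n" + $expiry
$hmac         = [System.Security.Cryptography.HMACSHA256]::new([Text.Encoding]::UTF8.GetBytes($key))
$signature    = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))

# The resulting token is presented in the Authorization header of requests to the resource.
"SharedAccessSignature sr=$([Uri]::EscapeDataString($resourceUri))&sig=$([Uri]::EscapeDataString($signature))&se=$expiry&skn=$keyName"
```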
@@ -51,4 +51,4 @@ For more information about authenticating with Azure AD, see the following artic
For more information about authenticating with SAS, see the following articles: -- [Authentication with SAS](service-bus-sas.md)\ No newline at end of file
+- [Authentication with SAS](service-bus-sas.md)
storage https://docs.microsoft.com/en-us/azure/storage/blobs/recursive-access-control-lists https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/recursive-access-control-lists.md
@@ -180,8 +180,7 @@ The following table shows each of the supported roles and their ACL setting capa
With this approach, the system doesn't check Azure RBAC or ACL permissions. ```powershell
-$storageAccount = Get-AzStorageAccount -ResourceGroupName "<resource-group-name>" -AccountName "<storage-account-name>"
-$ctx = $storageAccount.Context
+$ctx = New-AzStorageContext -StorageAccountName "<storage-account-name>" -StorageAccountKey "<storage-account-key>"
``` ### [Azure CLI](#tab/azure-cli)
@@ -1009,4 +1008,4 @@ The maximum number of ACLs that you can apply to a directory or file is 32 acces
## See also - [Access control in Azure Data Lake Storage Gen2](./data-lake-storage-access-control.md)-- [Known issues](data-lake-storage-known-issues.md)\ No newline at end of file
+- [Known issues](data-lake-storage-known-issues.md)
storage https://docs.microsoft.com/en-us/azure/storage/tables/table-storage-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/tables/table-storage-overview.md
@@ -7,14 +7,14 @@ author: tamram
ms.author: tamram ms.devlang: dotnet ms.topic: overview
-ms.date: 04/23/2018
+ms.date: 01/07/2021
ms.subservice: tables --- # What is Azure Table storage ? [!INCLUDE [storage-table-cosmos-db-tip-include](../../../includes/storage-table-cosmos-db-tip-include.md)]
-Azure Table storage is a service that stores structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design. Because Table storage is schemaless, it's easy to adapt your data as the needs of your application evolve. Access to Table storage data is fast and cost-effective for many types of applications, and is typically lower in cost than traditional SQL for similar volumes of data.
+Azure Table storage is a service that stores non-relational structured data (also known as structured NoSQL data) in the cloud, providing a key/attribute store with a schemaless design. Because Table storage is schemaless, it's easy to adapt your data as the needs of your application evolve. Access to Table storage data is fast and cost-effective for many types of applications, and is typically lower in cost than traditional SQL for similar volumes of data.
You can use Table storage to store flexible datasets like user data for web applications, address books, device information, or other types of metadata your service requires. You can store any number of entities in a table, and a storage account may contain any number of tables, up to the capacity limit of the storage account.
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/load-data-wideworldimportersdw https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/load-data-wideworldimportersdw.md
@@ -7,7 +7,7 @@ manager: craigg
ms.service: synapse-analytics ms.topic: conceptual ms.subservice: sql-dw
-ms.date: 07/17/2019
+ms.date: 11/23/2020
ms.author: kevin ms.reviewer: igorstan ms.custom: seo-lt-2019, synapse-analytics
@@ -19,9 +19,6 @@ This tutorial uses PolyBase to load the WideWorldImportersDW data warehouse from
> [!div class="checklist"] >
-> * Create a data warehouse using SQL pool in the Azure portal
-> * Set up a server-level firewall rule in the Azure portal
-> * Connect to the SQL pool with SSMS
> * Create a user designated for loading data > * Create external tables that use Azure blob as the data source > * Use the CTAS T-SQL statement to load data into your data warehouse
@@ -35,110 +32,7 @@ If you don't have an Azure subscription, [create a free account](https://azure.m
Before you begin this tutorial, download and install the newest version of [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest) (SSMS).
-## Sign in to the Azure portal
-
-Sign in to the [Azure portal](https://portal.azure.com/).
-
-## Create a blank data warehouse in SQL pool
-
-A SQL pool is created with a defined set of [compute resources](memory-concurrency-limits.md). The SQL pool is created within an [Azure resource group](../../azure-resource-manager/management/overview.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) and in a [logical SQL server](../../azure-sql/database/logical-servers.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json).
-
-Follow these steps to create a blank SQL pool.
-
-1. Select **Create a resource** in the the Azure portal.
-
-1. Select **Databases** from the **New** page, and select **Azure Synapse Analytics** under **Featured** on the **New** page.
-
- ![create SQL pool](./media/load-data-wideworldimportersdw/create-empty-data-warehouse.png)
-
-1. Fill out the **Project Details** section with the following information:
-
- | Setting | Example | Description |
- | ------- | --------------- | ----------- |
- | **Subscription** | Your subscription | For details about your subscriptions, see [Subscriptions](https://account.windowsazure.com/Subscriptions). |
- | **Resource group** | myResourceGroup | For valid resource group names, see [Naming rules and restrictions](/azure/architecture/best-practices/resource-naming?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json). |
-
-1. Under **SQL pool details**, provide a name for your SQL pool. Next, either select an existing server from the drop down, or select **Create new** under the **Server** settings to create a new server. Fill out the form with the following information:
-
- | Setting | Suggested value | Description |
- | ------- | --------------- | ----------- |
- |**SQL pool name**|SampleDW| For valid database names, see [Database Identifiers](/sql/relational-databases/databases/database-identifiers?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest). |
- | **Server name** | Any globally unique name | For valid server names, see [Naming rules and restrictions](/azure/architecture/best-practices/resource-naming?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json). |
- | **Server admin login** | Any valid name | For valid login names, see [Database Identifiers](/sql/relational-databases/databases/database-identifiers?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest).|
- | **Password** | Any valid password | Your password must have at least eight characters and must contain characters from three of the following categories: upper case characters, lower case characters, numbers, and non-alphanumeric characters. |
- | **Location** | Any valid location | For information about regions, see [Azure Regions](https://azure.microsoft.com/regions/). |
-
- ![create server](./media/load-data-wideworldimportersdw/create-database-server.png)
-
-1. **Select performance level**. The slider by default is set to **DW1000c**. Move the slider up and down to choose the desired performance scale.
-
- ![create server 2](./media/load-data-wideworldimportersdw/create-data-warehouse.png)
-
-1. On the **Additional Settings** page, set the **Use existing data** to None, and leave the **Collation** at the default of *SQL_Latin1_General_CP1_CI_AS*.
-
-1. Select **Review + create** to review your settings, and then select **Create** to create your data warehouse. You can monitor your progress by opening the **deployment in progress** page from the **Notifications** menu.
-
- ![Screenshot shows Notifications with Deployment in progress.](./media/load-data-wideworldimportersdw/notification.png)
-
-## Create a server-level firewall rule
-
-The Azure Synapse Analytics service creates a firewall at the server-level that prevents external applications and tools from connecting to the server or any databases on the server. To enable connectivity, you can add firewall rules that enable connectivity for specific IP addresses. Follow these steps to create a [server-level firewall rule](../../azure-sql/database/firewall-configure.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) for your client's IP address.
-
-> [!NOTE]
-> The Azure Synapse Analytics SQL pool communicates over port 1433. If you are trying to connect from within a corporate network, outbound traffic over port 1433 might not be allowed by your network's firewall. If so, you cannot connect to your server unless your IT department opens port 1433.
->
-
-1. After the deployment completes, search for your pool name in the search box in the navigation menu, and select the SQL pool resource. Select the server name.
-
- ![go to your resource](./media/load-data-wideworldimportersdw/search-for-sql-pool.png)
-
-1. Select the server name.
- ![server name](././media/load-data-wideworldimportersdw/find-server-name.png)
-
-1. Select **Show firewall settings**. The **Firewall settings** page for the server opens.
-
- ![server settings](./media/load-data-wideworldimportersdw/server-settings.png)
-
-1. On the **Firewalls and virtual networks** page, select **Add client IP** to add your current IP address to a new firewall rule. A firewall rule can open port 1433 for a single IP address or a range of IP addresses.
-
- ![server firewall rule](./media/load-data-wideworldimportersdw/server-firewall-rule.png)
-
-1. Select **Save**. A server-level firewall rule is created for your current IP address opening port 1433 on the server.
-
-You can now connect to the server using your client IP address. The connection works from SQL Server Management Studio or another tool of your choice. When you connect, use the serveradmin account you created previously.
-
-> [!IMPORTANT]
-> By default, access through the SQL Database firewall is enabled for all Azure services. Click **OFF** on this page and then click **Save** to disable the firewall for all Azure services.
-
-## Get the fully qualified server name
-
-The fully qualified server name is what is used to connect to the server. Go to your SQL pool resource in the Azure portal and view the fully qualified name under **Server name**.
-
-![server name](././media/load-data-wideworldimportersdw/find-server-name.png)
-
-## Connect to the server as server admin
-
-This section uses [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest) (SSMS) to establish a connection to your server.
-
-1. Open SQL Server Management Studio.
-
-2. In the **Connect to Server** dialog box, enter the following information:
-
- | Setting | Suggested value | Description |
- | ------------ | --------------- | ----------- |
- | Server type | Database engine | This value is required |
- | Server name | The fully qualified server name | For example, **sqlpoolservername.database.windows.net** is a fully qualified server name. |
- | Authentication | SQL Server Authentication | SQL Authentication is the only authentication type that is configured in this tutorial. |
- | Login | The server admin account | This is the account that you specified when you created the server. |
- | Password | The password for your server admin account | This is the password that you specified when you created the server. |
-
- ![connect to server](./media/load-data-wideworldimportersdw/connect-to-server.png)
-
-3. Click **Connect**. The Object Explorer window opens in SSMS.
-
-4. In Object Explorer, expand **Databases**. Then expand **System databases** and **master** to view the objects in the master database. Expand **SampleDW** to view the objects in your new database.
-
- ![database objects](./media/load-data-wideworldimportersdw/connected.png)
+This tutorial assumes you have already created a dedicated SQL pool by completing the steps in the following [tutorial](https://docs.microsoft.com/azure/synapse-analytics/sql-data-warehouse/create-data-warehouse-portal#connect-to-the-server-as-server-admin).
## Create a user for loading data
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/migration-classic-resource-manager-ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/migration-classic-resource-manager-ps.md
@@ -261,7 +261,7 @@ If the prepared configuration looks good, you can move forward and commit the re
After you're done migrating the virtual machines, perform the following prerequisite checks before you migrate the storage accounts. > [!NOTE]
-> If your storage account has no associated disks or VM data, you can skip directly to the "Validate storage accounts and start migration" section.
+> If your storage account has no associated disks or VM data, you can skip directly to the "Validate storage accounts and start migration" section. Also note that deleting the classic disks, VM images or OS images does not remove the source VHD files in the storage account. However, it does break the lease on those VHD files so that they can be reused to create ARM disks or images after migration.
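If you want to confirm which classic disks still reference a given storage account before migration, the following sketch uses the classic (Service Management) Azure PowerShell module; the storage account name is a placeholder.

```powershell
# Requires the classic Azure Service Management PowerShell module
# (Add-AzureAccount / Select-AzureSubscription).
$storageAccount = "mystorageaccount"  # placeholder - use your storage account name

# List classic disks whose VHDs are stored in that storage account.
Get-AzureDisk |
    Where-Object { $_.MediaLink.Host -like "$storageAccount.*" } |
    Select-Object DiskName, AttachedTo, MediaLink
```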
* Prerequisite checks if you migrated any VMs or your storage account has disk resources: * Migrate virtual machines whose disks are stored in the storage account.
virtual-wan https://docs.microsoft.com/en-us/azure/virtual-wan/about-nva-hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/about-nva-hub.md
@@ -13,7 +13,7 @@ Customer intent: As someone with a networking background, I want to learn about
Azure Virtual WAN has worked with networking partners to build automation that makes it easy to connect their Customer Premise Equipment (CPE) to an Azure VPN gateway in the virtual hub. Azure is working with select networking partners to enable customers to deploy a third-party Network Virtual Appliance (NVA) directly into the virtual hub. This allows customers who want to connect their branch CPE to the same brand NVA in the virtual hub so that they can take advantage of proprietary end-to-end SD-WAN capabilities.
-Barracuda Networks is the first partner to provide an NVA offering that can be deployed directly to the Virtual WAN hub with their [Barracuda CloudGen WAN](https://www.barracuda.com/products/cloudgenwan) product. Azure is working with more partner so expect to see other offerings follow.
+Barracuda Networks and Cisco Systems are the first partners to provide NVAs that can be deployed directly to the Virtual WAN hub. See [Barracuda CloudGen WAN](https://www.barracuda.com/products/cloudgenwan) and [Cisco Cloud OnRamp for Multi-Cloud](https://www.cisco.com/c/en/us/td/docs/routers/sdwan/configuration/cloudonramp/ios-xe-17/cloud-onramp-book-xe/cloud-onramp-multi-cloud.html#Cisco_Concept.dita_c61e0e7a-fff8-4080-afee-47b81e8df701) for their respective product documentation. Azure is working with more partners, so expect to see other offerings follow.
> [!NOTE] > Only NVA offers that are available to be deployed into the Virtual WAN hub can be deployed into the Virtual WAN hub. They cannot be deployed into an arbitrary virtual network in Azure.
@@ -87,11 +87,11 @@ Unfortunately, we do not have capacity to on-board any new partner offers at thi
### Can I deploy any NVA from Azure Marketplace into the Virtual WAN hub?
-No. At this time, only [Barracuda CloudGen WAN](https://aka.ms/BarracudaMarketPlaceOffer) is available to be deployed into the Virtual WAN hub.
+At this time, only [Barracuda CloudGen WAN](https://aka.ms/BarracudaMarketPlaceOffer) and [Cisco Cloud vWAN Application](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cisco.cisco_cloud_vwan_app?tab=Overview) are available to be deployed into the Virtual WAN hub.
### What is the cost of the NVA?
-You must purchase a license for your Barracuda CloudGen WAN NVA from Barracuda. For more information on licensing, see [Barracuda's CloudGen WAN page](https://www.barracuda.com/products/cloudgenwan). In addition, you will also incur charges from Microsoft for the NVA Infrastructure Units you consume, and any other resources you use. For more information, see [Pricing Concepts](pricing-concepts.md).
+You must purchase a license for the NVA from the NVA vendor. For Barracuda CloudGen WAN licensing, see [Barracuda's CloudGen WAN page](https://www.barracuda.com/products/cloudgenwan). Cisco currently offers only a BYOL (Bring Your Own License) model, which must be procured directly from Cisco. In addition, you will also incur charges from Microsoft for the NVA Infrastructure Units you consume and any other resources you use. For more information, see [Pricing Concepts](pricing-concepts.md).
### Can I deploy an NVA to a Basic hub?
@@ -103,7 +103,7 @@ Yes. Barracuda CloudGen WAN can be deployed into a hub with Azure Firewall.
### Can I connect any CPE device in my branch office to Barracuda CloudGen WAN NVA in the hub?
-No. Barracuda CloudGen WAN is only compatible with Barracuda CPE devices. To learn more about CloudGen WAN requirements, see [Barracuda's CloudGen WAN page](https://www.barracuda.com/products/cloudgenwan).
+No. Barracuda CloudGen WAN is only compatible with Barracuda CPE devices. To learn more about CloudGen WAN requirements, see [Barracuda's CloudGen WAN page](https://www.barracuda.com/products/cloudgenwan). For Cisco, there are several SD-WAN CPE devices that are compatible. See the [Cisco Cloud OnRamp for Multi-Cloud](https://www.cisco.com/c/en/us/td/docs/routers/sdwan/configuration/cloudonramp/ios-xe-17/cloud-onramp-book-xe/cloud-onramp-multi-cloud.html#Cisco_Concept.dita_c61e0e7a-fff8-4080-afee-47b81e8df701) documentation for compatible CPEs.
### What routing scenarios are supported with NVA in the hub?
@@ -111,4 +111,4 @@ All routing scenarios supported by Virtual WAN are supported with NVAs in the hu
## Next steps
-To learn more about Virtual WAN, see the [Virtual WAN Overview](virtual-wan-about.md) article.
\ No newline at end of file
+To learn more about Virtual WAN, see the [Virtual WAN Overview](virtual-wan-about.md) article.