Updates from: 08/13/2021 03:13:35
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Cookie Definitions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/cookie-definitions.md
The following table lists the cookies used in Azure AD B2C.
| Name | Domain | Expiration | Purpose |
| --- | --- | --- | --- |
| `x-ms-cpim-admin` | main.b2cadmin.ext.azure.com | End of [browser session](session-behavior.md) | Holds user membership data across tenants. The tenants a user is a member of and level of membership (Admin or User). |
| `x-ms-cpim-slice` | b2clogin.com, login.microsoftonline.com, branded domain | End of [browser session](session-behavior.md) | Used to route requests to the appropriate production instance. |
| `x-ms-cpim-trans` | b2clogin.com, login.microsoftonline.com, branded domain | End of [browser session](session-behavior.md) | Used for tracking the transactions (number of authentication requests to Azure AD B2C) and the current transaction. |
-| `x-ms-cpim-sso:{Id}` | b2clogin.com, login.microsoftonline.com, branded domain | End of [browser session](session-behavior.md) | Used for maintaining the SSO session. |
+| `x-ms-cpim-sso:{Id}` | b2clogin.com, login.microsoftonline.com, branded domain | End of [browser session](session-behavior.md) | Used for maintaining the SSO session. This cookie is set as `persistent`, when [Keep Me Signed In](session-behavior.md#enable-keep-me-signed-in-kmsi) is enabled.|
| `x-ms-cpim-cache:{id}_n` | b2clogin.com, login.microsoftonline.com, branded domain | End of [browser session](session-behavior.md), successful authentication | Used for maintaining the request state. |
| `x-ms-cpim-csrf` | b2clogin.com, login.microsoftonline.com, branded domain | End of [browser session](session-behavior.md) | Cross-Site Request Forgery token used for CSRF protection. |
| `x-ms-cpim-dc` | b2clogin.com, login.microsoftonline.com, branded domain | End of [browser session](session-behavior.md) | Used for Azure AD B2C network routing. |
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-domain.md
Previously updated : 07/20/2021 Last updated : 08/12/2021
zone_pivot_groups: b2c-policy-type
[!INCLUDE [b2c-public-preview-feature](../../includes/active-directory-b2c-public-preview.md)]
-This article describes how to enable custom domains in your redirect URLs for Azure Active Directory B2C (Azure AD B2C). Using a custom domain with your application provides a more seamless user experience. From the user's perspective, they remain in your domain during the sign-in process rather than redirecting to the Azure AD B2C default domain *<tenant-name>.b2clogin.com*.
+This article describes how to enable custom domains in your redirect URLs for Azure Active Directory B2C (Azure AD B2C). Using a custom domain with your application provides a more seamless user experience. From the user's perspective, they remain in your domain during the sign-in process rather than redirecting to the Azure AD B2C default domain *&lt;tenant-name&gt;.b2clogin.com*.
-![Custom domain user experience](./media/custom-domain/custom-domain-user-experience.png)
+![Screenshot demonstrates an Azure AD B2C custom domain user experience.](./media/custom-domain/custom-domain-user-experience.png)
## Custom domain overview
The following diagram illustrates Azure Front Door integration:
1. Azure Front Door invokes Azure AD B2C content using the Azure AD B2C `<tenant-name>.b2clogin.com` default domain. The request to the Azure AD B2C endpoint includes the original custom domain name.
1. Azure AD B2C responds to the request by displaying the relevant content and the original custom domain.
-![Custom domain networking diagram](./media/custom-domain/custom-domain-network-flow.png)
+![Diagram shows the custom domain networking flow.](./media/custom-domain/custom-domain-network-flow.png)
> [!IMPORTANT]
> The connection from the browser to Azure Front Door should always use IPv4 instead of IPv6.
When using custom domains, consider the following:
- You can set up multiple custom domains. For the maximum number of supported custom domains, see [Azure AD service limits and restrictions](../active-directory/enterprise-users/directory-service-limits-restrictions.md) for Azure AD B2C and [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-service-limits) for Azure Front Door.
- Azure Front Door is a separate Azure service, so extra charges will be incurred. For more information, see [Front Door pricing](https://azure.microsoft.com/pricing/details/frontdoor).
- To use Azure Front Door [Web Application Firewall](../web-application-firewall/afds/afds-overview.md), you need to confirm your firewall configuration and rules work correctly with your Azure AD B2C user flows.
-- After you configure custom domains, users will still be able to access the Azure AD B2C default domain name *<tenant-name>.b2clogin.com* (unless you're using a custom policy and you [block access](#block-access-to-the-default-domain-name).
+- After you configure custom domains, users will still be able to access the Azure AD B2C default domain name *&lt;tenant-name&gt;.b2clogin.com* (unless you're using a custom policy and you [block access](#block-access-to-the-default-domain-name)).
- If you have multiple applications, migrate them all to the custom domain because the browser stores the Azure AD B2C session under the domain name currently being used.

## Prerequisites
To add a frontend host, follow these steps:
1. In **Frontends/domains**, select **+** to open **Add a frontend host**.
1. For **Host name**, enter a globally unique hostname. The host name is not your custom domain. This example uses *contoso-frontend*. Select **Add**.
- ![Add a frontend host screenshot.](./media/custom-domain/add-frontend-host-azure-front-door.png)
+ ![Screenshot demonstrates how to add a frontend host.](./media/custom-domain/add-frontend-host-azure-front-door.png)
### 2.2 Add backend and backend pool
A backend refers to your [Azure AD B2C tenant name](tenant-management.md#get-you
The following screenshot demonstrates how to create a backend pool:
- ![Add a frontend backend pool screenshot.](./media/custom-domain/front-door-add-backend-pool.png)
+ ![Screenshot demonstrates how to add a frontend backend pool.](./media/custom-domain/front-door-add-backend-pool.png)
1. In the **Add a backend** blade, select the following information, and then select **Add**.
A backend refers to your [Azure AD B2C tenant name](tenant-management.md#get-you
The following screenshot demonstrates how to create a custom host backend that is associated with an Azure AD B2C tenant:
- ![Add a custom host backend screenshot.](./media/custom-domain/add-a-backend.png)
+ ![Screenshot demonstrates how to add a custom host backend.](./media/custom-domain/add-a-backend.png)
1. To complete the configuration of the backend pool, on the **Add a backend pool** blade, select **Add**.
1. After you add the **backend** to the **backend pool**, disable the **Health probes**.
- ![Add a backend pool and disable the health probes screenshot.](./media/custom-domain/add-a-backend-pool.png)
+ ![Screenshot demonstrates how to add a backend pool and disable the health probes.](./media/custom-domain/add-a-backend-pool.png)
### 2.3 Add a routing rule
Finally, add a routing rule. The routing rule maps your frontend host to the bac
1. In **Add a rule**, for **Name**, enter *LocationRule*. Accept all the default values, then select **Add** to add the routing rule.
1. Select **Review + Create**, and then **Create**.
- ![Create Azure Front Door screenshot.](./media/custom-domain/configuration-azure-front-door.png)
+ ![Screenshot demonstrates how to create Azure Front Door.](./media/custom-domain/configuration-azure-front-door.png)
## Step 3. Set up your custom domain on Azure Front Door
In this step, you add the custom domain you registered in [Step 1](#step-1-add-a
Before you can use a custom domain with your Front Door, you must first create a canonical name (CNAME) record with your domain provider to point to your Front Door's default frontend host (for example, contoso.azurefd.net).
-A CNAME record is a type of DNS record that maps a source domain name to a destination domain name. For Azure Front Door, the source domain name is your custom domain name, and the destination domain name is your Front Door default hostname you configure in [step 2.1](#21-add-frontend-host).
+A CNAME record is a type of DNS record that maps a source domain name to a destination domain name (alias). For Azure Front Door, the source domain name is your custom domain name, and the destination domain name is your Front Door default hostname you configure in [step 2.1](#21-add-frontend-host).
After Front Door verifies the CNAME record that you created, traffic addressed to the source custom domain (such as login.contoso.com) is routed to the specified destination Front Door default frontend host, such as `contoso.azurefd.net`. For more information, see [add a custom domain to your Front Door](../frontdoor/front-door-custom-domain.md).
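In BIND zone-file notation, such a record might look like the following sketch (the hostnames are the sample values used in this article, and the TTL is arbitrary):

```txt
login.contoso.com.    3600    IN    CNAME    contoso-frontend.azurefd.net.
```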
To create a CNAME record for your custom domain:
After you've registered your custom domain, you can then add it to your Front Door.
-1. On the **Front Door designer** page, select **+** to add a custom domain.
+1. On the **Front Door designer** page, under the **Frontends/domains**, select **+** to add a custom domain.
+
+ ![Screenshot demonstrates how to add a custom domain.](./media/custom-domain/azure-front-door-add-custom-domain.png)
1. For **Frontend host**, the frontend host to use as the destination domain of your CNAME record is pre-filled and is derived from your Front Door: *&lt;default hostname&gt;*.azurefd.net. It cannot be changed.
1. For **Custom hostname**, enter your custom domain, including the subdomain, to use as the source domain of your CNAME record. For example, login.contoso.com.
-1. Select **Add**.
+ ![Screenshot demonstrates how to verify a custom domain.](./media/custom-domain/azure-front-door-add-custom-domain-verification.png)
+
+ Azure verifies that the CNAME record exists for the custom domain name you entered. If the CNAME is correct, your custom domain will be validated.
+
+
+1. After the custom domain name is verified, under the **Custom domain name HTTPS**, select **Enabled**.
+
+ ![Screenshot shows how to enable HTTPS using an Azure Front Door certificate.](./media/custom-domain/azure-front-door-add-custom-domain-https-settings.png)
- Azure verifies that the CNAME record exists for the custom domain name you entered. If the CNAME is correct, your custom domain will be validated.
+1. For the **Certificate management type**, select [Front Door management](../frontdoor/front-door-custom-domain-https.md#option-1-default-use-a-certificate-managed-by-front-door), or [Use my own certificate](../frontdoor/front-door-custom-domain-https.md#option-2-use-your-own-certificate). If you choose the *Front Door managed* option, wait until the certificate is fully provisioned.
+
+1. Select **Add**.
### 3.3 Update the routing rule

1. In the **Routing rules**, select the routing rule you created in [step 2.3](#23-add-a-routing-rule).
+ ![Screenshot demonstrates how to select a routing rule.](./media/custom-domain/select-routing-rule.png)
+ 1. Under the **Frontends/domains**, select your custom domain name.
- ![Update the Azure Front Door routing rule screenshot.](./media/custom-domain/update-routing-rule.png)
+ ![Screenshot demonstrates how to update the Azure Front Door routing rule.](./media/custom-domain/update-routing-rule.png)
1. Select **Update**.
1. From the main window, select **Save**.
-
-### 3.4 Configure HTTPS on a Front Door custom domain
-
-After the custom domain name is verified, select **Custom domain name HTTPS**. Then under the **Certificate management type**, select [Front Door management](../frontdoor/front-door-custom-domain-https.md#option-1-default-use-a-certificate-managed-by-front-door), or [Use my own certificate](../frontdoor/front-door-custom-domain-https.md#option-2-use-your-own-certificate). If you choose the *Front Door managed* option, wait until the certificate is fully provisioned.
-
-The following screenshot shows how to add a custom domain and enable HTTPS using an Azure Front Door certificate.
-
-![Set up Azure Front Door custom domain](./media/custom-domain/azure-front-door-add-custom-domain.png)
- ## Step 4. Configure CORS
Configure Azure Blob storage for Cross-Origin Resource Sharing with the followin
1. For **Application**, select the web application named *webapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
1. Click **Copy to clipboard**.
- ![Copy the authorization request URI](./media/custom-domain/user-flow-run-now.png)
+ ![Screenshot demonstrates how to copy the authorization request URI.](./media/custom-domain/user-flow-run-now.png)
-1. In the **Run user flow endpoint** URL, replace the Azure AD B2C domain (<tenant-name>.b2clogin.com) with your custom domain.
+1. In the **Run user flow endpoint** URL, replace the Azure AD B2C domain (_&lt;tenant-name&gt;_.b2clogin.com) with your custom domain.
For example, instead of:

In the following redirect URI:

```http
https://<custom-domain-name>/<tenant-name>/oauth2/authresp
```

-- Replace **<custom-domain-name>** with your custom domain name.
-- Replace **<tenant-name>** with the name of your tenant, or your tenant ID.
+- Replace **&lt;custom-domain-name&gt;** with your custom domain name.
+- Replace **&lt;tenant-name&gt;** with the name of your tenant, or your tenant ID.
The following example shows a valid OAuth redirect URI:
https://<domain-name>/11111111-1111-1111-1111-111111111111/v2.0/
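The domain swap described in this section amounts to replacing only the host portion of the endpoint URL. A minimal sketch in Python (the tenant and domain names are placeholders, not values from your tenant):

```python
from urllib.parse import urlparse, urlunparse

def use_custom_domain(url: str, custom_domain: str) -> str:
    """Replace the default <tenant-name>.b2clogin.com host with a custom domain."""
    parts = urlparse(url)
    return urlunparse(parts._replace(netloc=custom_domain))

endpoint = "https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize"
print(use_custom_domain(endpoint, "login.contoso.com"))
# → https://login.contoso.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize
```

Only the host changes; the path, including the tenant segment, stays the same.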
## Block access to the default domain name
-After you add the custom domain and configure your application, users will still be able to access the <tenant-name>.b2clogin.com domain. To prevent access, you can configure the policy to check the authorization request "host name" against an allowed list of domains. The host name is the domain name that appears in the URL. The host name is available through `{Context:HostName}` [claim resolvers](claim-resolver-overview.md). Then you can present a custom error message.
+After you add the custom domain and configure your application, users will still be able to access the &lt;tenant-name&gt;.b2clogin.com domain. To prevent access, you can configure the policy to check the authorization request "host name" against an allowed list of domains. The host name is the domain name that appears in the URL. The host name is available through `{Context:HostName}` [claim resolvers](claim-resolver-overview.md). Then you can present a custom error message.
1. Get the example of a conditional access policy that checks the host name from [GitHub](https://github.com/azure-ad-b2c/samples/blob/master/policies/check-host-name).
1. In each file, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is *contosob2c*, all instances of `yourtenant.onmicrosoft.com` become `contosob2c.onmicrosoft.com`.
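Step 2's bulk replacement can be scripted. A rough sketch, assuming the policy files sit together in one folder (the folder layout and `.xml` extension are assumptions):

```python
from pathlib import Path

def replace_tenant(folder: str, tenant: str) -> None:
    """Replace the 'yourtenant' placeholder in every policy XML file in a folder."""
    for path in Path(folder).glob("*.xml"):
        text = path.read_text(encoding="utf-8")
        path.write_text(text.replace("yourtenant", tenant), encoding="utf-8")
```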
To use your own web application firewall in front of Azure Front Door, you need
## Next steps
-Learn about [OAuth authorization requests](protocols-overview.md).
+Learn about [OAuth authorization requests](protocols-overview.md).
active-directory-b2c Enable Authentication Web Application Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-web-application-options.md
Previously updated : 07/05/2021 Last updated : 08/12/2021
You can pass parameters between your controller and the *OnRedirectToIdentityPro
If you want to customize the **Sign-in**, **Sign-up**, or **Sign-out** actions, you are encouraged to create your own controller. Having your own controller allows you to pass parameters between your controller and the authentication library. The `AccountController` is part of the `Microsoft.Identity.Web.UI` NuGet package, which handles the sign-in and sign-out actions. You can find its implementation in the [Microsoft Identity Web library](https://github.com/AzureAD/microsoft-identity-web/blob/master/src/Microsoft.Identity.Web.UI/Areas/MicrosoftIdentity/Controllers/AccountController.cs).
-The following code snippet demonstrates a custom `MyAccountController` with the **SignIn** action. The action passes a parameter named `campaign_id` to the authentication library.
+### Add the Account controller
+
+In your Visual Studio project, right-click the **Controllers** folder, and add a new **Controller**. Select **MVC - Empty Controller**, and provide the name **MyAccountController.cs**.
+
+The following code snippet demonstrates a custom `MyAccountController` with the **SignIn** action.
```csharp
using System;

namespace mywebapp.Controllers

        scheme ??= OpenIdConnectDefaults.AuthenticationScheme;
        var redirectUrl = Url.Content("~/");
        var properties = new AuthenticationProperties { RedirectUri = redirectUrl };
-        properties.Items["campaign_id"] = "1234";
        return Challenge(properties, scheme);
        }
    }
}
```

In the `_LoginPartial.cshtml` view, change the sign-in link to your controller
-```
+```html
<form method="get" asp-area="MicrosoftIdentity" asp-controller="MyAccount" asp-action="SignIn">
```
-In the `OnRedirectToIdentityProvider` in the `Startup.cs` calls, you can read the custom parameter:
+### Pass the Azure AD B2C policy ID
+
+The following code snippet demonstrates a custom `MyAccountController` with the **SignIn** and **SignUp** action. The action passes a parameter named `policy` to the authentication library. This allows you to provide the correct Azure AD B2C policy ID for the specific action.
+
+```csharp
+public IActionResult SignIn([FromRoute] string scheme)
+{
+ scheme ??= OpenIdConnectDefaults.AuthenticationScheme;
+ var redirectUrl = Url.Content("~/");
+ var properties = new AuthenticationProperties { RedirectUri = redirectUrl };
+ properties.Items["policy"] = "B2C_1_SignIn";
+ return Challenge(properties, scheme);
+}
+
+public IActionResult SignUp([FromRoute] string scheme)
+{
+ scheme ??= OpenIdConnectDefaults.AuthenticationScheme;
+ var redirectUrl = Url.Content("~/");
+ var properties = new AuthenticationProperties { RedirectUri = redirectUrl };
+ properties.Items["policy"] = "B2C_1_SignUp";
+ return Challenge(properties, scheme);
+}
+```
+
+In the `_LoginPartial.cshtml` view, change the `asp-controller` value to `MyAccountController` for any other authentication links, such as sign-up or edit profile.
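+For example, a sign-up link routed through the same controller might mirror the sign-in form shown earlier (a sketch that assumes the `SignUp` action above exists in your controller):
+
+```html
+<form method="get" asp-area="MicrosoftIdentity" asp-controller="MyAccount" asp-action="SignUp">
+```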
+
+### Pass custom parameters
+
+The following code snippet demonstrates a custom `MyAccountController` with the **SignIn** action. The action passes a parameter named `campaign_id` to the authentication library.
+
+```csharp
+public IActionResult SignIn([FromRoute] string scheme)
+{
+ scheme ??= OpenIdConnectDefaults.AuthenticationScheme;
+ var redirectUrl = Url.Content("~/");
+ var properties = new AuthenticationProperties { RedirectUri = redirectUrl };
+ properties.Items["policy"] = "B2C_1_SignIn";
+ properties.Items["campaign_id"] = "1234";
+ return Challenge(properties, scheme);
+}
+```
+
+Complete the [Support advanced scenarios](#support-advanced-scenarios) procedure. Then in the `OnRedirectToIdentityProvider` method, read the custom parameter:
```csharp
private async Task OnRedirectToIdentityProviderFunc(RedirectContext context)
```
active-directory-b2c Javascript And Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/javascript-and-page-layout.md
Previously updated : 06/07/2021 Last updated : 08/12/2021
zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)] --
+With Azure Active Directory B2C (Azure AD B2C) [HTML templates](customize-ui-with-html.md), you can craft your users' identity experiences. Your HTML templates can contain only certain HTML tags and attributes. Basic HTML tags, such as &lt;b&gt;, &lt;i&gt;, &lt;u&gt;, &lt;h1&gt;, and &lt;hr&gt; are allowed. More advanced tags such as &lt;script&gt;, and &lt;iframe&gt; are removed for security reasons.
-Azure AD B2C provides a set of packaged content containing HTML, CSS, and JavaScript for the user interface elements in your user flows and custom policies. To enable JavaScript for your applications:
+To enable JavaScript and advanced HTML tags and attributes:
::: zone pivot="b2c-user-flow"
active-directory-b2c Session Behavior https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/session-behavior.md
To change your session behavior and SSO configurations, you add a **UserJourneyB
## Enable Keep me signed in (KMSI)
-You can enable the KMSI feature for users of your web and native applications who have local accounts in your Azure AD B2C directory. When you enable the feature, users can opt to stay signed in so the session remains active after they close the browser. Then they can reopen the browser without being prompted to reenter their username and password. This access is revoked when a user signs out.
+You can enable the KMSI feature for users of your web and native applications who have local accounts in your Azure AD B2C directory. When you enable the feature, users can opt to stay signed in so the session remains active after they close the browser. The session is maintained by setting a [persistent cookie](cookie-definitions.md). Users who select KMSI can reopen the browser without being prompted to reenter their username and password. This access (persistent cookie) is revoked when a user signs out.
![Example sign-up sign-in page showing a Keep me signed in checkbox](./media/session-behavior/keep-me-signed-in.png)
active-directory Application Provisioning Config Problem Scim Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/application-provisioning-config-problem-scim-compatibility.md
Below are sample requests to help outline what the sync engine currently sends v
  ],
  "Operations": [
    {
-      "op": "replace",
-      "path": "active",
-      "value": false
+      "op": "add",
+      "value": {
+        "nickName": "Babs"
+      }
    }
  ]
}
active-directory Application Proxy Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-configure-custom-domain.md
Previously updated : 10/24/2019 Last updated : 08/12/2021
If you're not able to make the internal and external URLs match, it's not as imp
There are several options for setting up your DNS configuration, depending on your requirements:

### Same internal and external URL, different internal and external behavior

If you don't want your internal users to be directed through the Application Proxy, you can set up a *split-brain DNS*. A split DNS infrastructure directs internal hosts to an internal domain name server, and external hosts to an external domain name server, for name resolution.

![Split-brain DNS](./media/application-proxy-configure-custom-domain/split-brain-dns.png)

### Different internal and external URLs

If the internal and external URLs are different, you don't need to configure split-brain behavior, because user routing is determined by the URL. In this case, you change only the external DNS, and route the external URL to the Application Proxy endpoint.
active-directory Concepts Azure Multi Factor Authentication Prompts Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md
Previously updated : 08/31/2020 Last updated : 08/12/2021
To give your users the right balance of security and ease of use by asking them
* If you have Azure AD Premium:
  * Enable single sign-on (SSO) across applications using [managed devices](../devices/overview.md) or [Seamless SSO](../hybrid/how-to-connect-sso.md).
- * If reauthentication is required, use a Conditional Access [sign-in Frequency policy](../conditional-access/howto-conditional-access-session-lifetime.md).
- * For users that sign in from non-managed devices or mobile device scenarios, use Conditional Access to enable persistent browser sessions and sign-in frequency policies.
+ * If reauthentication is required, use a Conditional Access [sign-in frequency policy](../conditional-access/howto-conditional-access-session-lifetime.md).
+ * For users that sign in from non-managed devices or mobile device scenarios, persistent browser sessions may not be preferable, or you might use Conditional Access to enable persistent browser sessions with sign-in frequency policies. Limit the duration to an appropriate time based on the sign-in risk, where a user with less risk has a longer session duration.
* If you have Microsoft 365 apps licenses or the free Azure AD tier:
  * Enable single sign-on (SSO) across applications using [managed devices](../devices/overview.md) or [Seamless SSO](../hybrid/how-to-connect-sso.md).
  * Keep the *Remain signed-in* option enabled and guide your users to accept it.
active-directory Scenario Protected Web Api Verification Scope App Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-protected-web-api-verification-scope-app-roles.md
To protect an ASP.NET or ASP.NET Core web API, you must add the `[Authorize]` at
```csharp
[Authorize]
public class TodoListController : Controller
{
- ...
+ // ...
}
```
```csharp
public class TodoListController : Controller
        // Do the work and return the result.
        // ...
    }
- ...
+ // ...
}
```
public class TodoListController : Controller
Defining granular scopes for your web API and verifying the scopes in each controller action is the recommended approach. However, it's also possible to verify the scopes at the level of the application or a controller. For details, see [Claim-based authorization](/aspnet/core/security/authorization/claims) in the ASP.NET Core documentation.
-#### What is verified?
+#### What is verified?
The `[RequiredScope]` attribute and `VerifyUserHasAnyAcceptedScope` method perform steps like the following:
```csharp
public class TodoListController : ApiController
public IEnumerable<TodoItem> Get()
{
    ValidateScopes(new[] { "read", "admin" });
- ...
+ // ...
}
```
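A rough model of that check, expressed in Python for illustration (this is not the library's implementation; it assumes the token's scopes arrive as a space-separated `scp` claim):

```python
def verify_scopes(token_claims: dict, accepted_scopes: list) -> bool:
    """Rough model: succeed if the token's scope claim contains any accepted scope."""
    scp = token_claims.get("scp", "")
    token_scopes = set(scp.split())  # scopes are space-separated in the claim
    return bool(token_scopes & set(accepted_scopes))

print(verify_scopes({"scp": "read write"}, ["read", "admin"]))  # → True
print(verify_scopes({"scp": "profile"}, ["read", "admin"]))     # → False
```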
```csharp
public class TodoListController : ApiController
public IEnumerable<TodoItem> Get()
{
    HttpContext.ValidateAppRole("access_as_application");
- ...
+ // ...
}
```
Instead, you can use the [Authorize("role")] attributes on the controller or an
```csharp
[Authorize("role")]
MyController : ApiController
{
- //
+ // ...
}
```
```csharp
public class TodoListController : ApiController
public IEnumerable<TodoItem> Get()
{
    ValidateAppRole("access_as_application");
- ...
+ // ...
}
```
If you want only daemon apps to call your web API, add the condition that the to
```csharp
string oid = ClaimsPrincipal.Current.FindFirst("oid")?.Value;
string sub = ClaimsPrincipal.Current.FindFirst("sub")?.Value;
-bool isAppOnlyToken = oid == sub;
+bool isAppOnly = oid != null && sub != null && oid == sub;
```

Checking the inverse condition allows only apps that sign in a user to call your API.
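The same comparison can be modeled outside C#; a Python sketch with made-up claim values:

```python
def is_app_only_token(claims: dict) -> bool:
    """App-only (daemon) tokens have oid == sub; delegated user tokens differ."""
    oid, sub = claims.get("oid"), claims.get("sub")
    return oid is not None and sub is not None and oid == sub

app_claims = {"oid": "11111111-aaaa", "sub": "11111111-aaaa"}
user_claims = {"oid": "11111111-aaaa", "sub": "22222222-bbbb"}
print(is_app_only_token(app_claims))   # → True
print(is_app_only_token(user_claims))  # → False
```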
System.UnauthorizedAccessException: IDW10201: Neither scope or roles claim was f
```json
{
  "AzureAD":
- {
+ {
    // other properties
    "AllowWebApiToBeAuthorizedByACL" : true,
    // other properties
  }
}
```
active-directory V2 Permissions And Consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-permissions-and-consent.md
In OAuth 2.0, these types of permission sets are called *scopes*. They're also o
An app most commonly requests these permissions by specifying the scopes in requests to the Microsoft identity platform authorize endpoint. However, some high-privilege permissions can be granted only through administrator consent. They can be requested or granted by using the [administrator consent endpoint](#admin-restricted-permissions). Keep reading to learn more.
+In requests to the authorization, token, or consent endpoints for the Microsoft identity platform, if the resource identifier is omitted in the scope parameter, the resource is assumed to be Microsoft Graph. For example, `scope=User.Read` is equivalent to `scope=https://graph.microsoft.com/User.Read`.
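That defaulting convention can be illustrated with a small helper (a sketch of the rule described above, not platform code; treating `api://` as an already-qualified resource prefix is an assumption):

```python
GRAPH = "https://graph.microsoft.com"

def qualify_scope(scope: str) -> str:
    """Prefix a bare scope with the Microsoft Graph resource identifier."""
    if scope.lower().startswith(("https://", "api://")):
        return scope  # already carries a resource identifier
    return f"{GRAPH}/{scope}"

print(qualify_scope("User.Read"))
# → https://graph.microsoft.com/User.Read
```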
+
## Permission types

The Microsoft identity platform supports two types of permissions: *delegated permissions* and *application permissions*.
active-directory Howto Device Identity Virtual Desktop Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-device-identity-virtual-desktop-infrastructure.md
When deploying non-persistent VDI, Microsoft recommends that IT administrators i
> * `HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\AAD`
> * `HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WorkplaceJoin`
>
-> Romaing of the work account's device certificate is not supported. The certificate, issued by "MS-Organization-Access", is stored in the Personal (MY) certificate store of the current user.
+> Roaming of the work account's device certificate is not supported. The certificate, issued by "MS-Organization-Access", is stored in the Personal (MY) certificate store of the current user and on the local machine.
### Persistent VDI
active-directory Assign Roles Different Scopes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/assign-roles-different-scopes.md
+
+ Title: Assign Azure AD roles at different scopes - Azure Active Directory
+description: Learn how to assign roles at different scopes in Azure Active Directory
+ Last updated : 08/12/2021
+# Assign Azure AD roles at different scopes
+
+This article describes how to assign Azure AD roles at different scopes. To understand scoping in Azure AD, see [Overview of RBAC in Azure AD](custom-overview.md). In general, you must be within the scope that you want the role assignment to be limited to. For example, if you want to assign the Helpdesk Administrator role scoped to an [administrative unit](administrative-units.md), go to **Azure AD > Administrative Units > {administrative unit} > Roles and administrators** and then do the role assignment. This creates a role assignment scoped to the administrative unit, not the entire tenant.
+
+## Prerequisites
+
+- Privileged Role Administrator or Global Administrator.
+- AzureADPreview module when using PowerShell.
+- Admin consent when using Graph explorer for Microsoft Graph API.
+
+For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
+
+## Assign roles scoped to the tenant
+
+This section describes how to assign roles at the tenant scope.
+
+### Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
+
+1. Select **Azure Active Directory** > **Roles and administrators** to see the list of all available roles.
+
+ ![Roles and administrators page in Azure Active Directory.](./media/manage-roles-portal/roles-and-administrators.png)
+
+1. Select a role to see its assignments. To help you find the role you need, use **Add filters** to filter the roles.
+
+1. Select **Add assignments** and then select the users you want to assign to this role.
+
+ ![Add assignments pane for selected role.](./media/manage-roles-portal/add-assignments.png)
+
+1. Select **Add** to assign the role.
+
+### PowerShell
+
+Follow these steps to assign Azure AD roles using PowerShell.
+
+1. Open a PowerShell window and use [Import-Module](/powershell/module/microsoft.powershell.core/import-module) to import the AzureADPreview module. For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
+
+ ```powershell
+ Import-Module -Name AzureADPreview -Force
+ ```
+
+1. In a PowerShell window, use [Connect-AzureAD](/powershell/module/azuread/connect-azuread) to sign in to your tenant.
+
+ ```powershell
+ Connect-AzureAD
+ ```
+
+1. Use [Get-AzureADUser](/powershell/module/azuread/get-azureaduser) to get the user.
+
+ ```powershell
+ $user = Get-AzureADUser -Filter "userPrincipalName eq 'alice@contoso.com'"
+ ```
+
+1. Use [Get-AzureADMSRoleDefinition](/powershell/module/azuread/get-azureadmsroledefinition) to get the role you want to assign.
+
+ ```powershell
+ $roleDefinition = Get-AzureADMSRoleDefinition -Filter "displayName eq 'Billing Administrator'"
+ ```
+
+1. Set the tenant as the scope of the role assignment.
+
+ ```powershell
+ $directoryScope = '/'
+ ```
+
+1. Use [New-AzureADMSRoleAssignment](/powershell/module/azuread/new-azureadmsroleassignment) to assign the role.
+
+ ```powershell
+ $roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId $directoryScope -RoleDefinitionId $roleDefinition.Id -PrincipalId $user.objectId
+ ```
+
+### Microsoft Graph API
+
+Follow these instructions to assign a role using the Microsoft Graph API in [Graph Explorer](https://aka.ms/ge).
+
+1. Sign in to the [Graph Explorer](https://aka.ms/ge).
+
+1. Use the [List user](/graph/api/user-list) API to get the user.
+
+ ```HTTP
+ GET https://graph.microsoft.com/beta/users?$filter=userPrincipalName eq 'alice@contoso.com'
+ ```
+
+1. Use the [List roleDefinitions](/graph/api/rbacapplication-list-roledefinitions) API to get the role you want to assign.
+
+ ```HTTP
+ GET https://graph.microsoft.com/beta/rolemanagement/directory/roleDefinitions?$filter=displayName eq 'Billing Administrator'
+ ```
+
+1. Use the [Create roleAssignments](/graph/api/rbacapplication-post-roleassignments) API to assign the role.
+
+ ```HTTP
+ POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
+ {
+ "principalId": "<provide objectId of the user obtained above>",
+ "roleDefinitionId": "<provide templateId of the role obtained above>",
+ "directoryScopeId": "/"
+ }
+ ```
+
+## Assign roles scoped to an administrative unit
+
+### Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
+
+1. Select **Azure Active Directory > Administrative units** to see the list of all administrative units.
+
+1. Select an administrative unit.
+
+ ![Administrative Units in Azure Active Directory.](./media/assign-roles-different-scopes/admin-units.png)
+
+1. Select **Roles and administrators** from the left nav menu to see the list of all roles available to be assigned over an administrative unit.
+
+ ![Roles and administrators menu under administrative Units in Azure Active Directory.](./media/assign-roles-different-scopes/admin-units-roles.png)
+
+1. Select the desired role.
+
+1. Select **Add assignments** and then select the users or group you want to assign this role to.
+
+1. Select **Add** to assign the role scoped over the administrative unit.
+
+>[!Note]
+>You will not see the entire list of Azure AD built-in or custom roles here. This is expected. Only the roles that have permissions related to the objects supported within the administrative unit are shown. For the list of objects supported within an administrative unit, see [Administrative units in Azure Active Directory](administrative-units.md).
+
+### PowerShell
+
+Follow these steps to assign Azure AD roles at administrative unit scope using PowerShell.
+
+1. Open a PowerShell window and use [Import-Module](/powershell/module/microsoft.powershell.core/import-module) to import the AzureADPreview module. For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
+
+ ```powershell
+ Import-Module -Name AzureADPreview -Force
+ ```
+
+1. In a PowerShell window, use [Connect-AzureAD](/powershell/module/azuread/connect-azuread) to sign in to your tenant.
+
+ ```powershell
+ Connect-AzureAD
+ ```
+
+1. Use [Get-AzureADUser](/powershell/module/azuread/get-azureaduser) to get the user.
+
+ ```powershell
+ $user = Get-AzureADUser -Filter "userPrincipalName eq 'alice@contoso.com'"
+ ```
+
+1. Use [Get-AzureADMSRoleDefinition](/powershell/module/azuread/get-azureadmsroledefinition) to get the role you want to assign.
+
+ ```powershell
+ $roleDefinition = Get-AzureADMSRoleDefinition -Filter "displayName eq 'User Administrator'"
+ ```
+
+1. Use [Get-AzureADMSAdministrativeUnit](/powershell/module/azuread/get-azureadmsadministrativeunit) to get the administrative unit you want the role assignment to be scoped to.
+
+ ```powershell
+ $adminUnit = Get-AzureADMSAdministrativeUnit -Filter "displayName eq 'Seattle Admin Unit'"
+ $directoryScope = '/administrativeUnits/' + $adminUnit.Id
+ ```
+
+1. Use [New-AzureADMSRoleAssignment](/powershell/module/azuread/new-azureadmsroleassignment) to assign the role.
+
+ ```powershell
+ $roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId $directoryScope -RoleDefinitionId $roleDefinition.Id -PrincipalId $user.objectId
+ ```
+
+### Microsoft Graph API
+
+Follow these instructions to assign a role at administrative unit scope using the Microsoft Graph API in [Graph Explorer](https://aka.ms/ge).
+
+1. Sign in to the [Graph Explorer](https://aka.ms/ge).
+
+1. Use the [List user](/graph/api/user-list) API to get the user.
+
+ ```HTTP
+ GET https://graph.microsoft.com/beta/users?$filter=userPrincipalName eq 'alice@contoso.com'
+ ```
+
+1. Use the [List roleDefinitions](/graph/api/rbacapplication-list-roledefinitions) API to get the role you want to assign.
+
+ ```HTTP
+ GET https://graph.microsoft.com/beta/rolemanagement/directory/roleDefinitions?$filter=displayName eq 'User Administrator'
+ ```
+
+1. Use the [List administrativeUnits](/graph/api/administrativeunit-list) API to get the administrative unit you want the role assignment to be scoped to.
+
+ ```HTTP
+ GET https://graph.microsoft.com/beta/administrativeUnits?$filter=displayName eq 'Seattle Admin Unit'
+ ```
+
+1. Use the [Create roleAssignments](/graph/api/rbacapplication-post-roleassignments) API to assign the role.
+
+ ```HTTP
+ POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
+ {
+ "principalId": "<provide objectId of the user obtained above>",
+ "roleDefinitionId": "<provide templateId of the role obtained above>",
+ "directoryScopeId": "/administrativeUnits/<provide objectId of the admin unit obtained above>"
+ }
+ ```
+
+>[!Note]
+>Here, directoryScopeId is specified as */administrativeUnits/foo* instead of */foo*. This is by design. The scope */administrativeUnits/foo* means the principal can manage the members of the administrative unit (based on the role they are assigned), not the administrative unit itself. A scope of */foo* means the principal can manage that Azure AD object itself. In the next section, you will see that the scope is */foo*, because a role scoped to an app registration grants the privilege to manage the object itself.
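The three `directoryScopeId` formats used across this article can be summarized in a short sketch (Python here for illustration only; the helper names are ours, not part of any Azure SDK, and the IDs in the usage lines are hypothetical placeholders):

```python
# Build the directoryScopeId string for each scope type used in this article.

def tenant_scope():
    # '/' scopes the assignment to the entire tenant.
    return "/"

def admin_unit_scope(admin_unit_id):
    # '/administrativeUnits/{id}' lets the principal manage the unit's
    # members (per the assigned role), not the unit object itself.
    return f"/administrativeUnits/{admin_unit_id}"

def object_scope(object_id):
    # '/{id}' lets the principal manage that Azure AD object itself
    # (for example, an app registration).
    return f"/{object_id}"

print(tenant_scope())
print(admin_unit_scope("5458b5ae-0000-0000-0000-000000000000"))  # hypothetical ID
print(object_scope("11111111-2222-3333-4444-555555555555"))      # hypothetical ID
```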
+
+## Assign roles scoped to an app registration
+
+### Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
+
+1. Select **Azure Active Directory > App registrations** to see the list of all app registrations.
+
+1. Select an application. You can use the search box to find the desired app.
+
+ ![App registrations in Azure Active Directory.](./media/assign-roles-different-scopes/app-reg.png)
+
+1. Select **Roles and administrators** from the left nav menu to see the list of all roles available to be assigned over the app registration.
+
+ ![Roles for an app registrations in Azure Active Directory.](./media/assign-roles-different-scopes/app-reg-roles.png)
+
+1. Select the desired role.
+
+1. Select **Add assignments** and then select the users or group you want to assign this role to.
+
+ ![Add role assignment scoped to an app registrations in Azure Active Directory.](./media/assign-roles-different-scopes/app-reg-add-assignment.png)
+
+1. Select **Add** to assign the role scoped over the app registration.
+
+ ![Successfully added role assignment scoped to an app registrations in Azure Active Directory.](./media/assign-roles-different-scopes/app-reg-assignment.png)
+
+ ![Role assigned to the user scoped to an app registrations in Azure Active Directory.](./media/assign-roles-different-scopes/app-reg-scoped-assignment.png)
+
+
+>[!Note]
+>You will not see the entire list of Azure AD built-in or custom roles here. This is expected. Only the roles that have permissions related to managing app registrations are shown.
+
+### PowerShell
+
+Follow these steps to assign Azure AD roles at application scope using PowerShell.
+
+1. Open a PowerShell window and use [Import-Module](/powershell/module/microsoft.powershell.core/import-module) to import the AzureADPreview module. For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
+
+ ```powershell
+ Import-Module -Name AzureADPreview -Force
+ ```
+
+1. In a PowerShell window, use [Connect-AzureAD](/powershell/module/azuread/connect-azuread) to sign in to your tenant.
+
+ ```powershell
+ Connect-AzureAD
+ ```
+
+1. Use [Get-AzureADUser](/powershell/module/azuread/get-azureaduser) to get the user.
+
+ ```powershell
+ $user = Get-AzureADUser -Filter "userPrincipalName eq 'alice@contoso.com'"
+ ```
+
+1. Use [Get-AzureADMSRoleDefinition](/powershell/module/azuread/get-azureadmsroledefinition) to get the role you want to assign.
+
+ ```powershell
+ $roleDefinition = Get-AzureADMSRoleDefinition -Filter "displayName eq 'Application Administrator'"
+ ```
+
+1. Use [Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication) to get the app registration you want the role assignment to be scoped to.
+
+ ```powershell
+ $appRegistration = Get-AzureADApplication -Filter "displayName eq 'f/128 Filter Photos'"
+ $directoryScope = '/' + $appRegistration.objectId
+ ```
+
+1. Use [New-AzureADMSRoleAssignment](/powershell/module/azuread/new-azureadmsroleassignment) to assign the role.
+
+ ```powershell
+ $roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId $directoryScope -RoleDefinitionId $roleDefinition.Id -PrincipalId $user.objectId
+ ```
+
+### Microsoft Graph API
+
+Follow these instructions to assign a role at application scope using the Microsoft Graph API in [Graph Explorer](https://aka.ms/ge).
+
+1. Sign in to the [Graph Explorer](https://aka.ms/ge).
+
+1. Use the [List user](/graph/api/user-list) API to get the user.
+
+ ```HTTP
+ GET https://graph.microsoft.com/beta/users?$filter=userPrincipalName eq 'alice@contoso.com'
+ ```
+
+1. Use the [List roleDefinitions](/graph/api/rbacapplication-list-roledefinitions) API to get the role you want to assign.
+
+ ```HTTP
+ GET https://graph.microsoft.com/beta/rolemanagement/directory/roleDefinitions?$filter=displayName eq 'Application Administrator'
+ ```
+
+1. Use the [List applications](/graph/api/application-list) API to get the app registration you want the role assignment to be scoped to.
+
+ ```HTTP
+ GET https://graph.microsoft.com/beta/applications?$filter=displayName eq 'f/128 Filter Photos'
+ ```
+
+1. Use the [Create roleAssignments](/graph/api/rbacapplication-post-roleassignments) API to assign the role.
+
+ ```HTTP
+ POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
+ {
+ "principalId": "<provide objectId of the user obtained above>",
+ "roleDefinitionId": "<provide templateId of the role obtained above>",
+ "directoryScopeId": "/<provide objectId of the app registration obtained above>"
+ }
+ ```
+
+>[!Note]
+>Here, directoryScopeId is specified as */foo*, unlike the previous section. This is by design. A scope of */foo* means the principal can manage that Azure AD object. The scope */administrativeUnits/foo* means the principal can manage the members of the administrative unit (based on the role they are assigned), not the administrative unit itself.
+
+## Next steps
+
+* [List Azure AD role assignments](view-assignments.md).
+* [Assign Azure AD roles to users](manage-roles-portal.md).
+* [Assign Azure AD roles to groups](groups-assign-role.md)
active-directory List Role Assignments Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/list-role-assignments-users.md
+
+ Title: List Azure AD role assignments for a user - Azure Active Directory
+description: Learn how to list Azure AD role assignments of a user
+Last updated: 08/12/2021
+# List Azure AD role assignments for a user
+
+A role can be assigned to a user directly or transitively via a group. This article describes how to list the Azure AD roles assigned to a user. For information about assigning roles to groups, see [Use Azure AD groups to manage role assignments](groups-concept.md).
+
+## Prerequisites
+
+- AzureADPreview module when using PowerShell
+- Microsoft.Graph module when using PowerShell
+- Admin consent when using Graph Explorer for Microsoft Graph API
+
+For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
+
+## Azure portal
+
+Follow these steps to list Azure AD roles for a user using the Azure portal. Your experience will be different depending on whether you have [Azure AD Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md) enabled.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
+
+2. Select **Azure Active Directory** > **Users** > *user name* > **Assigned roles**.
+
+    You can see the list of roles assigned to the user at different scopes. Additionally, you can see whether the role has been assigned directly or via a group.
+
+ ![list of roles assigned to a user in Azure portal](./media/list-role-assignments-users/list-role-definition.png)
+
+ If you have a Premium P2 license, you will see the PIM experience, which has eligible, active, and expired role assignment details.
+
+ ![list of roles assigned to a user in PIM](./media/list-role-assignments-users/list-role-definition-pim.png)
+
+## PowerShell
+
+Follow these steps to list Azure AD roles assigned to a user using PowerShell.
+
+1. Install the AzureADPreview and Microsoft.Graph modules using [Install-Module](/powershell/azure/active-directory/install-adv2).
+
+ ```powershell
+    Install-Module -Name AzureADPreview
+    Install-Module -Name Microsoft.Graph
+ ```
+
+2. Open a PowerShell window and use [Import-Module](/powershell/module/microsoft.powershell.core/import-module) to import the AzureADPreview module. For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
+
+ ```powershell
+ Import-Module -Name AzureADPreview -Force
+ ```
+
+3. In a PowerShell window, use [Connect-AzureAD](/powershell/module/azuread/connect-azuread) to sign in to your tenant.
+
+ ```powershell
+ Connect-AzureAD
+ ```
+
+4. Use [Get-AzureADMSRoleAssignment](/powershell/module/azuread/get-azureadmsroleassignment) to get roles assigned directly to a user.
+
+ ```powershell
+ #Get the user
+ $userId = (Get-AzureADUser -Filter "userPrincipalName eq 'alice@contoso.com'").ObjectId
+
+ #Get direct role assignments to the user
+ $directRoles = (Get-AzureADMSRoleAssignment -Filter "principalId eq '$userId'").RoleDefinitionId
+ ```
+
+5. To get transitive roles assigned to the user, use the following cmdlets.
+
+    a. Use [Get-AzureADMSGroup](/powershell/module/azuread/get-azureadmsgroup) to get the list of all role-assignable groups.
+
+ ```powershell
+    $roleAssignableGroups = (Get-AzureADMSGroup -All $true | Where-Object IsAssignableToRole -EQ 'True').Id
+ ```
+
+ b. Use [Connect-MgGraph](/graph/powershell/get-started) to sign into and use Microsoft Graph PowerShell cmdlets.
+
+ ```powershell
+    Connect-MgGraph -Scopes "User.Read.All"
+ ```
+
+    c. Use the [checkMemberObjects](/graph/api/user-checkmemberobjects) API to determine which of the role-assignable groups the user is a member of.
+
+ ```powershell
+ $uri = "https://graph.microsoft.com/beta/directoryObjects/$userId/microsoft.graph.checkMemberObjects"
+
+ $userRoleAssignableGroups = (Invoke-MgGraphRequest -Method POST -Uri $uri -Body @{"ids"= $roleAssignableGroups}).value
+ ```
+
+ d. Use [Get-AzureADMSRoleAssignment](/powershell/module/azuread/get-azureadmsroleassignment) to loop through the groups and get the roles assigned to them.
+
+ ```powershell
+ $transitiveRoles=@()
+ foreach($item in $userRoleAssignableGroups){
+ $transitiveRoles += (Get-AzureADMSRoleAssignment -Filter "principalId eq '$item'").RoleDefinitionId
+ }
+ ```
+
+6. Combine both direct and transitive role assignments of the user.
+
+ ```powershell
+ $allRoles = $directRoles + $transitiveRoles
+ ```
+
+## Microsoft Graph API
+
+Follow these steps to list Azure AD roles assigned to a user using the Microsoft Graph API in [Graph Explorer](https://aka.ms/ge).
+
+1. Sign in to the [Graph Explorer](https://aka.ms/ge).
+
+1. Use the [List roleAssignments](/graph/api/rbacapplication-list-roleassignments) API to get the roles assigned directly to a user. Add the following query to the URL and select **Run query**.
+
+ ```HTTP
+ GET https://graph.microsoft.com/beta/rolemanagement/directory/roleAssignments?$filter=principalId eq '55c07278-7109-4a46-ae60-4b644bc83a31'
+ ```
+
+3. To get transitive roles assigned to the user, follow these steps.
+
+    a. Use [List groups](/graph/api/group-list) to get the list of all role-assignable groups.
+
+ ```HTTP
+ GET https://graph.microsoft.com/beta/groups?$filter=isAssignableToRole eq true
+ ```
+
+    b. Pass this list to the [checkMemberObjects](/graph/api/user-checkmemberobjects) API to determine which of the role-assignable groups the user is a member of.
+
+ ```HTTP
+ POST https://graph.microsoft.com/beta/users/55c07278-7109-4a46-ae60-4b644bc83a31/checkMemberObjects
+ {
+ "ids": [
+ "936aec09-47d5-4a77-a708-db2ff1dae6f2",
+ "5425a4a0-8998-45ca-b42c-4e00920a6382",
+ "ca9631ad-2d2a-4a7c-88b7-e542bd8a7e12",
+ "ea3cee12-360e-411d-b0ba-2173181daa76",
+ "c3c263bb-b796-48ee-b4d2-3fbc5be5f944"
+ ]
+ }
+ ```
+
+ c. Use [List roleAssignments](/graph/api/rbacapplication-list-roleassignments) API to loop through the groups and get the roles assigned to them.
+
+ ```HTTP
+ GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments?$filter=principalId eq '5425a4a0-8998-45ca-b42c-4e00920a6382'
+ ```
+
+## Next steps
+
+* [List Azure AD role assignments](view-assignments.md).
+* [Assign Azure AD roles to users](manage-roles-portal.md).
+* [Assign Azure AD roles to groups](groups-assign-role.md)
active-directory Bis Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/bis-tutorial.md
Previously updated : 09/17/2019 Last updated : 08/06/2021
In this tutorial, you'll learn how to integrate BIS with Azure Active Directory
* Enable your users to be automatically signed-in to BIS with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* BIS supports **SP** initiated SSO
+* BIS supports **SP** initiated SSO.
-* BIS supports **Just In Time** user provisioning
+* BIS supports **Just In Time** user provisioning.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding BIS from the gallery
+## Add BIS from the gallery
To configure the integration of BIS into Azure AD, you need to add BIS from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **BIS** in the search box. 1. Select **BIS** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for BIS
+## Configure and test Azure AD SSO for BIS
Configure and test Azure AD SSO with BIS using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in BIS.
-To configure and test Azure AD SSO with BIS, complete the following building blocks:
+To configure and test Azure AD SSO with BIS, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with BIS, complete the following building blo
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **BIS** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **BIS** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following step:
- In the **Sign-on URL** text box, type a URL:
+ In the **Sign-on URL** text box, type the URL:
`https://www.bistrainer.com/sso/biscr.cfm` 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **BIS**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, a user called B.Simon is created in BIS. BIS supports just-in-t
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the BIS Sign-on URL where you can initiate the login flow.
-When you click the BIS tile in the Access Panel, you should be automatically signed in to the BIS for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Go to the BIS Sign-on URL directly and initiate the login flow from there.
-## Additional resources
+#### IDP initiated:
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the BIS for which you set up the SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the BIS tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you are automatically signed in to the BIS for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try BIS with Azure AD](https://aad.portal.azure.com/)
+Once you configure BIS you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory C3m Cloud Control Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/c3m-cloud-control-tutorial.md
Previously updated : 07/17/2020 Last updated : 08/11/2021
In this tutorial, you'll learn how to integrate C3M Cloud Control with Azure Act
* Enable your users to be automatically signed-in to C3M Cloud Control with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* C3M Cloud Control supports **SP** initiated SSO
-* C3M Cloud Control supports **Just In Time** user provisioning
-* Once you configure C3M Cloud Control you can enforce session control, which protect exfiltration and infiltration of your organizationΓÇÖs sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+* C3M Cloud Control supports **SP** initiated SSO.
+* C3M Cloud Control supports **Just In Time** user provisioning.
-## Adding C3M Cloud Control from the gallery
+## Add C3M Cloud Control from the gallery
To configure the integration of C3M Cloud Control into Azure AD, you need to add C3M Cloud Control from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **C3M Cloud Control** in the search box. 1. Select **C3M Cloud Control** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - ## Configure and test Azure AD SSO for C3M Cloud Control Configure and test Azure AD SSO with C3M Cloud Control using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in C3M Cloud Control.
-To configure and test Azure AD SSO with C3M Cloud Control, complete the following building blocks:
+To configure and test Azure AD SSO with C3M Cloud Control, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with C3M Cloud Control, complete the followin
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **C3M Cloud Control** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **C3M Cloud Control** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<C3MCLOUDCONTROL_ACCESS_URL>`
+1. On the **Basic SAML Configuration** section, perform the following steps:
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://<C3MCLOUDCONTROL_ACCESS_URL>/api/sso/saml`
- c. In the **Reply URL** text box, type a URL using the following pattern:
+ b. In the **Reply URL** text box, type a URL using the following pattern:
`https://<C3MCLOUDCONTROL_ACCESS_URL>/api/sso/saml`
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<C3MCLOUDCONTROL_ACCESS_URL>`
+ > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL, Reply URL and Identifier. Contact [C3M Cloud Control Client support team](mailto:support@c3m.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+    > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign on URL. Contact [C3M Cloud Control Client support team](mailto:support@c3m.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **C3M Cloud Control**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, a user called B.Simon is created in C3M Cloud Control. C3M Clou
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the C3M Cloud Control tile in the Access Panel, you should be automatically signed in to the C3M Cloud Control for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-
-## Additional resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Click on **Test this application** in the Azure portal. This will redirect you to the C3M Cloud Control Sign-on URL, where you can initiate the login flow.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Go to the C3M Cloud Control Sign-on URL directly and initiate the login flow from there.
-- [Try C3M Cloud Control with Azure AD](https://aad.portal.azure.com/)
+* You can use Microsoft My Apps. When you click the C3M Cloud Control tile in My Apps, you will be redirected to the C3M Cloud Control Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect C3M Cloud Control with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure C3M Cloud Control, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Checkproof Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/checkproof-tutorial.md
Previously updated : 10/23/2020 Last updated : 08/06/2021
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* CheckProof supports **IDP** initiated SSO
+* CheckProof supports **IDP** initiated SSO.
-## Adding CheckProof from the gallery
+## Add CheckProof from the gallery
To configure the integration of CheckProof into Azure AD, you need to add CheckProof from the gallery to your list of managed SaaS apps.
To configure the integration of CheckProof into Azure AD, you need to add CheckP
1. In the **Add from the gallery** section, type **CheckProof** in the search box.
1. Select **CheckProof** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.

## Configure and test Azure AD SSO for CheckProof

Configure and test Azure AD SSO with CheckProof using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in CheckProof.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **CheckProof** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Set up single sign-on with SAML** page, enter the values for the following fields:
+1. On the **Set up single sign-on with SAML** page, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://api.checkproof.com/api/v1/saml/<ID>/metadata`
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Go to the **Settings > Company Settings > SAML SETTINGS** page and upload the **Federation Metadata XML** in the **Federation XML** textbox.
- ![saml settings page](./media/checkproof-tutorial/saml-settings.png)
+ ![SAML settings page.](./media/checkproof-tutorial/settings.png)
### Create CheckProof test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Click on **Profile** and select **My profile**.
- ![CheckProof test user page](./media/checkproof-tutorial/create-user.png)
+ ![CheckProof test user page.](./media/checkproof-tutorial/create-user.png)
1. Click on **CREATE USER**.
1. In the **CREATE USER** page, fill the required fields and click on **SAVE**.
- ![CheckProof test user1 page](./media/checkproof-tutorial/create-user-2.png)
+ ![CheckProof create user page.](./media/checkproof-tutorial/user.png)
## Test SSO

In this section, you test your Azure AD single sign-on configuration with the following options.
-1. Click on Test this application in Azure portal and you should be automatically signed in to the CheckProof for which you set up the SSO
-
-1. You can use Microsoft Access Panel. When you click the CheckProof tile in the Access Panel, you should be automatically signed in to the CheckProof for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the CheckProof application for which you set up the SSO.
+* You can use Microsoft My Apps. When you click the CheckProof tile in My Apps, you should be automatically signed in to the CheckProof application for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure CheckProof you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once you configure CheckProof you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Exponenthr Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/exponenthr-tutorial.md
Previously updated : 10/04/2019 Last updated : 08/06/2021
In this tutorial, you'll learn how to integrate ExponentHR with Azure Active Dir
* Enable your users to be automatically signed-in to ExponentHR with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites

To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* ExponentHR supports **SP** initiated SSO
+* ExponentHR supports **SP** initiated SSO.
-* ExponentHR supports **WS-Fed** protocol
+* ExponentHR supports **WS-Fed** protocol.
> [!NOTE]
> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding ExponentHR from the gallery
+## Add ExponentHR from the gallery
To configure the integration of ExponentHR into Azure AD, you need to add ExponentHR from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **ExponentHR** in the search box.
1. Select **ExponentHR** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for ExponentHR
+## Configure and test Azure AD SSO for ExponentHR
Configure and test Azure AD SSO with ExponentHR using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ExponentHR.
-To configure and test Azure AD SSO with ExponentHR, complete the following building blocks:
+To configure and test Azure AD SSO with ExponentHR, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with ExponentHR, complete the following build
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **ExponentHR** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **ExponentHR** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following step:
- In the **Sign-on URL** text box, type a URL using the following pattern:
+ In the **Sign-on URL** text box, type the URL:
`https://www.exponenthr.com/service/saml/login.aspx`

1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **ExponentHR**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called B.Simon in ExponentHR. Work with [Exp
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the ExponentHR tile in the Access Panel, you should be automatically signed in to the ExponentHR for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional resources
+* Click on **Test this application** in the Azure portal. This will redirect you to the ExponentHR Sign-on URL, where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to the ExponentHR Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the ExponentHR tile in My Apps, you will be redirected to the ExponentHR Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try ExponentHR with Azure AD](https://aad.portal.azure.com/)
+Once you configure ExponentHR, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Knowledgeowl Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/knowledgeowl-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure KnowledgeOwl SSO
-1. In a different web browser window, sign into your KnowledgeOwl company site as an administrator.
+1. In a different web browser window, sign in to your KnowledgeOwl company site as an administrator.
-1. Click on **Settings** and then select **Security**.
+1. Click on **Settings** and then select **SSO**.
- ![Screenshot shows Security selected from the Settings menu.](./media/knowledgeowl-tutorial/configure-1.png)
+ ![Screenshot that shows S S O selected from the Settings menu.](./media/knowledgeowl-tutorial/knowledgeowl-sso-settings-menu.png)
-1. Scroll to **SAML SSO Integration** and perform the following steps:
+1. Scroll to the **SAML Settings** tab and perform the following steps:
- ![Screenshot shows SAML S S O Integration where you can make the changes described here.](./media/knowledgeowl-tutorial/configure-2.png)
+ ![Screenshot that shows making changes to S A M L S S O Integration settings.](./media/knowledgeowl-tutorial/knowledgeowl-required-settings.png)
a. Select **Enable SAML SSO**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
f. In the **IdP Logout URL** textbox, paste the **Logout URL** value, which you have copied from the Azure portal.
- g. Upload the certificate downloaded from the Azure portal by clicking the **Upload IdP Certificate**.
+ g. Upload the certificate downloaded from the Azure portal by clicking the **Upload** link beneath **IdP Certificate**.
+
+ h. Click **Save** at the bottom of the page.
+
+ ![Screenshot that shows the Save button for S A M L S S O integration settings.](./media/knowledgeowl-tutorial/knowledgeowl-saml-save.png)
- h. Click on **Map SAML Attributes** to map attributes and perform the following steps:
+ i. Open the **SAML Attribute Map** tab to map attributes and perform the following steps:
- ![Screenshot shows Map SAML Attributes where you can make the changes described here.](./media/knowledgeowl-tutorial/configure-3.png)
+ ![Screenshot that shows making changes to the S A M L Attribute Map.](./media/knowledgeowl-tutorial/knowledgeowl-direct-attributes-select.png)
- * Enter `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/ssoid` into the **SSO ID** textbox
+ * Enter `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/ssoid` into the **SSO ID** textbox.
* Enter `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress` into the **Username/Email** textbox.
* Enter `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname` into the **First Name** textbox.
* Enter `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname` into the **Last Name** textbox.
- * Click **Save**.
- i. Click **Save** at the bottom of the page.
+ j. Click **Save** at the bottom of the page.
- ![Screenshot shows the Save button.](./media/knowledgeowl-tutorial/configure-4.png)
+ ![Screenshot shows the Save button for S A M L Attribute Map settings.](./media/knowledgeowl-tutorial/knowledgeowl-direct-attributes-save.png)
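The attribute mapping above can be illustrated with a small sketch. The dictionary of parsed assertion attributes and the field names on the right are hypothetical stand-ins for illustration, not KnowledgeOwl's actual implementation:

```python
# Azure AD claim URIs mapped to the KnowledgeOwl fields configured above.
# The right-hand field names are hypothetical labels for this sketch.
CLAIM_MAP = {
    "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/ssoid": "sso_id",
    "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress": "username_email",
    "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname": "first_name",
    "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname": "last_name",
}

def map_saml_attributes(assertion_attributes: dict) -> dict:
    """Translate claim URIs from a parsed SAML assertion into profile fields."""
    return {
        field: assertion_attributes[uri]
        for uri, field in CLAIM_MAP.items()
        if uri in assertion_attributes
    }
```

Any claim URI not listed in the map is simply ignored, which mirrors how unmapped assertion attributes are dropped.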
### Create KnowledgeOwl test user
In this section, a user called B.Simon is created in KnowledgeOwl. KnowledgeOwl
In this section, you test your Azure AD single sign-on configuration with the following options.
-#### SP initiated:
+#### SP initiated
* Click on **Test this application** in the Azure portal. This will redirect you to the KnowledgeOwl Sign on URL, where you can initiate the login flow.
-* Go to KnowledgeOwl Sign-on URL directly and initiate the login flow from there.
+* Go to the KnowledgeOwl sign-on URL directly and initiate the login flow from there.
-#### IDP initiated:
+#### IDP initiated
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the KnowledgeOwl for which you set up the SSO.
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the KnowledgeOwl application for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the KnowledgeOwl tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the KnowledgeOwl for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+You can also use the Microsoft My Apps portal to test the application in any mode. When you click the KnowledgeOwl tile in the My Apps portal, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the KnowledgeOwl application for which you set up the SSO. For more information about the My Apps portal, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure KnowledgeOwl you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once you configure KnowledgeOwl, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Tribeloo Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tribeloo-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
|Attribute|Type|Supported for filtering|
|---|---|---|
|userName|String|&check;
+ |emails[type eq "work"].value|String|
|active|Boolean|
|displayName|String|
|name.givenName|String|
|name.familyName|String|
+ |addresses[type eq "work"].formatted|String|
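As an illustration of how filter-style paths such as `emails[type eq "work"].value` resolve, here is a simplified sketch against a hypothetical SCIM user payload. The resolver is a stand-in for this sketch, not the provisioning service's actual logic:

```python
# Hypothetical SCIM user payload illustrating the attribute paths above.
scim_user = {
    "userName": "b.simon@contoso.com",
    "active": True,
    "displayName": "B. Simon",
    "name": {"givenName": "B.", "familyName": "Simon"},
    "emails": [{"type": "work", "value": "b.simon@contoso.com"}],
    "addresses": [{"type": "work", "formatted": "One Microsoft Way, Redmond, WA"}],
}

def resolve_filtered_path(resource: dict, attr: str, type_value: str, sub_attr: str):
    """Simplified resolver for a path like attr[type eq "type_value"].sub_attr."""
    for item in resource.get(attr, []):
        if item.get("type") == type_value:
            return item.get(sub_attr)
    return None
```

For example, `resolve_filtered_path(scim_user, "emails", "work", "value")` walks the `emails` list, keeps the entry whose `type` is `work`, and returns its `value` sub-attribute.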
1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
Once you've configured provisioning, use the following resources to monitor your
* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## Change Log
+* 08/12/2021 - Added support for core user attributes **emails[type eq "work"].value** and **addresses[type eq "work"].formatted**.
## More resources
advisor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/advisor/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Advisor description: Sample Azure Resource Graph queries for Azure Advisor showing use of resource types and tables to access Azure Advisor related resources and properties. Previously updated : 08/04/2021 Last updated : 08/09/2021
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/certificate-rotation.md
AKS generates and uses the following certificates, Certificate Authorities, and
* The `kubectl` client has a certificate for communicating with the AKS cluster.

> [!NOTE]
-> AKS clusters created prior to May 2019 have certificates that expire after two years. Any cluster created after May 2019 or any cluster that has its certificates rotated have Cluster CA certificates that expire after 30 years. All other certificates expire after two years. To verify when your cluster was created, use `kubectl get nodes` to see the *Age* of your node pools.
+> AKS clusters created prior to May 2019 have certificates that expire after two years. Any cluster created after May 2019, or any cluster that has its certificates rotated, has Cluster CA certificates that expire after 30 years. All other AKS certificates, which use the Cluster CA for signing, expire after two years and are automatically rotated when they expire. To verify when your cluster was created, use `kubectl get nodes` to see the *Age* of your node pools.
> > Additionally, you can check the expiration date of your cluster's certificate. For example, the following bash command displays the client certificate details for the *myAKSCluster* cluster in resource group *rg* > ```console
az vmss run-command invoke -g MC_rg_myAKSCluster_region -n vmss-name --instance-
## Rotate your cluster certificates

> [!WARNING]
-> Rotating your certificates using `az aks rotate-certs` can cause up to 30 minutes of downtime for your AKS cluster.
+> Rotating your certificates using `az aks rotate-certs` will recreate all of your nodes and can cause up to 30 minutes of downtime for your AKS cluster.
Use [az aks get-credentials][az-aks-get-credentials] to sign in to your AKS cluster. This command also downloads and configures the `kubectl` client certificate on your local machine.
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-security.md
For connectivity and security with on-premises networks, you can deploy your AKS
To filter virtual network traffic flow, Azure uses network security group rules. These rules define the source and destination IP ranges, ports, and protocols allowed or denied access to resources. Default rules are created to allow TLS traffic to the Kubernetes API server. You create services with load balancers, port mappings, or ingress routes. AKS automatically modifies the network security group for traffic flow.
-If you provide your own subnet for your AKS cluster, **do not** modify the subnet-level network security group managed by AKS. Instead, create more subnet-level network security groups to modify the flow of traffic. Make sure they don't interfere with necessary traffic managing the cluster, such as load balancer access, communication with the control plane, and [egress][aks-limit-egress-traffic].
+If you provide your own subnet for your AKS cluster (whether using Azure CNI or Kubenet), **do not** modify the NIC-level network security group managed by AKS. Instead, create more subnet-level network security groups to modify the flow of traffic. Make sure they don't interfere with necessary traffic managing the cluster, such as load balancer access, communication with the control plane, and [egress][aks-limit-egress-traffic].
### Kubernetes network policy
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-basic.md
NAME TYPE CLUSTER-IP EXTERNAL-I
nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.74.133 EXTERNAL_IP 80:32486/TCP,443:30953/TCP 44s app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx ```
-No ingress rules have been created yet, so the NGINX ingress controller's default 404 page is displayed if you browse to the internal IP address. Ingress rules are configured in the following steps.
+No ingress rules have been created yet, so the NGINX ingress controller's default 404 page is displayed if you browse to the external IP address. Ingress rules are configured in the following steps.
## Run demo applications
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/intro-kubernetes.md
Learn more about deploying and managing AKS with the Azure CLI Quickstart.
<!-- LINKS - external -->
[aks-engine]: https://github.com/Azure/aks-engine
[kubectl-overview]: https://kubernetes.io/docs/user-guide/kubectl-overview/
-[compliance-doc]: https://azure.microsoft.com/en-us/overview/trusted-cloud/compliance/
+[compliance-doc]: https://azure.microsoft.com/overview/trusted-cloud/compliance/
<!-- LINKS - internal -->
[acr-docs]: ../container-registry/container-registry-intro.md
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/upgrade-cluster.md
The cluster auto-upgrade for AKS clusters is a preview feature.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
+Add the `aks-preview` extension for the Azure CLI:
+
+```azurecli-interactive
+az extension add --name aks-preview
+```
+ Register the `AutoUpgradePreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example: ```azurecli-interactive
api-management Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/zone-redundancy.md
Previously updated : 08/09/2021 Last updated : 08/11/2021
Configuring API Management for zone redundancy is currently supported in the fol
* Australia East
* Brazil South
* Canada Central
-* Central India
+* Central India (*)
* Central US
* East US
* East US 2
* France Central
* Germany West Central
* Japan East
-* Korea Central
+* Korea Central (*)
* North Europe
-* South Africa North
+* Norway East (*)
+* South Africa North (*)
* South Central US
* Southeast Asia
* UK South
Configuring API Management for zone redundancy is currently supported in the fol
* West US 2
* West US 3
+> [!IMPORTANT]
+> The regions marked with * have restricted access in an Azure subscription to enable Availability Zone support. Please work with your Microsoft sales or customer representative to get access.
## Prerequisites

* If you have not yet created an API Management service instance, see [Create an API Management service instance](get-started-create-service-instance.md). Select the Premium service tier.
app-service Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-java.md
The deployment process to Azure App Service will use your Azure credentials from
Run the Maven command below to configure the deployment. This command will help you to set up the App Service operating system, Java version, and Tomcat version. ```azurecli-interactive
-mvn com.microsoft.azure:azure-webapp-maven-plugin:2.0.0:config
+mvn com.microsoft.azure:azure-webapp-maven-plugin:2.1.0:config
``` ::: zone pivot="platform-windows"
application-gateway Ingress Controller Install Existing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/ingress-controller-install-existing.md
shared between one or more AKS clusters and/or other Azure components.
## Prerequisites

This document assumes you already have the following tools and infrastructure installed:
-- [AKS](https://azure.microsoft.com/services/kubernetes-service/) with [Advanced Networking](../aks/configure-azure-cni.md) enabled
+- [AKS](https://azure.microsoft.com/services/kubernetes-service/) with [Azure Container Networking Interface (CNI)](../aks/configure-azure-cni.md)
- [Application Gateway v2](./tutorial-autoscale-ps.md) in the same virtual network as AKS
- [AAD Pod Identity](https://github.com/Azure/aad-pod-identity) installed on your AKS cluster
- [Cloud Shell](https://shell.azure.com/) is the Azure shell environment, which has `az` CLI, `kubectl`, and `helm` installed. These tools are required for the commands below.
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/key-vault-certs.md
Application Gateway currently supports software-validated certificates only. Har
> [!NOTE]
> The Azure portal only supports KeyVault Certificates, not secrets. Application Gateway still supports referencing secrets from KeyVault, but only through non-Portal resources like PowerShell, CLI, API, ARM templates, etc.
+> [!WARNING]
+> Azure Application Gateway currently only supports Key Vault accounts in the same subscription as the Application Gateway resource. Choosing a Key Vault under a different subscription than your Application Gateway will result in a failure.
+ ## How integration works Application Gateway integration with Key Vault requires a three-step configuration process:
application-gateway Multiple Site Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/multiple-site-overview.md
Using a wildcard character in the host name, you can match multiple host names i
>[!NOTE]
> This feature is in preview and is available only for Standard_v2 and WAF_v2 SKU of Application Gateway. To learn more about previews, see [terms of use here](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-In [Azure PowerShell](tutorial-multiple-sites-powershell.md), you must use `-HostNames` instead of `-HostName`. With HostNames, you can mention up to 5 host names as comma-separated values and use wildcard characters. For example, `-HostNames "*.contoso.com,*.fabrikam.com"`
+In [Azure PowerShell](tutorial-multiple-sites-powershell.md), you must use `-HostNames` instead of `-HostName`. With HostNames, you can mention up to 5 host names as comma-separated values and use wildcard characters. For example, `-HostNames "*.contoso.com","*.fabrikam.com"`
In [Azure CLI](tutorial-multiple-sites-cli.md), you must use `--host-names` instead of `--host-name`. With host-names, you can mention up to 5 host names as comma-separated values and use wildcard characters. For example, `--host-names "*.contoso.com,*.fabrikam.com"`
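As a rough illustration of wildcard host-name matching, the sketch below uses Python's `fnmatch`. This is illustrative only; Application Gateway's actual matching semantics may differ (for example, `fnmatch` lets `*` span dots):

```python
from fnmatch import fnmatch

# Listener host names as they might be passed to -HostNames / --host-names.
host_names = ["*.contoso.com", "*.fabrikam.com"]

def listener_matches(host: str) -> bool:
    """Return True if any listener wildcard pattern matches the request's host."""
    return any(fnmatch(host, pattern) for pattern in host_names)
```

Note that the bare apex domain `contoso.com` does not match `*.contoso.com`, since the pattern requires at least one label before the dot.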
application-gateway Understanding Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/understanding-pricing.md
Observed Capacity Units in metrics = 49.23
See the following articles to learn more about how pricing works in Azure Application Gateway: * [Azure Application Gateway pricing page](https://azure.microsoft.com/pricing/details/application-gateway/)
-* [Azure Application Gateway pricing calculator](https://azure.microsoft.com/en-us/pricing/calculator/?service=application-gateway)
+* [Azure Application Gateway pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=application-gateway)
automation Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/how-to/private-link-security.md
To understand & configure Update Management review [About Update Management](../
If you want your machines configured for Update management to connect to Automation & Log Analytics workspace in a secure manner over Private Link channel, you have to enable Private Link for the Log Analytics workspace linked to the Automation Account configured with Private Link.
-You can control how a Log Analytics workspace can be reached from outside of the Private Link scopes by following the steps described in [Configure Log Analytics](../../azure-monitor/logs/private-link-security.md#configure-access-to-your-resources). If you set **Allow public network access for ingestion** to **No**, then machines outside of the connected scopes cannot upload data to this workspace. If you set **Allow public network access for queries** to **No**, then machines outside of the scopes cannot access data in this workspace.
+You can control how a Log Analytics workspace can be reached from outside of the Private Link scopes by following the steps described in [Configure Log Analytics](../../azure-monitor/logs/private-link-configure.md#configure-access-to-your-resources). If you set **Allow public network access for ingestion** to **No**, then machines outside of the connected scopes cannot upload data to this workspace. If you set **Allow public network access for queries** to **No**, then machines outside of the scopes cannot access data in this workspace.
Use **DSCAndHybridWorker** target sub-resource to enable Private Link for user & system hybrid workers.
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-region.md
To achieve comprehensive business continuity on Azure, build your application ar
| West US 2 | | | |
| West US 3 | | | |
-\* To learn more about Availability Zones and available services support in these regions, contact your Microsoft sales or customer representative. For the upcoming regions that will support Availability Zones, see [Azure geographies](https://azure.microsoft.com/en-us/global-infrastructure/geographies/).
+\* To learn more about Availability Zones and available services support in these regions, contact your Microsoft sales or customer representative. For the upcoming regions that will support Availability Zones, see [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/).
## Azure Services supporting Availability Zones
azure-arc Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/connectivity.md
Some Azure-attached services are only available when they can be directly reache
|**Container images**|Microsoft Container Registry -> Customer|Required|No|Indirect or direct|Container images are the method for distributing the software. In an environment which can connect to the Microsoft Container Registry (MCR) over the Internet, the container images can be pulled directly from MCR. In the event that the deployment environment doesn't have direct connectivity, you can pull the images from MCR and push them to a private container registry in the deployment environment. At creation time, you can configure the creation process to pull from the private container registry instead of MCR. This will also apply to automated updates.|
|**Resource inventory**|Customer environment -> Azure|Required|No|Indirect or direct|An inventory of data controllers, database instances (PostgreSQL and SQL) is kept in Azure for billing purposes and also for purposes of creating an inventory of all data controllers and database instances in one place which is especially useful if you have more than one environment with Azure Arc data services. As instances are provisioned, deprovisioned, scaled out/in, scaled up/down the inventory is updated in Azure.|
|**Billing telemetry data**|Customer environment -> Azure|Required|No|Indirect or direct|Utilization of database instances must be sent to Azure for billing purposes. |
-|**Monitoring data and logs**|Customer environment -> Azure|Optional|Maybe depending on data volume (see [Azure Monitor pricing](https://azure.microsoft.com/en-us/pricing/details/monitor/))|Indirect or direct|You may want to send the locally collected monitoring data and logs to Azure Monitor for aggregating data across multiple environments into one place and also to use Azure Monitor services like alerts, using the data in Azure Machine Learning, etc.|
+|**Monitoring data and logs**|Customer environment -> Azure|Optional|Maybe depending on data volume (see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/))|Indirect or direct|You may want to send the locally collected monitoring data and logs to Azure Monitor for aggregating data across multiple environments into one place and also to use Azure Monitor services like alerts, using the data in Azure Machine Learning, etc.|
|**Azure Role-based Access Control (Azure RBAC)**|Customer environment -> Azure -> Customer Environment|Optional|No|Direct only|If you want to use Azure RBAC, then connectivity must be established with Azure at all times. If you don't want to use Azure RBAC then local Kubernetes RBAC can be used.|
|**Azure Active Directory (AAD) (Future)**|Customer environment -> Azure -> Customer environment|Optional|Maybe, but you may already be paying for Azure AD|Direct only|If you want to use Azure AD for authentication, then connectivity must be established with Azure at all times. If you don't want to use Azure AD for authentication, you can use Active Directory Federation Services (ADFS) over Active Directory. **Pending availability in directly connected mode**|
|**Backup and restore**|Customer environment -> Customer environment|Required|No|Direct or indirect|The backup and restore service can be configured to point to local storage classes. **Pending availability in directly connected mode**|
azure-arc Managed Instance Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/managed-instance-features.md
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
- Multi-model capabilities - [Graph processing](/sql/relational-databases/graphs/sql-graph-overview), [JSON data](/sql/relational-databases/json/json-data-sql-server), [OPENXML](/sql/t-sql/functions/openxml-transact-sql), [Spatial](/sql/relational-databases/spatial/spatial-data-sql-server), [OPENJSON](/sql/t-sql/functions/openjson-transact-sql), and [XML indexes](/sql/t-sql/statements/create-xml-index-transact-sql).
-## Features of Azure Arc-enabled SQL Managed Instance
-
-### <a name="RDBMSHA"></a> RDBMS High Availability
+## <a name="RDBMSHA"></a> RDBMS High Availability
|Feature|Azure Arc-enabled SQL Managed Instance|
|-|-|
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
<sup>1</sup> In the scenario where there is a pod failure, a new SQL Managed Instance will start up and re-attach to the persistent volume containing your data. [Learn more about Kubernetes persistent volumes here](https://kubernetes.io/docs/concepts/storage/persistent-volumes).
-### <a name="RDBMSSP"></a> RDBMS Scalability and Performance
+## <a name="RDBMSSP"></a> RDBMS Scalability and Performance
| Feature | Azure Arc-enabled SQL Managed Instance |
|--|--|
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
| Interleaved Execution for Multi-Statement Table Valued Functions | Yes |
| Bulk insert improvements | Yes |
-### <a name="RDBMSS"></a> RDBMS Security
+## <a name="RDBMSS"></a> RDBMS Security
| Feature | Azure Arc-enabled SQL Managed Instance |
|--|--|
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
| User-defined roles | Yes |
| Contained databases | Yes |
| Encryption for backups | Yes |
+| SQL Server Authentication | Yes |
+| Azure Active Directory Authentication | No |
+| Windows Authentication | No |
-### <a name="RDBMSM"></a> RDBMS Manageability
+## <a name="RDBMSM"></a> RDBMS Manageability
| Feature | Azure Arc-enabled SQL Managed Instance |
|--|--|
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/release-notes.md
Use the following tools:
#### Azure Arc-enabled SQL Managed Instance

- Automated backup and point-in-time restore is in preview.
  - Supports point-in-time restore from an existing database in an Azure Arc-enabled SQL managed instance to a new database within the same instance.
  - If the current datetime is given as point-in-time in UTC format, it resolves to the latest valid restore time and restores the given database until last valid transaction.
  - A database can be restored to any point-in-time where the transactions took place.
Use the following tools:
- System database `model` is not backed up in order to prevent interference with creation/deletion of database. The DB gets locked when admin operations are performed.
- Currently only `master` and `msdb` system databases are backed up. Only full backups are performed every 12 hours.
- Only `ONLINE` user databases are backed up.
+- Default recovery point objective (RPO): 5 minutes. Cannot be modified in the current release.
+- Backups are retained indefinitely. To recover space, manually delete backups.
##### Other limitations

- Transaction replication is currently not supported.
-- Log shipping is currently blocked
+- Log shipping is currently blocked.
+- Only SQL Server Authentication is supported.
## June 2021
azure-arc Storage Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/storage-configuration.md
For public cloud-based, managed Kubernetes services we can make the following re
|Public cloud service|Recommendation|
|||
-|**Azure Kubernetes Service (AKS)**|Azure Kubernetes Service (AKS) has two types of storage - Azure Files and Azure Managed Disks. Each type of storage has two pricing/performance tiers - standard (HDD) and premium (SSD). Thus, the four storage classes provided in AKS are `azurefile` (Azure Files standard tier), `azurefile-premium` (Azure Files premium tier), `default` (Azure Disks standard tier), and `managed-premium` (Azure Disks premium tier). The default storage class is `default` (Azure Disks standard tier). There are substantial **[pricing differences](https://azure.microsoft.com/en-us/pricing/details/storage/)** between the types and tiers which should be factored into your decision. For production workloads with high-performance requirements, we recommend using `managed-premium` for all storage classes. For dev/test workloads, proofs of concept, etc. where cost is a consideration, then `azurefile` is the least expensive option. All four of the options can be used for situations requiring remote, shared storage as they are all network-attached storage devices in Azure. Read more about [AKS Storage](../../aks/concepts-storage.md).|
+|**Azure Kubernetes Service (AKS)**|Azure Kubernetes Service (AKS) has two types of storage - Azure Files and Azure Managed Disks. Each type of storage has two pricing/performance tiers - standard (HDD) and premium (SSD). Thus, the four storage classes provided in AKS are `azurefile` (Azure Files standard tier), `azurefile-premium` (Azure Files premium tier), `default` (Azure Disks standard tier), and `managed-premium` (Azure Disks premium tier). The default storage class is `default` (Azure Disks standard tier). There are substantial **[pricing differences](https://azure.microsoft.com/pricing/details/storage/)** between the types and tiers which should be factored into your decision. For production workloads with high-performance requirements, we recommend using `managed-premium` for all storage classes. For dev/test workloads, proofs of concept, etc. where cost is a consideration, then `azurefile` is the least expensive option. All four of the options can be used for situations requiring remote, shared storage as they are all network-attached storage devices in Azure. Read more about [AKS Storage](../../aks/concepts-storage.md).|
|**AWS Elastic Kubernetes Service (EKS)**| Amazon's Elastic Kubernetes Service has one primary storage class - based on the [EBS CSI storage driver](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html). This is recommended for production workloads. There is a new storage driver - [EFS CSI storage driver](https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html) - that can be added to an EKS cluster, but it is currently in a beta stage and subject to change. Although AWS says that this storage driver is supported for production, we don't recommend using it because it is still in beta and subject to change. The EBS storage class is the default and is called `gp2`. Read more about [EKS Storage](https://docs.aws.amazon.com/eks/latest/userguide/storage-classes.html).|
|**Google Kubernetes Engine (GKE)**|Google Kubernetes Engine (GKE) has just one storage class called `standard`, which is used for [GCE persistent disks](https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk). Being the only one, it is also the default. Although there is a [local, static volume provisioner](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd#run-local-volume-static-provisioner) for GKE that you can use with direct-attached SSDs, we don't recommend using it as it is not maintained or supported by Google. Read more about [GKE storage](https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes).
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/version-log.md
+
+ Title: Azure Arc-enabled data services - release versions
+description: A log of versions by release date for Azure Arc-enabled data services
+ Last updated : 08/06/2021
+# Customer intent: As a data professional, I want to understand what versions of components align with specific releases.
++
+# Version log
+
+The following table describes the component versions associated with each release:
+
+|Date|Release name|Container images tag|CRD prefixes and versions|ARM API version|`arcdata` Azure CLI extension version|Arc enabled Kubernetes helm chart extension version|Arc Data extension for Azure Data Studio|
+|||||||||
+|July 30, 2021|Arc enabled SQL Managed Instance General Purpose and Arc enabled SQL Server General Availability|v1.0.0_2021-07-30|`datacontrollers`: v1beta1, v1 <br/>`exporttasks.tasks`: v1beta1, v1 <br/>`monitors`: v1beta1, v1 <br/>`sqlmanagedinstances.sql`: v1beta1, v1 <br/>`postgresqls`: v1beta1 <br/>`sqlmanagedinstancerestoretasks.tasks.sql`: v1beta1 <br/>`dags.sql`: v1beta1 <br/>|2021-08-01 (stable)|1.0|1.0.16701001, release train: stable|0.9.5|
+|Aug 3, 2021|Update to Azure Arc extension for Azure Data Studio to align with July 30, General Availability|No change|No change|No change|No change|No change|0.9.6|
+
+For the complete CRD, append `.arcdata.microsoft.com` to the prefix.
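The suffix rule above can be applied mechanically to any prefix in the table; a tiny illustration (helper name is hypothetical):

```python
SUFFIX = ".arcdata.microsoft.com"

def full_crd_name(prefix: str) -> str:
    """Append the Arc-enabled data services CRD suffix to a table prefix."""
    return prefix + SUFFIX

print(full_crd_name("datacontrollers"))          # datacontrollers.arcdata.microsoft.com
print(full_crd_name("sqlmanagedinstances.sql"))  # sqlmanagedinstances.sql.arcdata.microsoft.com
```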
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled Kubernetes description: Sample Azure Resource Graph queries for Azure Arc-enabled Kubernetes showing use of resource types and tables to access Azure Arc-enabled Kubernetes related resources and properties. Previously updated : 08/04/2021 Last updated : 08/09/2021
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc description: Sample Azure Resource Graph queries for Azure Arc showing use of resource types and tables to access Azure Arc related resources and properties. Previously updated : 08/04/2021 Last updated : 08/09/2021
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled servers description: Sample Azure Resource Graph queries for Azure Arc-enabled servers showing use of resource types and tables to access Azure Arc-enabled servers related resources and properties. Previously updated : 08/04/2021 Last updated : 08/09/2021
azure-functions Durable Functions Timers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-timers.md
[Durable Functions](durable-functions-overview.md) provides *durable timers* for use in orchestrator functions to implement delays or to set up timeouts on async actions. Durable timers should be used in orchestrator functions instead of `Thread.Sleep` and `Task.Delay` (C#), or `setTimeout()` and `setInterval()` (JavaScript), or `time.sleep()` (Python).
-You create a durable timer by calling the `CreateTimer` (.NET) method or the `createTimer` (JavaScript) method of the [orchestration trigger binding](durable-functions-bindings.md#orchestration-trigger). The method returns a task that completes on a specified date and time.
+You create a durable timer by calling the [`CreateTimer` (.NET)](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableorchestrationcontext.createtimer), the [`createTimer` (JavaScript)](/javascript/api/durable-functions/durableorchestrationcontext#createTimer_Date_), or the [`create_timer` (Python)](/python/api/azure-functions-durable/azure.durable_functions.durableorchestrationcontext#create-timer-fire-at--datetime-datetime--azure-durable-functions-models-task-task) method of the [orchestration trigger binding](durable-functions-bindings.md#orchestration-trigger). The method returns a task that completes on a specified date and time.
## Timer limitations

When you create a timer that expires at 4:30 pm, the underlying Durable Task Framework enqueues a message that becomes visible only at 4:30 pm. When running in the Azure Functions Consumption plan, the newly visible timer message will ensure that the function app gets activated on an appropriate VM.

> [!NOTE]
-> * Starting with [version 2.3.0](https://github.com/Azure/azure-functions-durable-extension/releases/tag/v2.3.0) of the Durable Extension, Durable timers are unlimited. In earlier versions of the extension, Durable timers are limited to seven days. When you are using an earlier version and need a delay longer than seven days, use the timer APIs in a `while` loop to simulate this delay.
+> * Starting with [version 2.3.0](https://github.com/Azure/azure-functions-durable-extension/releases/tag/v2.3.0) of the Durable Extension, Durable timers are unlimited for .NET apps. For JavaScript, Python, and PowerShell apps, as well as .NET apps using earlier versions of the extension, Durable timers are limited to seven days. When you are using an older extension version or a non-.NET language runtime and need a delay longer than seven days, use the timer APIs in a `while` loop to simulate a longer delay.
> * Always use `CurrentUtcDateTime` instead of `DateTime.UtcNow` in .NET or `currentUtcDateTime` instead of `Date.now` or `Date.UTC` in JavaScript when computing the fire time for durable timers. For more information, see the [orchestrator function code constraints](durable-functions-code-constraints.md) article.

## Usage for delay
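The while-loop workaround from the note above amounts to splitting one long delay into successive timers of at most seven days each. A sketch of just the scheduling arithmetic in plain Python (an orchestrator would pass each computed deadline to its timer API; the names here are illustrative, not part of the Durable Functions SDK):

```python
from datetime import datetime, timedelta

MAX_TIMER = timedelta(days=7)  # per-timer ceiling in the affected runtimes

def timer_deadlines(start: datetime, total_delay: timedelta):
    """Yield successive fire times, each at most seven days after the last."""
    deadline = start + total_delay
    current = start
    while current < deadline:
        current = min(current + MAX_TIMER, deadline)
        yield current

start = datetime(2021, 8, 1)
fires = list(timer_deadlines(start, timedelta(days=20)))
print(len(fires))                               # 3 timers: 7 + 7 + 6 days
print(fires[-1] == start + timedelta(days=20))  # True
```

In a real orchestrator each yielded deadline would be awaited in turn (using the deterministic current-time API rather than the wall clock, per the note above).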
azure-functions Functions Add Output Binding Cosmos Db Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md
Open the *HttpExample.cs* project file and add the following parameter to the `R
ConnectionStringSetting = "CosmosDbConnectionString")]IAsyncCollector<dynamic> documentsOut, ```
-The `documentsOut` parameter is an IAsyncCollector<T> type, which represents a collection of JSON documents that will be written to your Azure Cosmos DB container when the function completes. Specific attributes specifies the name of the container and the name of its parent database. The connection string for your Azure Cosmos DB account is set by the `ConnectionStringSettingAttribute`.
+The `documentsOut` parameter is an `IAsyncCollector<T>` type, which represents a collection of JSON documents that will be written to your Azure Cosmos DB container when the function completes. Specific attributes specify the name of the container and the name of its parent database. The connection string for your Azure Cosmos DB account is set by the `ConnectionStringSettingAttribute`.
The Run method definition should now look like the following:
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-app-settings.md
Title: App settings reference for Azure Functions description: Reference documentation for the Azure Functions app settings or environment variables. Previously updated : 09/22/2018 Last updated : 07/27/2021 # App settings reference for Azure Functions
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-develop-vs-code.md
Title: Develop Azure Functions by using Visual Studio Code
description: Learn how to develop and test Azure Functions by using the Azure Functions extension for Visual Studio Code. Previously updated : 08/21/2019 Last updated : 02/21/2021 #Customer intent: As an Azure Functions developer, I want to understand how Visual Studio Code supports Azure Functions so that I can more efficiently create, publish, and maintain my Functions projects.
To learn more about Azure Functions Core Tools, see [Work with Azure Functions C
To learn more about developing functions as .NET class libraries, see [Azure Functions C# developer reference](functions-dotnet-class-library.md). This article also provides links to examples of how to use attributes to declare the various types of bindings supported by Azure Functions. [Azure Functions extension for Visual Studio Code]: https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions
-[Azure Functions Core Tools]: functions-run-local.md
+[Azure Functions Core Tools]: functions-run-local.md
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-develop-vs.md
Title: Develop Azure Functions using Visual Studio
description: Learn how to develop and test Azure Functions by using Azure Functions Tools for Visual Studio 2019. Previously updated : 06/10/2020 Last updated : 12/10/2020 # Develop Azure Functions using Visual Studio
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-dotnet-class-library.md
description: Understand how to use C# to develop and publish code as class libra
Previously updated : 07/24/2020 Last updated : 07/24/2021 # Develop C# class library functions using Azure Functions
azure-functions Functions How To Use Azure Function App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md
Title: Configure function app settings in Azure Functions
description: Learn how to configure function app settings in Azure Functions. ms.assetid: 81eb04f8-9a27-45bb-bf24-9ab6c30d205c Previously updated : 04/13/2020 Last updated : 01/21/2021
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-run-local.md
Title: Work with Azure Functions Core Tools
description: Learn how to code and test Azure Functions from the command prompt or terminal on your local computer before you run them on Azure Functions. ms.assetid: 242736be-ec66-4114-924b-31795fd18884 Previously updated : 03/13/2019 Last updated : 07/27/2021
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
description: This article tracks FedRAMP, DoD, and ICD 503 compliance scope for
Previously updated : 08/09/2021 Last updated : 08/11/2021 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
### Terminology used

-- FedRAMP High = FedRAMP High P-ATO in Azure
-- DoD IL2 = DoD SRG Impact Level 2 PA in Azure
+- FedRAMP High = FedRAMP High Provisional Authorization to Operate (P-ATO) in Azure
+- DoD IL2 = DoD SRG Impact Level 2 Provisional Authorization (PA) in Azure
- &#x2705; = service is included in audit scope and has been authorized
- Planned 2021 = service will undergo a FedRAMP High assessment in 2021 - once the service is authorized, status will be updated
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
### Terminology used

-- FR High = FedRAMP High P-ATO in Azure Government
-- DoD IL2 = DoD SRG Impact Level 2 PA in Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia
-- DoD IL4 = DoD SRG Impact Level 4 PA in Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia
-- DoD IL5 = DoD SRG Impact Level 5 PA in Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia
-- DoD IL6 = DoD SRG Impact Level 6 PA in Azure Government Secret
-- ICD 503 = Intelligence Community Directive 503 PA in Azure Government Secret
+- FR High = FedRAMP High Provisional Authorization to Operate (P-ATO) in Azure Government
+- DoD IL2 = DoD SRG Impact Level 2 Provisional Authorization (PA) in Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia
+- DoD IL4 = DoD SRG Impact Level 4 Provisional Authorization (PA) in Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia
+- DoD IL5 = DoD SRG Impact Level 5 Provisional Authorization (PA) in Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia
+- DoD IL6 = DoD SRG Impact Level 6 Provisional Authorization (PA) in Azure Government Secret
+- ICD 503 Secret = Intelligence Community Directive 503 Authorization to Operate (ATO) in Azure Government Secret
- &#x2705; = service is included in audit scope and has been authorized

> [!NOTE]
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
> - Some services deployed in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in **[Isolation guidelines for Impact Level 5 workloads](../documentation-government-impact-level-5.md).**
> - For DoD IL5 PA compliance scope in Azure Government DoD regions (US DoD Central and US DoD East), see **[Azure Government DoD regions IL5 audit scope](../documentation-government-overview-dod.md#azure-government-dod-regions-il5-audit-scope).**
-| Service | FR High / DoD IL2 | DoD IL4 | DoD IL5 | DoD IL6 | ICD 503 |
-| - |:--:|:-:|:-:|:-:|:-:|
+| Service | FR High / DoD IL2 | DoD IL4 | DoD IL5 | DoD IL6 | ICD 503 Secret |
+| - |:--:|:-:|:-:|:-:|:--:|
| [API Management](https://azure.microsoft.com/services/api-management/) | &#x2705; | &#x2705; | &#x2705; | | |
| [App Configuration](https://azure.microsoft.com/services/app-configuration/) | &#x2705; | | | | |
| [Application Gateway](https://azure.microsoft.com/services/application-gateway/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Blueprints](https://azure.microsoft.com/services/blueprints/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Azure Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Azure Cache for Redis](https://azure.microsoft.com/services/cache/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503** |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
| [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure Cognitive Search](https://azure.microsoft.com/services/search/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Azure Dedicated HSM](https://azure.microsoft.com/services/azure-dedicated-hsm/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab/) | &#x2705; | &#x2705; | &#x2705; | | |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503** |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
| [Azure DNS](https://azure.microsoft.com/services/dns/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure ExpressRoute](https://azure.microsoft.com/services/expressroute/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure File Sync](../../storage/file-sync/file-sync-introduction.md) | &#x2705; | &#x2705; | &#x2705; | | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure IoT Security](https://azure.microsoft.com/overview/iot/security/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Azure Lab Services](https://azure.microsoft.com/services/lab-services/) | &#x2705; | &#x2705; | &#x2705; | | |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503** |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
| [Azure Lighthouse](https://azure.microsoft.com/services/azure-lighthouse/)| &#x2705; | &#x2705; | &#x2705; | | |
| [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) | &#x2705; | &#x2705; | &#x2705; | | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Public IP](../../virtual-network/public-ip-addresses.md) | &#x2705; | | | | |
| [Azure Resource Graph](../../governance/resource-graph/overview.md) | &#x2705; | &#x2705; | &#x2705; | | |
| [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503** |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
| [Azure Scheduler](../../scheduler/scheduler-intro.md) | &#x2705; | &#x2705; | &#x2705; | | |
| [Azure Security Center](https://azure.microsoft.com/services/security-center/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Azure Sentinel](https://azure.microsoft.com/services/azure-sentinel/) | &#x2705; | &#x2705; | &#x2705; | | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | &#x2705; | | | | |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503** |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
| [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | &#x2705; | | |
| [Azure Web Application Firewall](https://azure.microsoft.com/services/web-application-firewall/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Batch](https://azure.microsoft.com/services/batch/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Cognitive | [Cognitive
| [Container Instances](https://azure.microsoft.com/services/container-instances/)| &#x2705; | &#x2705; | &#x2705; | | |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503** |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
| [Container Registry](https://azure.microsoft.com/services/container-registry/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Content Delivery Network](https://azure.microsoft.com/services/cdn/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | &#x2705; | &#x2705; | &#x2705; | | |
| [Dynamics 365 Sales](https://dynamics.microsoft.com/sales/overview/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Event Grid](https://azure.microsoft.com/services/event-grid/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503** |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | | | | |
| [Import/Export](https://azure.microsoft.com/services/storage/import-export/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Key Vault](https://azure.microsoft.com/services/key-vault/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Microsoft Stream](/stream/overview) | &#x2705; | &#x2705; | &#x2705; | | |
| [Multifactor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503** |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
| [Network Watcher Traffic Analytics](../../network-watcher/traffic-analytics.md) | &#x2705; | &#x2705; | &#x2705; | | |
| [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Planned Maintenance for VMs](../../virtual-machines/maintenance-control-portal.md) | &#x2705; | | | | |
| [Storage: Blobs](https://azure.microsoft.com/services/storage/blobs/) (incl. [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Storage: Disks (incl. Managed Disks)](https://azure.microsoft.com/services/storage/disks/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Storage: Files](https://azure.microsoft.com/services/storage/files/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503** |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
| [Storage: Queues](https://azure.microsoft.com/services/storage/queues/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Storage: Tables](https://azure.microsoft.com/services/storage/tables/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [StorSimple](https://azure.microsoft.com/services/storsimple/) | &#x2705; | &#x2705; | &#x2705; | | |
azure-government Documentation Government Overview Dod https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-overview-dod.md
The following services are in scope for DoD IL5 PA in Azure Government DoD regio
- [Network Watcher Traffic Analytics](../network-watcher/traffic-analytics.md)
- [Power Apps](/powerapps/powerapps-overview)
- [Power Apps portal](https://powerapps.microsoft.com/portals/)
-- [Power Automate](/power-automate/) (formerly Microsoft Flow)
+- [Power Automate](/power-automate/getting-started) (formerly Microsoft Flow)
- [Power BI](https://powerbi.microsoft.com/)
- [Power BI Embedded](https://azure.microsoft.com/services/power-bi-embedded/)
- [Service Bus](https://azure.microsoft.com/services/service-bus/)
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/about-azure-maps.md
Last updated 12/07/2020
-+ #Customer intent: As an Azure enterprise customer, I want to know what capabilities Azure Maps has, so that I can take advantage of mapping in my applications.
azure-maps Azure Maps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/azure-maps-authentication.md
Last updated 05/25/2021
-+
azure-maps Azure Maps Event Grid Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/azure-maps-event-grid-integration.md
Last updated 07/16/2020
-+
azure-maps Choose Map Style https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/choose-map-style.md
Last updated 04/26/2020
-+
azure-maps Choose Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/choose-pricing-tier.md
Last updated 04/27/2020
-+ # Choose the right pricing tier in Azure Maps
azure-maps Creator Facility Ontology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/creator-facility-ontology.md
Last updated 06/14/2021
-+ zone_pivot_groups: facility-ontology-schema
azure-maps Creator Geographic Scope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/creator-geographic-scope.md
Last updated 05/18/2021
-
+
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/creator-indoor-maps.md
Last updated 05/26/2021
-+
azure-maps Creator Long Running Operation V2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/creator-long-running-operation-v2.md
Last updated 05/18/2021
-
+
azure-maps Creator Long Running Operation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/creator-long-running-operation.md
Last updated 12/07/2020
-
+
azure-maps Drawing Conversion Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/drawing-conversion-error-codes.md
Last updated 05/21/2021
-+ # Drawing conversion errors and warnings
azure-maps Drawing Error Visualizer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/drawing-error-visualizer.md
Last updated 05/26/2021
-+ # Using the Azure Maps Drawing Error Visualizer with Creator
azure-maps Drawing Package Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/drawing-package-guide.md
Last updated 05/18/2021
-+
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/drawing-requirements.md
Last updated 07/02/2021
-+ # Drawing package requirements
azure-maps Geocoding Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/geocoding-coverage.md
Last updated 07/28/2019
-+ # Azure Maps geocoding coverage
azure-maps How To Manage Creator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-manage-creator.md
Last updated 05/18/2021
-+ # Manage Azure Maps Creator
azure-maps How To Request Elevation Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-request-elevation-data.md
Last updated 05/18/2021
-+
azure-maps How To Request Real Time Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-request-real-time-data.md
Last updated 06/23/2021
-+
azure-maps How To Request Transit Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-request-transit-data.md
Last updated 06/23/2021
-+
azure-maps How To Request Weather Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-request-weather-data.md
Last updated 04/26/2021
-+
azure-maps How To Search For Address https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-search-for-address.md
Last updated 01/19/2021
-+ # Search for a location using Azure Maps Search services
azure-maps How To Secure Spa App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-secure-spa-app.md
Last updated 06/21/2021
-+
azure-maps How To Use Best Practices For Routing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-use-best-practices-for-routing.md
Last updated 09/02/2020
-+ # Best practices for Azure Maps Route service
azure-maps How To Use Best Practices For Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-use-best-practices-for-search.md
Last updated 09/02/2020
-+ # Best practices for Azure Maps Search Service
azure-maps How To Use Feedback Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-use-feedback-tool.md
Last updated 12/07/2020
-+
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-use-indoor-module.md
Last updated 07/13/2021
-+
azure-maps How To Use Map Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-use-map-control.md
Last updated 07/20/2020
-+
azure-maps How To Use Spatial Io Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-use-spatial-io-module.md
Last updated 02/28/2020
-+ #Customer intent: As an Azure Maps web sdk user, I want to install and use the spatial io module so that I can integrate spatial data with the Azure Maps web sdk.
azure-maps Indoor Map Dynamic Styling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/indoor-map-dynamic-styling.md
Last updated 05/20/2021
-+ # Implement dynamic styling for Creator indoor maps
azure-maps Map Add Drawing Toolbar https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/map-add-drawing-toolbar.md
Last updated 09/04/2019
-+
azure-maps Map Get Shape Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/map-get-shape-data.md
Last updated 09/04/2019
-+
azure-maps Mobility Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/mobility-coverage.md
Last updated 12/07/2020
-+ # Azure Maps Mobility services (Preview) coverage
azure-maps Mobility Service Data Structure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/mobility-service-data-structure.md
Last updated 12/07/2020
-+ # Data structures in Azure Maps Mobility services (Preview)
azure-maps Quick Demo Map App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/quick-demo-map-app.md
Last updated 04/26/2021
-+
azure-maps Schema Stateset Stylesobject https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/schema-stateset-stylesobject.md
Last updated 12/07/2020
-+ # StylesObject Schema reference guide for dynamic Maps
azure-maps Set Drawing Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/set-drawing-options.md
Last updated 01/29/2020
-+
azure-maps Spatial Io Add Ogc Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/spatial-io-add-ogc-map-layer.md
Last updated 03/02/2020
-+ # Add a map layer from the Open Geospatial Consortium (OGC)
azure-maps Spatial Io Add Simple Data Layer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/spatial-io-add-simple-data-layer.md
Last updated 02/29/2020
-+ #Customer intent: As an Azure Maps web sdk user, I want to add simple data layer so that I can render styled features on the map.
azure-maps Spatial Io Connect Wfs Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/spatial-io-connect-wfs-service.md
Last updated 03/03/2020
-+
azure-maps Spatial Io Core Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/spatial-io-core-operations.md
Last updated 03/03/2020
-+
azure-maps Spatial Io Read Write Spatial Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/spatial-io-read-write-spatial-data.md
Last updated 03/01/2020
-+ #Customer intent: As an Azure Maps web sdk user, I want to read and write spatial data so that I can use data for map rendering.
azure-maps Spatial Io Supported Data Format Details https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/spatial-io-supported-data-format-details.md
Last updated 03/03/2020
-+ # Supported data format details
azure-maps Supported Languages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/supported-languages.md
Last updated 12/07/2020
-+ # Localization support in Azure Maps
azure-maps Supported Map Styles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/supported-map-styles.md
Last updated 04/26/2020
-+ # Azure Maps supported built-in map styles
azure-maps Tutorial Create Store Locator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-create-store-locator.md
Last updated 06/07/2021
-+
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-creator-indoor-maps.md
Last updated 5/19/2021
-+ # Tutorial: Use Creator to create indoor maps
azure-maps Tutorial Ev Routing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-ev-routing.md
Last updated 04/26/2021
-+
azure-maps Tutorial Geofence https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-geofence.md
Last updated 7/06/2021
-+
azure-maps Tutorial Iot Hub Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-iot-hub-maps.md
Last updated 06/21/2021
-+ #Customer intent: As a customer, I want to build an IoT system so that I can use Azure Maps APIs for spatial analytics on the device data.
azure-maps Weather Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/weather-coverage.md
-+ # Azure Maps Weather services coverage
azure-maps Weather Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/weather-service-tutorial.md
Last updated 12/07/2020
-+
azure-maps Weather Services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/weather-services-concepts.md
Last updated 09/10/2020
-+ # Weather services in Azure Maps
azure-maps Zoom Levels And Tile Grid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/zoom-levels-and-tile-grid.md
Last updated 07/14/2020
-+ # Zoom levels and tile grid
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/activity-log.md
You can also access Activity log events using the following methods.
- No data ingestion charges for Activity log data stored in a Log Analytics workspace.
- No data retention charges for the first 90 days for Activity log data stored in a Log Analytics workspace.
-[Create a diagnostic setting](./diagnostic-settings.md) to send the Activity log to a Log Analytics workspace. You can send the Activity log from any single subscription to up to five workspaces. Collecting logs across tenants requires [Azure Lighthouse](../../lighthouse/index.yml).
+[Create a diagnostic setting](./diagnostic-settings.md) to send the Activity log to a Log Analytics workspace. You can send the Activity log from any single subscription to up to five workspaces.
Activity log data in a Log Analytics workspace is stored in a table called *AzureActivity* that you can retrieve with a [log query](../logs/log-query-overview.md) in [Log Analytics](../logs/log-analytics-tutorial.md). The structure of this table varies depending on the [category of the log entry](activity-log-schema.md). For a description of the table properties, see the [Azure Monitor data reference](/azure/azure-monitor/reference/tables/azureactivity).
azure-monitor Metrics Aggregation Explained https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-aggregation-explained.md
There are two types of collection periods.
### Granularity
-The minimum time interval is 1 minute, but the underlying system may capture data faster depending on the metric. For example, CPU percentage is tracked every 15 seconds at a regular interval. Because HTTP failures are tracked as transactions, they can easily exceed many more than one a minute. Other metrics such as SQL Storage are captured every 20 minutes. This choice is up to the individual resource provider and type. Most try to provide the smallest interval possible.
+The minimum time granularity is 1 minute, but the underlying system may capture data faster depending on the metric. For example, CPU percentage for an Azure VM is captured at a time interval of 15 seconds. Because HTTP failures are tracked as transactions, they can easily occur more than once per minute. Other metrics such as SQL Storage are captured at a time interval of 20 minutes. The choice of interval is up to the individual resource provider and type. Most try to provide the smallest time interval possible.
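The roll-up from fast raw samples into 1-minute data points can be sketched in plain Python. This is an illustration of the aggregation concept only, not Azure Monitor's implementation; the sample values are made up.

```python
from statistics import mean

# Hypothetical 15-second CPU-percentage samples captured within one minute.
# Azure Monitor would roll these up into a single 1-minute data point,
# exposing one value per supported aggregation type.
samples = [20.0, 80.0, 10.0, 90.0]

one_minute_point = {
    "count": len(samples),   # number of captured values in the interval
    "sum": sum(samples),     # total of the captured values
    "min": min(samples),
    "max": max(samples),
    "avg": mean(samples),    # equals sum / count
}

print(one_minute_point)
```

Note that querying at a coarser granularity (for example, 5 minutes) would apply the same aggregations again over the 1-minute points.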
### Dimensions, splitting, and filtering
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/resource-logs-categories.md
A combination of the resource type (available in the `resourceId` property) and
## Costs
-[Azure Monitor Log Analytics](https://azure.microsoft.com/pricing/details/monitor/), [Azure Storage](https://azure.microsoft.com/en-us/product-categories/storage/), [Event hub](https://azure.microsoft.com/en-us/pricing/details/event-hubs/), and partners who integrate directly with Azure Monitor ([for example Datadog](../../partner-solutions/datadog/overview.md)) have costs associated with ingesting data and storing data. Check the previous links to pricing pages for these services to understand those costs. Resource logs are just one type of data you can send to these locations.
+[Azure Monitor Log Analytics](https://azure.microsoft.com/pricing/details/monitor/), [Azure Storage](https://azure.microsoft.com/product-categories/storage/), [Event hub](https://azure.microsoft.com/pricing/details/event-hubs/), and partners who integrate directly with Azure Monitor ([for example Datadog](../../partner-solutions/datadog/overview.md)) have costs associated with ingesting data and storing data. Check the previous links to pricing pages for these services to understand those costs. Resource logs are just one type of data you can send to these locations.
In addition, there may be costs to export some categories of resource logs to those locations. Those logs with possible export costs are listed in the table below. For more information on export pricing, see the *Platform Logs* section in the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
azure-monitor Network Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/network-insights-overview.md
Diagnostic Toolkit provides access to all the diagnostic features available for
![Screenshot that shows the Diagnostic Toolkit tab.](media/network-insights-overview/azure-monitor-for-networks-diagnostic-toolkit.png)
-## Onboarded resources
-
-Onboarded resources have built-in workbooks, and dependency views. Currently onboarded resources are Virtual WAN, Application Gateway, Load Balancer, and ExpressRoute.
+## Availability of resources
+
+By default, all networking resources are visible in Network Insights. Customers can select a resource type to view resource health and metrics (if available), subscription details, location, and more. A subset of networking resources have been _Onboarded_. For onboarded resources, customers have access to a resource-specific topology view and a built-in metrics workbook. These out-of-the-box experiences make it easier to explore resource metrics and troubleshoot issues.
+
+Resources that have been onboarded are:
+* Virtual WAN
+* Application Gateway
+* Load Balancer
+* ExpressRoute
+* Private Link
+* NAT Gateway
+* Public IP
+* NIC
## Troubleshooting
For general troubleshooting guidance, see the dedicated workbook-based insights [troubleshooting article](troubleshoot-workbooks.md).
azure-monitor Sql Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-overview.md
There is no direct cost for SQL insights. All costs are incurred by the virtual
**Virtual machines**
-For virtual machines, you're charged based on the pricing published on the [virtual machines pricing page](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/). The number of virtual machines required will vary based on the number of connection strings you want to monitor. We recommend to allocate 1 virtual machine of size Standard_B2s for every 100 connection strings. See [Azure virtual machine requirements](sql-insights-enable.md#azure-virtual-machine-requirements) for more details.
+For virtual machines, you're charged based on the pricing published on the [virtual machines pricing page](https://azure.microsoft.com/pricing/details/virtual-machines/linux/). The number of virtual machines required will vary based on the number of connection strings you want to monitor. We recommend allocating one virtual machine of size Standard_B2s for every 100 connection strings. See [Azure virtual machine requirements](sql-insights-enable.md#azure-virtual-machine-requirements) for more details.
**Log Analytics workspaces**
azure-monitor Azure Data Explorer Monitor Cross Service Query https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/azure-data-explorer-monitor-cross-service-query.md
Title: Cross service query between Azure Monitor and Azure Data Explorer (preview)
+ Title: Cross service query between Azure Monitor and Azure Data Explorer
description: Query Azure Data Explorer data through Azure Log Analytics tools and vice versa to join and analyze all your data in one place.
Last updated 06/12/2020
-# Cross service query - Azure Monitor and Azure Data Explorer (Preview)
+# Cross service query - Azure Monitor and Azure Data Explorer
Create cross service queries between [Azure Data Explorer](/azure/data-explorer/), [Application Insights](../app/app-insights-overview.md), and [Log Analytics](../logs/data-platform-logs.md).
## Azure Monitor and Azure Data Explorer cross-service querying
This experience enables you to [create cross service queries between Azure Data Explorer and Azure Monitor](/azure/data-explorer/query-monitor-data) and to [create cross service queries between Azure Monitor and Azure Data Explorer](./azure-monitor-data-explorer-proxy.md).
CustomEvents | where aField == 1
```
The outer query queries a table in the workspace, and then joins with another table in an Azure Data Explorer cluster (in this case, clustername=help, databasename=samples) by using a new "adx()" function, similar to how you can query another workspace from inside query text.
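Conceptually, the cross-service query filters rows in the workspace table and joins each surviving row with matching rows pulled from the Azure Data Explorer cluster. A minimal pure-Python sketch of those join semantics follows; all table contents, keys, and the stand-in `adx` function are hypothetical, not the actual service API.

```python
# Sketch of cross-service join semantics: rows from a workspace table are
# filtered, then joined on a key with rows from a remote cluster table.
workspace_custom_events = [
    {"aField": 1, "key": "a"},
    {"aField": 0, "key": "b"},
    {"aField": 1, "key": "c"},
]

def adx(cluster_and_table):
    # Stand-in for the adx() function, which would fetch rows from an
    # Azure Data Explorer cluster; here it returns canned rows.
    return [{"key": "a", "extra": 10}, {"key": "c", "extra": 30}]

# Outer query: filter the workspace table (like `where aField == 1`).
filtered = [row for row in workspace_custom_events if row["aField"] == 1]

# Index the remote rows by join key, then perform an inner join.
remote = {row["key"]: row for row in adx("help/Samples.SomeTable")}
joined = [{**row, **remote[row["key"]]} for row in filtered if row["key"] in remote]

print(joined)
```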
-> [!NOTE]
-> * The ability to query Azure Monitor data from Azure Data Explorer, either directly from Azure Data Explorer client tools, or indirectly by running a query on an Azure Data Explorer cluster, is in preview mode.
-> * Contact the [Cross service query](mailto:adxproxy@microsoft.com) team with any questions.
-
## Query exported Log Analytics data from Azure Blob storage account
Exporting data from Azure Monitor to an Azure storage account enables low-cost retention and the ability to reallocate logs to different regions.
-Use Azure Data Explorer to query data that was exported from your Log Analytics workspaces. Once configured, supported tables that are sent from your workspaces to an Azure storage account will be available as a data source for Azure Data Explorer. [Query exported data from Azure Monitor using Azure Data Explorer (preview)](../logs/azure-data-explorer-query-storage.md).
+Use Azure Data Explorer to query data that was exported from your Log Analytics workspaces. Once configured, supported tables that are sent from your workspaces to an Azure storage account will be available as a data source for Azure Data Explorer. [Query exported data from Azure Monitor using Azure Data Explorer](../logs/azure-data-explorer-query-storage.md).
[Azure Data Explorer query from storage flow](media\azure-data-explorer-query-storage\exported-data-query.png)
>[!tip]
-> * To export all data from your Log Analytics workspace to an Azure storage account or event hub, use the Log Analytics workspace data export feature of Azure Monitor Logs. [See Log Analytics workspace data export in Azure Monitor (preview)](/azure/data-explorer/query-monitor-data).
+> * To export all data from your Log Analytics workspace to an Azure storage account or event hub, use the Log Analytics workspace data export feature of Azure Monitor Logs. [See Log Analytics workspace data export in Azure Monitor](/azure/data-explorer/query-monitor-data).
## Next steps
Learn more about:
* [create cross service queries between Azure Data Explorer and Azure Monitor](/azure/data-explorer/query-monitor-data). Query Azure Monitor data from Azure Data Explorer
* [create cross service queries between Azure Monitor and Azure Data Explorer](./azure-monitor-data-explorer-proxy.md). Query Azure Data Explorer data from Azure Monitor
-* [Log Analytics workspace data export in Azure Monitor (preview)](/azure/data-explorer/query-monitor-data). Link and query Azure Blob storage account with Log Analytics Exported data.
+* [Log Analytics workspace data export in Azure Monitor](/azure/data-explorer/query-monitor-data). Link and query Azure Blob storage account with Log Analytics Exported data.
azure-monitor Azure Data Explorer Monitor Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/azure-data-explorer-monitor-proxy.md
Title: Query data in Azure Monitor using Azure Data Explorer (preview)
+ Title: Query data in Azure Monitor using Azure Data Explorer
description: Use Azure Data Explorer to perform cross product queries between Azure Data Explorer, Log Analytics workspaces and classic Application Insights applications in Azure Monitor.
Last updated 10/13/2020
-# Query data in Azure Monitor using Azure Data Explorer (Preview)
+# Query data in Azure Monitor using Azure Data Explorer
Azure Data Explorer supports cross service queries between Azure Data Explorer, [Application Insights (AI)](../app/app-insights-overview.md), and [Log Analytics (LA)](./data-platform-logs.md). You can then query your Log Analytics/Application Insights workspace using Azure Data Explorer tools and refer to it in a cross service query. This article shows how to make a cross service query and how to add the Log Analytics/Application Insights workspace to the Azure Data Explorer Web UI.
The Azure Data Explorer cross service queries flow:
:::image type="content" source="media\azure-data-explorer-monitor-proxy\azure-data-explorer-monitor-flow.png" alt-text="Azure data explorer proxy flow.":::
-> [!NOTE]
-> * The ability to query Azure Monitor data from Azure Data Explorer, either directly from Azure Data Explorer client tools, or indirectly by running a query on an Azure Data Explorer cluster, is in preview mode.
->* Contact the [Cross service query](mailto:adxproxy@microsoft.com) team with any questions.
-
## Add a Log Analytics/Application Insights workspace to Azure Data Explorer client tools
1. Verify your Azure Data Explorer native cluster (such as *help* cluster) appears on the left menu before you connect to your Log Analytics or Application Insights cluster.
azure-monitor Azure Data Explorer Query Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/azure-data-explorer-query-storage.md
Title: Query exported data from Azure Monitor using Azure Data Explorer (preview)
+ Title: Query exported data from Azure Monitor using Azure Data Explorer
description: Use Azure Data Explorer to query data that was exported from your Log Analytics workspace to an Azure storage account.
-# Query exported data from Azure Monitor using Azure Data Explorer (preview)
+# Query exported data from Azure Monitor using Azure Data Explorer
Exporting data from Azure Monitor to an Azure storage account enables low-cost retention and the ability to reallocate logs to different regions. Use Azure Data Explorer to query data that was exported from your Log Analytics workspaces. Once configured, supported tables that are sent from your workspaces to an Azure storage account will be available as a data source for Azure Data Explorer. The process flow is as follows:
The process flow is as follows:
## Send data to Azure storage
Azure Monitor logs can be exported to an Azure Storage Account using any of the following options.
-- To export all data from your Log Analytics workspace to an Azure storage account or event hub, use the Log Analytics workspace data export feature of Azure Monitor Logs. See [Log Analytics workspace data export in Azure Monitor (preview)](./logs-data-export.md)
+- To export all data from your Log Analytics workspace to an Azure storage account or event hub, use the Log Analytics workspace data export feature of Azure Monitor Logs. See [Log Analytics workspace data export in Azure Monitor](./logs-data-export.md)
- Scheduled export from a log query using a Logic App. This is similar to the data export feature but allows you to send filtered or aggregated data to Azure storage. This method, though, is subject to [log query limits](../service-limits.md#log-analytics-workspaces). See [Archive data from Log Analytics workspace to Azure storage using Logic App](./logs-export-logic-app.md).
- One time export using a Logic App. See [Azure Monitor Logs connector for Logic Apps and Power Automate](./logicapp-flow-connector.md).
- One time export to local machine using PowerShell script. See [Invoke-AzOperationalInsightsQueryExport](https://www.powershellgallery.com/packages/Invoke-AzOperationalInsightsQueryExport).
azure-monitor Data Ingestion Time https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/data-ingestion-time.md
Heartbeat
```
## Next steps
-* Read the [Service Level Agreement (SLA)](https://azure.microsoft.com/en-us/support/legal/sla/monitor/v1_3/) for Azure Monitor.
+* Read the [Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/monitor/v1_3/) for Azure Monitor.
azure-monitor Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-configure.md
+
+ Title: Configure your Private Link
+description: Configure Private Link
+ Last updated : 08/01/2021
+# Configure your Private Link
+Configuring a Private Link requires a few steps:
+* Creating a Private Link Scope with resources
+* Creating a Private Endpoint on your network and connecting it to the scope
+* Configuring the required access on your Azure Monitor resources.
+
+This article reviews how it's done through the Azure portal and provides an example Azure Resource Manager (ARM) template to automate the process.
+
+## Create a Private Link connection
+
+Start by creating an Azure Monitor Private Link Scope resource.
+
+1. Go to **Create a resource** in the Azure portal and search for **Azure Monitor Private Link Scope**.
+
+ ![Find Azure Monitor Private Link Scope](./media/private-link-security/ampls-find-1c.png)
+
+2. Select **create**.
+3. Pick a Subscription and Resource Group.
+4. Give the AMPLS a name. It's best to use a meaningful and clear name, such as "AppServerProdTelem".
+5. Select **Review + Create**.
+
+ ![Create Azure Monitor Private Link Scope](./media/private-link-security/ampls-create-1d.png)
+
+6. Let the validation pass, and then select **Create**.
+
+### Connect Azure Monitor resources
+
+Connect Azure Monitor resources (Log Analytics workspaces and Application Insights components) to your AMPLS.
+
+1. In your Azure Monitor Private Link scope, select **Azure Monitor Resources** in the left-hand menu. Select the **Add** button.
+2. Add the workspace or component. Selecting the **Add** button brings up a dialog where you can select Azure Monitor resources. You can browse through your subscriptions and resource groups, or you can type in their name to filter down to them. Select the workspace or component and select **Apply** to add them to your scope.
+
+ ![Screenshot of select a scope UX](./media/private-link-security/ampls-select-2.png)
+
+> [!NOTE]
+> Deleting Azure Monitor resources requires that you first disconnect them from any AMPLS objects they are connected to. It's not possible to delete resources connected to an AMPLS.
+
+### Connect to a private endpoint
+
+Now that you have resources connected to your AMPLS, create a private endpoint to connect your network. You can do this task in the [Azure portal Private Link center](https://portal.azure.com/#blade/Microsoft_Azure_Network/PrivateLinkCenterBlade/privateendpoints) or inside your Azure Monitor Private Link Scope, as done in this example.
+
+1. In your scope resource, select **Private Endpoint connections** in the left-hand resource menu. Select **Private Endpoint** to start the endpoint create process. You can also approve connections that were started in the Private Link center here by selecting them and selecting **Approve**.
+
+ ![Screenshot of Private Endpoint Connections UX](./media/private-link-security/ampls-select-private-endpoint-connect-3.png)
+
+2. Pick the subscription, resource group, and name of the endpoint, and the region it should live in. The region needs to be the same region as the VNet you connect it to.
+
+3. Select **Next: Resource**.
+
+4. In the Resource screen,
+
+ a. Pick the **Subscription** that contains your Azure Monitor Private Scope resource.
+
+ b. For **resource type**, choose **Microsoft.insights/privateLinkScopes**.
+
 c. From the **resource** drop-down, choose the Private Link Scope you created earlier.
+
+ d. Select **Next: Configuration >**.
+ ![Screenshot of select Create Private Endpoint](./media/private-link-security/ampls-select-private-endpoint-create-4.png)
+
+5. On the configuration pane,
+
+ a. Choose the **virtual network** and **subnet** that you want to connect to your Azure Monitor resources.
+
+ b. Choose **Yes** for **Integrate with private DNS zone**, and let it automatically create a new Private DNS Zone. The actual DNS zones may be different from what is shown in the screenshot below.
+ > [!NOTE]
+ > If you choose **No** and prefer to manage DNS records manually, first complete setting up your Private Link - including this Private Endpoint and the AMPLS configuration. Then, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md). Make sure not to create empty records as preparation for your Private Link setup. The DNS records you create can override existing settings and impact your connectivity with Azure Monitor.
+
+ c. Select **Review + create**.
+
+ d. Let validation pass.
+
+ e. Select **Create**.
+
+ ![Screenshot of select Private Endpoint details.](./media/private-link-security/ampls-select-private-endpoint-create-5.png)
+
+You've now created a new private endpoint that is connected to this AMPLS.
++
+## Configure access to your resources
+So far we've covered the configuration of your network, but you should also consider how you want to configure network access to your monitored resources - Log Analytics workspaces and Application Insights components.
+
+Go to the Azure portal. In your resource's menu, there's a menu item called **Network Isolation** on the left-hand side. This page controls both which networks can reach the resource through a Private Link, and whether other networks can reach it or not.
++
+> [!NOTE]
+> Starting August 16, 2021, Network Isolation will be strictly enforced. Resources that are set to block queries from public networks, and that aren't connected to any private network (through an AMPLS), will stop accepting queries from any network.
+
+![LA Network Isolation](./media/private-link-security/ampls-network-isolation.png)
+
+### Connected Azure Monitor Private Link scopes
+Here you can review and configure the resource's connections to Azure Monitor Private Link Scopes. Connecting to scopes (AMPLSs) allows traffic from the virtual network connected to each AMPLS to reach the resource. It has the same effect as connecting the resource from the scope, as we did in [Connecting Azure Monitor resources](#connect-azure-monitor-resources).
+
+To add a new connection, select **Add** and select the Azure Monitor Private Link Scope. Select **Apply** to connect it. Your resource can connect to up to five AMPLS objects, as mentioned in [Consider AMPLS limits](./private-link-design.md#consider-ampls-limits).
+
+### Virtual networks access configuration - Managing access from outside of private links scopes
+The settings on the bottom part of this page control access from public networks, meaning networks not connected to the listed scopes (AMPLSs).
+
+If you set **Allow public network access for ingestion** to **No**, then clients (machines, SDKs, etc.) outside of the connected scopes can't upload data or send logs to the resource.
+
+If you set **Allow public network access for queries** to **No**, then clients (machines, SDKs etc.) outside of the connected scopes can't query data in the resource. That data includes access to logs, metrics, and the live metrics stream, as well as experiences built on top such as workbooks, dashboards, query API-based client experiences, insights in the Azure portal, and more. Experiences running outside the Azure portal and that query Log Analytics data also have to be running within the private-linked VNET.
++
+### Exceptions
+
+#### Diagnostic logs
+Logs and metrics uploaded to a workspace via [Diagnostic Settings](../essentials/diagnostic-settings.md) go over a secure private Microsoft channel, and are not controlled by these settings.
+
+#### Azure Resource Manager
+Restricting access as explained above applies to data in the resource. However, configuration changes, including turning these access settings on or off, are managed by Azure Resource Manager. To control these settings, you should restrict access to resources using the appropriate roles, permissions, network controls, and auditing. For more information, see [Azure Monitor Roles, Permissions, and Security](../roles-permissions-security.md)
+
+Additionally, specific experiences (such as the LogicApp connector, Update Management solution, and the Workspace Summary blade in the portal, showing the solutions dashboard) query data through Azure Resource Manager and therefore won't be able to query data unless Private Link settings are applied to the Resource Manager as well.
++
+## Review and validate your Private Link setup
+
+### Reviewing your Endpoint's DNS settings
+The Private Endpoint you created should now have five DNS zones configured:
+
+* privatelink-monitor-azure-com
+* privatelink-oms-opinsights-azure-com
+* privatelink-ods-opinsights-azure-com
+* privatelink-agentsvc-azure-automation-net
+* privatelink-blob-core-windows-net
+
+> [!NOTE]
+> Each of these zones maps specific Azure Monitor endpoints to private IPs from the VNet's pool of IPs. The IP addresses shown in the images below are only examples. Your configuration should instead show private IPs from your own network.
+
+#### Privatelink-monitor-azure-com
+This zone covers the global endpoints used by Azure Monitor, meaning these endpoints serve requests considering all resources, not a specific one. This zone should have endpoints mapped for:
+* `in.ai` - Application Insights ingestion endpoint (both a global and a regional entry)
+* `api` - Application Insights and Log Analytics API endpoint
+* `live` - Application Insights live metrics endpoint
+* `profiler` - Application Insights profiler endpoint
+* `snapshot` - Application Insights snapshots endpoint
+[![Screenshot of Private DNS zone monitor-azure-com.](./media/private-link-security/dns-zone-privatelink-monitor-azure-com.png)](./media/private-link-security/dns-zone-privatelink-monitor-azure-com-expanded.png#lightbox)
+
+#### privatelink-oms-opinsights-azure-com
+This zone covers workspace-specific mapping to OMS endpoints. You should see an entry for each workspace linked to the AMPLS connected with this Private Endpoint.
+[![Screenshot of Private DNS zone oms-opinsights-azure-com.](./media/private-link-security/dns-zone-privatelink-oms-opinsights-azure-com.png)](./media/private-link-security/dns-zone-privatelink-oms-opinsights-azure-com-expanded.png#lightbox)
+
+#### privatelink-ods-opinsights-azure-com
+This zone covers workspace-specific mapping to ODS endpoints - the ingestion endpoint of Log Analytics. You should see an entry for each workspace linked to the AMPLS connected with this Private Endpoint.
+[![Screenshot of Private DNS zone ods-opinsights-azure-com.](./media/private-link-security/dns-zone-privatelink-ods-opinsights-azure-com.png)](./media/private-link-security/dns-zone-privatelink-ods-opinsights-azure-com-expanded.png#lightbox)
+
+#### privatelink-agentsvc-azure-automation-net
+This zone covers workspace-specific mapping to the agent service automation endpoints. You should see an entry for each workspace linked to the AMPLS connected with this Private Endpoint.
+[![Screenshot of Private DNS zone agent svc-azure-automation-net.](./media/private-link-security/dns-zone-privatelink-agentsvc-azure-automation-net.png)](./media/private-link-security/dns-zone-privatelink-agentsvc-azure-automation-net-expanded.png#lightbox)
+
+#### privatelink-blob-core-windows-net
+This zone configures connectivity to the global agents' solution packs storage account. Through it, agents can download new or updated solution packs (also known as management packs). Only one entry is required to handle Log Analytics agents, no matter how many workspaces are used.
+[![Screenshot of Private DNS zone blob-core-windows-net.](./media/private-link-security/dns-zone-privatelink-blob-core-windows-net.png)](./media/private-link-security/dns-zone-privatelink-blob-core-windows-net-expanded.png#lightbox)
+> [!NOTE]
+> This entry is only added to Private Links setups created at or after April 19, 2021 (or starting June, 2021 on Azure Sovereign clouds).
++
+### Validating you are communicating over a Private Link
+* To validate that your requests are now sent through the Private Endpoint, you can review them with a network tracking tool or even your browser. For example, when attempting to query your workspace or application, make sure the request is sent to the private IP mapped to the API endpoint (in this example, *172.17.0.9*).
+
+ Note: Some browsers may use other DNS settings (see [Browser DNS settings](./private-link-design.md#browser-dns-settings)). Make sure your DNS settings apply.
+
+* To make sure your workspace or component aren't receiving requests from public networks (not connected through AMPLS), set the resource's public ingestion and query flags to *No* as explained in [Configure access to your resources](#configure-access-to-your-resources).
+
+* From a client on your protected network, use `nslookup` to any of the endpoints listed in your DNS zones. It should be resolved by your DNS server to the mapped private IPs instead of the public IPs used by default.
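The private-IP check in the steps above can also be scripted. The following is a minimal sketch using Python's standard `ipaddress` and `socket` modules; the workspace endpoint shown in the comment is a hypothetical placeholder, so substitute an endpoint from your own DNS zones:

```python
import ipaddress
import socket

def resolves_privately(endpoint: str) -> bool:
    """Resolve an endpoint and check whether the answer is a private (RFC 1918) IP.

    A working Private Link setup should resolve Azure Monitor endpoints to
    private IPs from your VNet's pool (for example, 172.17.0.9 in this article).
    """
    ip = socket.gethostbyname(endpoint)
    return ipaddress.ip_address(ip).is_private

# The article's example private IP is indeed private, while a public IP is not:
print(ipaddress.ip_address("172.17.0.9").is_private)  # True
print(ipaddress.ip_address("20.50.0.1").is_private)   # False

# Hypothetical workspace endpoint - substitute one from your Private DNS zones:
# resolves_privately("<workspace-id>.ods.opinsights.azure.com")
```

If the function returns `False` for your endpoints, your DNS is still handing back the public addresses and traffic is bypassing the Private Link.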
++
+## Use APIs and command line
+
+You can automate the process described earlier using Azure Resource Manager templates, REST, and command-line interfaces.
+
+To create and manage private link scopes, use the [REST API](/rest/api/monitor/privatelinkscopes(preview)/private%20link%20scoped%20resources%20(preview)) or [Azure CLI (az monitor private-link-scope)](/cli/azure/monitor/private-link-scope).
+
+To manage the network access flag on your workspace or component, use the flags `[--ingestion-access {Disabled, Enabled}]` and `[--query-access {Disabled, Enabled}]` on [Log Analytics workspaces](/cli/azure/monitor/log-analytics/workspace) or [Application Insights components](/cli/azure/ext/application-insights/monitor/app-insights/component).
+
+### Example Azure Resource Manager template (ARM template)
+The below Azure Resource Manager template creates:
+* A private link scope (AMPLS) named "my-scope"
+* A Log Analytics workspace named "my-workspace"
+* A scoped resource named "my-workspace-connection", which connects the workspace to the "my-scope" AMPLS
+
+```json
+{
+ "$schema": https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#,
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "private_link_scope_name": {
+ "defaultValue": "my-scope",
+ "type": "String"
+ },
+ "workspace_name": {
+ "defaultValue": "my-workspace",
+ "type": "String"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "microsoft.insights/privatelinkscopes",
+ "apiVersion": "2019-10-17-preview",
+ "name": "[parameters('private_link_scope_name')]",
+ "location": "global",
+ "properties": {}
+ },
+ {
+ "type": "microsoft.operationalinsights/workspaces",
+ "apiVersion": "2020-10-01",
+ "name": "[parameters('workspace_name')]",
+ "location": "westeurope",
+ "properties": {
+ "sku": {
+ "name": "pergb2018"
+ },
+ "publicNetworkAccessForIngestion": "Enabled",
+ "publicNetworkAccessForQuery": "Enabled"
+ }
+ },
+ {
+ "type": "microsoft.insights/privatelinkscopes/scopedresources",
+ "apiVersion": "2019-10-17-preview",
+ "name": "[concat(parameters('private_link_scope_name'), '/', concat(parameters('workspace_name'), '-connection'))]",
+ "dependsOn": [
+ "[resourceId('microsoft.insights/privatelinkscopes', parameters('private_link_scope_name'))]",
+ "[resourceId('microsoft.operationalinsights/workspaces', parameters('workspace_name'))]"
+ ],
+ "properties": {
+ "linkedResourceId": "[resourceId('microsoft.operationalinsights/workspaces', parameters('workspace_name'))]"
+ }
+ }
+ ]
+}
+```
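To see how the template's `concat()` expression names the connection resource, here's a small Python sketch that mirrors it. The function is illustrative only (not part of any Azure SDK), using the template's default parameter values:

```python
# Trimmed copy of the scoped-resource entry from the template above.
scoped_resource = {
    "type": "microsoft.insights/privatelinkscopes/scopedresources",
    "dependsOn": [
        "[resourceId('microsoft.insights/privatelinkscopes', parameters('private_link_scope_name'))]",
        "[resourceId('microsoft.operationalinsights/workspaces', parameters('workspace_name'))]",
    ],
}

def resolved_name(scope: str, workspace: str) -> str:
    """Mirror the template's concat() expression for the scoped-resource name."""
    return f"{scope}/{workspace}-connection"

# With the template's default parameter values:
print(resolved_name("my-scope", "my-workspace"))  # my-scope/my-workspace-connection

# The scoped resource deploys only after both the scope and the workspace exist.
assert len(scoped_resource["dependsOn"]) == 2
```

The `dependsOn` entries matter: without them, Resource Manager could try to create the connection before its parent scope or workspace exists.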
+
+## Next steps
+
+- Learn about [private storage](private-storage.md)
+- Learn about [Private Link for Automation](../../automation/how-to/private-link-security.md)
azure-monitor Private Link Design https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-design.md
+
+ Title: Design your Private Link setup
+description: Design your Private Link setup
+ Last updated : 08/01/2021
+# Design your Private Link setup
+
+Before setting up your Azure Monitor Private Link setup, consider your network topology, and specifically your DNS routing topology.
+As discussed in [How it works](./private-link-security.md#how-it-works), setting up a Private Link affects traffic to all Azure Monitor resources. That's especially true for Application Insights resources. Additionally, it affects not only the network connected to the Private Endpoint but also all other networks that share the same DNS.
+
+> [!NOTE]
+> The simplest and most secure approach would be:
+> 1. Create a single Private Link connection, with a single Private Endpoint and a single AMPLS. If your networks are peered, create the Private Link connection on the shared (or hub) VNet.
+> 2. Add *all* Azure Monitor resources (Application Insights components and Log Analytics workspaces) to that AMPLS.
+> 3. Block network egress traffic as much as possible.
+
+If for some reason you can't use a single Private Link and a single Azure Monitor Private Link Scope (AMPLS), the next best thing would be to create separate Private Link connections for isolated networks. If you use (or can align with) spoke VNets, follow the guidance in [Hub-spoke network topology in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke). Then, set up separate Private Link settings in the relevant spoke VNets. **Make sure to separate DNS zones as well**, since sharing DNS zones with other spoke networks will cause DNS overrides.
+
+## Plan by network topology
+### Hub-spoke networks
+Hub-spoke topologies can avoid the issue of DNS overrides by setting the Private Link on the hub (main) VNet, and not on each spoke VNet. This setup makes sense especially if the Azure Monitor resources used by the spoke VNets are shared.
+
+![Hub-and-spoke-single-PE](./media/private-link-security/hub-and-spoke-with-single-private-endpoint.png)
+
+> [!NOTE]
+> You may intentionally prefer to create separate Private Links for your spoke VNets, for example to allow each VNet to access a limited set of monitoring resources. In such cases, you can create a dedicated Private Endpoint and AMPLS for each VNet, but **must also verify they don't share the same DNS zones in order to avoid DNS overrides**.
+
+### Peered networks
+Network peering is used in various topologies, other than hub-spoke. Such networks can reach each other's IP addresses, and most likely share the same DNS. In such cases, our recommendation is similar to hub-spoke: select a single network that is reached by all other (relevant) networks and set the Private Link connection on that network. Avoid creating multiple Private Endpoints and AMPLS objects, since ultimately only the last one set in the DNS will apply.
+
+## Isolated networks
+If your networks aren't peered, **you must also separate their DNS in order to use Private Links**. Once that's done, you can create a Private Link for one (or many) of the networks, without affecting traffic of other networks. That means creating a separate Private Endpoint for each network, and a separate AMPLS object. Your AMPLS objects can link to the same workspaces/components, or to different ones.
+
+### Testing with a local bypass: Edit your machine's hosts file instead of the DNS
+As a local bypass to the all-or-nothing behavior, you can choose not to update your DNS with the Private Link records, and instead edit the hosts files on select machines so that only these machines send requests to the Private Link endpoints.
+* Set up a Private Link, but when connecting to a Private Endpoint choose **not** to auto-integrate with the DNS (step 5b).
+* Configure the relevant endpoints on your machines' hosts files. To review the Azure Monitor endpoints that need mapping, see [Reviewing your Endpoint's DNS settings](./private-link-configure.md#reviewing-your-endpoints-dns-settings).
+
+That approach isn't recommended for production environments.
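If you do take this approach for testing, generating the hosts entries from a mapping can reduce typos. The sketch below is illustrative; the endpoint names and private IPs are hypothetical examples, and the real values come from your Private DNS zones:

```python
def hosts_lines(endpoint_ips: dict) -> str:
    """Render endpoint-to-private-IP mappings as hosts-file lines."""
    return "\n".join(f"{ip}\t{host}" for host, ip in endpoint_ips.items())

# Hypothetical private IPs and endpoints - take yours from the Private DNS zones.
mapping = {
    "api.monitor.azure.com": "172.17.0.9",
    "myworkspace.ods.opinsights.azure.com": "172.17.0.10",
}

# Append the output to /etc/hosts (Linux/macOS) or to
# C:\Windows\System32\drivers\etc\hosts (Windows) on the selected machines.
print(hosts_lines(mapping))
```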
+
+## Consider AMPLS limits
+The AMPLS object has the following limits:
+* A VNet can only connect to **one** AMPLS object. That means the AMPLS object must provide access to all the Azure Monitor resources the VNet should have access to.
+* An AMPLS object can connect to 50 Azure Monitor resources at most.
+* An Azure Monitor resource (Workspace or Application Insights component) can connect to 5 AMPLSs at most.
+* An AMPLS object can connect to 10 Private Endpoints at most.
+
+In the below diagram:
+* Each VNet connects to only **one** AMPLS object.
+* AMPLS A connects to two workspaces and one Application Insights component, using 3 of the 50 possible Azure Monitor resource connections.
+* Workspace2 connects to AMPLS A and AMPLS B, using 2 of the 5 possible AMPLS connections.
+* AMPLS B is connected to Private Endpoints of two VNets (VNet2 and VNet3), using 2 of the 10 possible Private Endpoint connections.
+
+![Diagram of AMPLS limits](./media/private-link-security/ampls-limits.png)
++
+## Controlling network access to your resources
+Your Log Analytics workspaces or Application Insights components can be set to accept or block access from public networks, meaning networks not connected to the resource's AMPLS.
+That granularity allows you to set access according to your needs, per workspace. For example, you may accept ingestion only through Private Link connected networks (i.e. specific VNets), but still choose to accept queries from all networks, public and private.
+Note that blocking queries from public networks means that clients (machines, SDKs, etc.) outside of the connected AMPLSs can't query data in the resource. That data includes access to logs, metrics, and the live metrics stream, as well as experiences built on top such as workbooks, dashboards, query API-based client experiences, insights in the Azure portal, and more. Experiences running outside the Azure portal and that query Log Analytics data are also affected by that setting.
++
+## Application Insights considerations
+* You'll need to add resources hosting the monitored workloads to a private link. For example, see [Using Private Endpoints for Azure Web App](../../app-service/networking/private-endpoint.md).
+* Non-portal consumption experiences must also run on the private-linked VNET that includes the monitored workloads.
+* In order to support Private Links for Profiler and Debugger, you'll need to [provide your own storage account](../app/profiler-bring-your-own-storage.md).
+
+> [!NOTE]
+> To fully secure workspace-based Application Insights, you need to lock down access to both the Application Insights resource and the underlying Log Analytics workspace.
+
+## Log Analytics considerations
+
+### Log Analytics solution packs download
+Log Analytics agents need to access a global storage account to download solution packs. Private Link setups created at or after April 19, 2021 (or starting June 2021 on Azure Sovereign clouds) can reach the agents' solution packs storage over the private link. This capability is made possible through a DNS zone created for 'blob.core.windows.net'.
+
+If your Private Link setup was created before April 19, 2021, it won't reach the solution packs storage over a private link. To handle that you can either:
+* Re-create your AMPLS and the Private Endpoint connected to it
+* Allow your agents to reach the storage account through its public endpoint, by adding the following rules to your firewall allowlist:
+
+ | Cloud environment | Agent Resource | Ports | Direction |
+ |:--|:--|:--|:--|
+ |Azure Public | scadvisorcontent.blob.core.windows.net | 443 | Outbound
+ |Azure Government | usbn1oicore.blob.core.usgovcloudapi.net | 443 | Outbound
+ |Azure China 21Vianet | mceast2oicore.blob.core.chinacloudapi.cn| 443 | Outbound
+
+### Collecting custom logs and IIS log over Private Link
+Storage accounts are used in the ingestion process of custom logs. By default, service-managed storage accounts are used. However, to ingest custom logs over a Private Link, you must use your own storage accounts and associate them with your Log Analytics workspace(s). See more details on how to set up such accounts using the [command line](/cli/azure/monitor/log-analytics/workspace/linked-storage).
+
+For more information on connecting your own storage account, see [Customer-owned storage accounts for log ingestion](private-storage.md)
+
+### Automation
+If you use Log Analytics solutions that require an Automation account, such as Update Management, Change Tracking, or Inventory, you should also set up a separate Private Link for your Automation account. For more information, see [Use Azure Private Link to securely connect networks to Azure Automation](../../automation/how-to/private-link-security.md).
+
+> [!NOTE]
+> Some products and Azure portal experiences query data through Azure Resource Manager and therefore won't be able to query data over a Private Link, unless Private Link settings are applied to the Resource Manager as well. To overcome this, you can configure your resources to accept queries from public networks as explained in [Controlling network access to your resources](./private-link-design.md#controlling-network-access-to-your-resources) (Ingestion can remain limited to Private Link networks).
> We've identified the following products and experiences that query workspaces through Azure Resource Manager:
+> * LogicApp connector
+> * Update Management solution
+> * Change Tracking solution
+> * The Workspace Summary blade in the portal (showing the solutions dashboard)
+> * VM Insights
+> * Container Insights
+
+## Requirements
+### Agents
+The latest versions of the Windows and Linux agents must be used to support secure ingestion to Log Analytics workspaces. Older versions can't upload monitoring data over a private network.
+
+**Log Analytics Windows agent**
+
+Use the Log Analytics agent version 10.20.18038.0 or later.
+
+**Log Analytics Linux agent**
+
+Use agent version 1.12.25 or later. If you can't, run the following commands on your VM.
+
+```bash
+$ sudo /opt/microsoft/omsagent/bin/omsadmin.sh -X
+$ sudo /opt/microsoft/omsagent/bin/omsadmin.sh -w <workspace id> -s <workspace key>
+```
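When scripting this check, note that dotted version strings must be compared numerically, component by component, rather than as plain strings (lexically, "1.9" would sort above "1.12"). A minimal Python sketch, not part of any agent tooling:

```python
def meets_minimum(version: str, minimum: str = "1.12.25") -> bool:
    """Compare dotted version strings component by component, numerically."""
    def as_tuple(v: str):
        return tuple(int(part) for part in v.split("."))
    return as_tuple(version) >= as_tuple(minimum)

print(meets_minimum("1.13.9"))  # True  - newer than the 1.12.25 minimum
print(meets_minimum("1.9.40"))  # False - a lexical comparison would get this wrong
```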
+### Azure portal
+To use Azure Monitor portal experiences such as Application Insights and Log Analytics, you need to allow the Azure portal and Azure Monitor extensions to be accessible on the private networks. Add **AzureActiveDirectory**, **AzureResourceManager**, **AzureFrontDoor.FirstParty**, and **AzureFrontdoor.Frontend** [service tags](../../firewall/service-tags.md) to your Network Security Group.
+
+### Programmatic access
+To use the REST API, [CLI](/cli/azure/monitor) or PowerShell with Azure Monitor on private networks, add the [service tags](../../virtual-network/service-tags-overview.md) **AzureActiveDirectory** and **AzureResourceManager** to your firewall.
+
+### Application Insights SDK downloads from a content delivery network
+Bundle the JavaScript code in your script so that the browser doesn't attempt to download code from a CDN. An example is provided on [GitHub](https://github.com/microsoft/ApplicationInsights-JS#npm-setup-ignore-if-using-snippet-setup).
+
+### Browser DNS settings
+If you're connecting to your Azure Monitor resources over a Private Link, traffic to these resources must go through the private endpoint that is configured on your network. To enable the private endpoint, update your DNS settings as explained in [Connect to a private endpoint](./private-link-configure.md#connect-to-a-private-endpoint). Some browsers use their own DNS settings instead of the ones you set. The browser might attempt to connect to Azure Monitor public endpoints and bypass the Private Link entirely. Verify that your browser's settings don't override or cache old DNS settings.
+
+### Querying limitation: externaldata operator
+The [`externaldata` operator](/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuremonitor) isn't supported over a Private Link, as it reads data from storage accounts but doesn't guarantee the storage is accessed privately.
+
+## Next steps
+- Learn how to [configure your Private Link](private-link-configure.md)
+- Learn about [private storage](private-storage.md)
+- Learn about [Private Link for Automation](../../automation/how-to/private-link-security.md)
azure-monitor Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-security.md
Last updated 10/05/2020
With [Azure Private Link](../../private-link/private-link-overview.md), you can securely link Azure platform as a service (PaaS) services to your virtual network by using private endpoints. For many services, you just set up an endpoint for each resource. However, Azure Monitor is a constellation of different interconnected services that work together to monitor your workloads.
-You can use a resource called an Azure Monitor Private Link Scope to define the boundaries of your monitoring network and connect to your virtual network. This article covers when to use and how to set up an Azure Monitor Private Link Scope.
-
-This article uses the following abbreviations in examples:
-- AMPLS: Azure Monitor Private Link Scope
-- VNet: virtual network
-- AI: Application Insights
-- LA: Log Analytics
-
## Advantages
-With Private Link, you can:
+With Private Link you can:
-- Connect privately to Azure Monitor without opening any public network access.-- Ensure that your monitoring data is accessed only through authorized private networks.-- Prevent data exfiltration from your private networks by defining specific Azure Monitor resources that connect through your private endpoint.-- Securely connect your private on-premises network to Azure Monitor by using Azure ExpressRoute.-- Keep all traffic inside the Microsoft Azure backbone network.
+- Connect privately to Azure Monitor without opening up any public network access
+- Ensure your monitoring data is only accessed through authorized private networks
+- Prevent data exfiltration from your private networks by defining specific Azure Monitor resources that connect through your private endpoint
+- Securely connect your private on-premises network to Azure Monitor using ExpressRoute and Private Link
+- Keep all traffic inside the Microsoft Azure backbone network
-For more information, see [Key benefits of Private Link](../../private-link/private-link-overview.md#key-benefits).
+For more information, see [Key Benefits of Private Link](../../private-link/private-link-overview.md#key-benefits).
## How it works
-An Azure Monitor Private Link Scope connects private endpoints (and the virtual networks they're contained in) to one or more Azure Monitor resources. These resources are Log Analytics workspaces and Application Insights components.
+Azure Monitor Private Link Scope (AMPLS) connects private endpoints (and the VNets they're contained in) to one or more Azure Monitor resources - Log Analytics workspaces and Application Insights components.
-![Diagram of a basic resource topology.](./media/private-link-security/private-link-basic-topology.png)
+![Diagram of basic resource topology](./media/private-link-security/private-link-basic-topology.png)
-* The private endpoint allows your virtual network to reach Azure Monitor endpoints through private IPs from your network's pool, instead of using the public IPs of these endpoints. That allows you to keep using your Azure Monitor resources without opening your virtual network to unrequired outbound traffic.
-* Traffic from the private endpoint to your Azure Monitor resources will go over the Microsoft Azure backbone and not be routed to public networks.
+* The Private Endpoint on your VNet allows it to reach Azure Monitor endpoints through private IPs from your network's pool, instead of using the public IPs of these endpoints. That allows you to keep using your Azure Monitor resources without opening your VNet to unrequired outbound traffic.
+* Traffic from the Private Endpoint to your Azure Monitor resources will go over the Microsoft Azure backbone, and not be routed to public networks.
* You can configure each of your workspaces or components to allow or deny ingestion and queries from public networks. That provides a resource-level protection, so that you can control traffic to specific resources.

> [!NOTE]
-> A single Azure Monitor resource can belong to multiple Azure Monitor Private Link Scopes, but you can't connect a single virtual network to more than one Azure Monitor Private Link Scope.
-
-### Azure Monitor Private Links and DNS: It's all or nothing
-
-Some Azure Monitor services use global endpoints, meaning that they serve requests targeting any workspace or component. When you set up a Private Link connection, your DNS is updated to map Azure Monitor endpoints to private IPs, in order to send traffic through Private Link.
+> A single Azure Monitor resource can belong to multiple AMPLSs, but you cannot connect a single VNet to more than one AMPLS.
-For global endpoints, setting up a Private Link instance (even to a single resource) affects traffic to all resources. In other words, it's impossible to create a Private Link connection for only a specific component or workspace.
+### Azure Monitor Private Links and your DNS: It's All or Nothing
+Some Azure Monitor services use global endpoints, meaning they serve requests targeting any workspace/component. When you set up a Private Link connection, your DNS is updated to map Azure Monitor endpoints to private IPs, in order to send traffic through the Private Link. When it comes to global endpoints, setting up a Private Link (even to a single resource) affects traffic to all resources. In other words, it's impossible to create a Private Link connection only for a specific component or workspace.
#### Global endpoints
+Most importantly, traffic to the global endpoints listed below will be sent through the Private Link:
+* All Application Insights endpoints - the endpoints handling ingestion, live metrics, profiler, debugger, and so on are global.
+* The Query endpoint - the endpoint handling queries to both Application Insights and Log Analytics resources is global.
-Traffic to the following global endpoints will be sent through Private Link:
-* All Application Insights endpoints. Endpoints that handle ingestion, live metrics, profiler, and debugger (for example) to Application Insights endpoints are global.
-* The query endpoint. The endpoint that handles queries to both Application Insights and Log Analytics resources is global.
+That effectively means that all Application Insights traffic will be sent to the Private Link, and that all queries - to both Application Insights and Log Analytics resources - will be sent to the Private Link.
-Effectively, all Application Insights traffic will be sent to Private Link. All queries, to both Application Insights and Log Analytics resources, will be sent to Private Link.
+Traffic to Application Insights resources not added to your AMPLS will not pass the Private Link validation, and will fail.
-Traffic to Application Insights resources that aren't added to your Azure Monitor Private Link Scope will not pass the Private Link validation, and will fail.
-
-![Diagram of all-or-nothing behavior.](./media/private-link-security/all-or-nothing.png)
+![Diagram of All or Nothing behavior](./media/private-link-security/all-or-nothing.png)
#### Resource-specific endpoints
+All Log Analytics endpoints, except the Query endpoint, are workspace-specific. So, creating a Private Link to a specific Log Analytics workspace won't affect ingestion (or other) traffic to other workspaces, which will continue to use the public Log Analytics endpoints. All queries, however, will be sent through the Private Link.
-All Log Analytics endpoints, except the query endpoint, are workspace specific. Creating a Private Link connection to a specific Log Analytics workspace won't affect ingestion (or other) traffic to other workspaces, which will continue to use the public Log Analytics endpoints. All queries, however, will be sent through Private Link.
-
-### Private Link applies to all networks that share the same DNS
-
-Some networks consist of multiple virtual networks or other connected networks. If these networks share the same DNS, setting up a Private Link instance on any of them would update the DNS and affect traffic across all networks. That's especially important to note, because of the all-or-nothing behavior described earlier.
-
-In the following diagram, virtual network 10.0.1.x first connects to AMPLS1 and maps the Azure Monitor global endpoints to IPs from its range. Later, virtual network 10.0.2.x connects to AMPLS2, and it overrides the DNS mapping of the *same global endpoints* with IPs from its range. Because these virtual networks aren't peered, the first virtual network now fails to reach these endpoints.
-
-![Diagram of DNS overrides in multiple virtual networks.](./media/private-link-security/dns-overrides-multiple-vnets.png)
-
-## Planning your Private Link setup
-
-Before you set up Private Link, consider your network topology, and specifically your DNS routing topology.
-
-As discussed earlier, setting up Private Link affects traffic to all Azure Monitor resources. That's especially true for Application Insights resources. Additionally, it affects not only the network connected to the private endpoint (and through it to the Azure Monitor Private Link Scope resources) but also all other networks that share the same DNS.
-
-Given all that, the simplest and most secure approach would be:
-
-1. Create a single Private Link connection, with a single private endpoint and a single Azure Monitor Private Link Scope. If your networks are peered, create the Private Link connection on the shared (or hub) virtual network.
-2. Add *all* Azure Monitor resources (Application Insights components and Log Analytics workspaces) to that Azure Monitor Private Link Scope.
-3. Block network egress traffic as much as possible.
-
-If you can't use a single Private Link connection and a single Azure Monitor Private Link Scope, the next best thing is to create isolated Private Link connections for isolated networks. If you use (or can align with) spoke virtual networks, follow the guidance in [Hub-and-spoke network topology in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke). Then, set up separate Private Link settings in the relevant spoke virtual networks. *Be sure to separate DNS zones*, because sharing DNS zones with other spoke networks will cause DNS overrides.
-
-### Hub-and-spoke networks
-
-Hub-and-spoke topologies can avoid the issue of DNS overrides by setting the Private Link connection on the hub (main) virtual network, and not on each spoke virtual network. This setup makes sense, especially if the Azure Monitor resources that the spoke virtual networks use are shared.
-
-![Diagram of a hub-and-spoke network with a single private endpoint.](./media/private-link-security/hub-and-spoke-with-single-private-endpoint.png)
-
-> [!NOTE]
-> You might prefer to create separate Private Link connections for your spoke virtual networks - for example, to allow each virtual network to access a limited set of monitoring resources. In such cases, you can create a dedicated Private Endpoint connection and Azure Monitor Private Link Scope for each virtual network. But you must also verify that they don't share the same DNS zones in order to avoid DNS overrides.
-
-### Peered networks
-
-Network peering is used in various topologies other than hub-and-spoke. Such networks can reach each other's IP addresses, and most likely share the same DNS.
-
-In these cases, our recommendation is similar to hub-and-spoke. Select a single network that's reached by all other (relevant) networks, and set the Private Link connection on that network. Avoid creating multiple private endpoints and Azure Monitor Private Link Scope objects, because ultimately only the last one set in the DNS will apply.
-
-### Isolated networks
-
-If your networks aren't peered, *you must also separate their DNS in order to use a Private Link connection*. After that's done, you can create a Private Link connection for one or many networks, without affecting the traffic of other networks. That means creating a separate private endpoint for each network, and a separate Azure Monitor Private Link Scope object. Your Azure Monitor Private Link Scope objects can link to the same workspaces or components, or to different ones.
-
-### Test with a local bypass: Edit your machine's host file instead of DNS
-
-As a local bypass to the all-or-nothing behavior, you can select not to update your DNS with the Private Link records. Instead, you can edit the host files on specific machines so that only these machines send requests to the Private Link endpoints:
-
-* Set up a Private Link connection as described later in the [Connect to a private endpoint](#connect-to-a-private-endpoint) section of this article. But when you're connecting to a private endpoint, choose *not* to automatically integrate with the DNS (step 5b).
-* Configure the relevant endpoints on your machines' host files. To review the Azure Monitor endpoints that need mapping, see [Reviewing your endpoint's DNS settings](#reviewing-your-endpoints-dns-settings).
-
-> [!NOTE]
-> We don't recommend this approach for production environments.
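As a sketch of the host-file approach above, entries could look like the following. The endpoint names and IPs here are illustrative placeholders, not values from this article's setup; take the real mappings from your private endpoint's DNS zones (see [Reviewing your endpoint's DNS settings](#reviewing-your-endpoints-dns-settings)).

```
# Illustrative hosts-file entries (Linux: /etc/hosts, Windows: C:\Windows\System32\drivers\etc\hosts).
# Replace the IPs with the private IPs mapped by your endpoint's DNS zones,
# and <workspace-id> with your Log Analytics workspace ID.
172.17.0.9   api.monitor.azure.com
172.17.0.10  <workspace-id>.ods.opinsights.azure.com
172.17.0.11  <workspace-id>.oms.opinsights.azure.com
```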
-
-## Limits and additional considerations
-
-### Azure Monitor Private Link Scope limits
-
-The Azure Monitor Private Link Scope object has the following limits:
-* A virtual network can connect to only *one* Azure Monitor Private Link Scope object. The Azure Monitor Private Link Scope object must provide access to all the Azure Monitor resources that the virtual network should have access to.
-* An Azure Monitor Private Link Scope object can connect to 50 Azure Monitor resources at most.
-* An Azure Monitor resource (workspace or Application Insights component) can connect to five Azure Monitor Private Link Scopes at most.
-* An Azure Monitor Private Link Scope object can connect to 10 private endpoints at most.
-
-In the following diagram:
-* Each virtual network connects to only one Azure Monitor Private Link Scope object.
-* AMPLS A connects to two workspaces and one Application Insights component, by using three of the 50 possible Azure Monitor resource connections.
-* Workspace 2 connects to AMPLS A and AMPLS B, by using two of the five possible Azure Monitor Private Link Scope connections.
-* AMPLS B is connected to private endpoints of two virtual networks (VNet2 and VNet3), by using two of the 10 possible private endpoint connections.
-
-![Diagram of Azure Monitor Private Link Scope limits.](./media/private-link-security/ampls-limits.png)
-
-### Application Insights considerations
-
-* You'll need to add resources that host the monitored workloads to a Private Link instance. For an example, see [Using Private Endpoints for Azure Web Apps](../../app-service/networking/private-endpoint.md).
-* Non-portal consumption experiences must also run on the connected virtual network that includes the monitored workloads.
-* To support Private Link connections for Profiler and Debugger, you'll need to [provide your own storage account](../app/profiler-bring-your-own-storage.md).
-
-> [!NOTE]
-> To fully secure workspace-based Application Insights, you need to lock down access to both Application Insights resources and the underlying Log Analytics workspace.
-
-### Log Analytics considerations
-
-#### Automation
-
-If you use Log Analytics solutions that require an Azure Automation account, such as Update Management, Change Tracking, or Inventory, you should also set up a separate Private Link connection for your Automation account. For more information, see [Use Azure Private Link to securely connect networks to Azure Automation](../../automation/how-to/private-link-security.md).
-
-#### Log Analytics solution packs
-
-Log Analytics agents need to access a global storage account to download solution packs. Private Link setups created on or after April 19, 2021 (or starting June 2021 on Azure Sovereign clouds) can reach the agents' solution pack storage over the Private Link connection. This capability is made possible through the new DNS zone created for [blob.core.windows.net](#privatelink-blob-core-windows-net).
-
-If your Private Link setup was created before April 19, 2021, it won't reach the solution pack storage over a Private Link connection. To handle that, you can either:
-* Re-create your Azure Monitor Private Link Scope and the private endpoint that's connected to it.
-* Allow your agents to reach the storage account through its public endpoint, by adding the following rules to your firewall allowlist:
-
- | Cloud environment | Agent resource | Ports | Direction |
- |:--|:--|:--|:--|
- |Azure Public | scadvisorcontent.blob.core.windows.net | 443 | Outbound
- |Azure Government | usbn1oicore.blob.core.usgovcloudapi.net | 443 | Outbound
- |Azure China 21Vianet | mceast2oicore.blob.core.chinacloudapi.cn| 443 | Outbound
-
-## Private Link connection setup
-
-Start by creating an Azure Monitor Private Link Scope resource.
-
-1. Go to **Create a resource** in the Azure portal and search for **Azure Monitor Private Link Scope**.
-
- ![Screenshot that shows finding Azure Monitor Private Link Scope.](./media/private-link-security/ampls-find-1c.png)
-
-2. Select **Create**.
-3. Choose a subscription and resource group.
-4. Give the Azure Monitor Private Link Scope a name. It's best to use a meaningful and clear name, such as **AppServerProdTelem**.
-5. Select **Review + create**.
-
- ![Screenshot that shows selections for creating an Azure Monitor Private Link Scope.](./media/private-link-security/ampls-create-1d.png)
-
-6. After validation, select **Create**.
-
-### Connect Azure Monitor resources
-
-Connect Azure Monitor resources (Log Analytics workspaces and Application Insights components) to your Azure Monitor Private Link Scope.
-
-1. In your Azure Monitor Private Link Scope, select **Azure Monitor Resources** on the left menu. Select **Add**.
-2. Add the workspace or component. Selecting the **Add** button brings up a dialog where you can select Azure Monitor resources. You can browse through your subscriptions and resource groups, or you can enter their names to filter down to them. Select the workspace or component, and select **Apply** to add them to your scope.
-
- ![Screenshot of the interface for selecting a scope.](./media/private-link-security/ampls-select-2.png)
-
-> [!NOTE]
-> Deleting Azure Monitor resources requires that you first disconnect them from any Azure Monitor Private Link Scope objects that they're connected to. It's not possible to delete resources that are connected to an Azure Monitor Private Link Scope.
-
-### Connect to a private endpoint
-
-Now that you have resources connected to your Azure Monitor Private Link Scope, create a private endpoint to connect your network. You can do this task in the [Azure portal Private Link center](https://portal.azure.com/#blade/Microsoft_Azure_Network/PrivateLinkCenterBlade/privateendpoints) or inside your Azure Monitor Private Link Scope. This example uses the Azure Monitor Private Link Scope:
-
-1. In your scope resource, select **Private Endpoint connections** on the left resource menu. Select **Private Endpoint** to start the endpoint creation process. You can also approve connections that were started in the Private Link center here by selecting them and then selecting **Approve**.
-
- ![Screenshot of the interface for setting up private endpoint connections.](./media/private-link-security/ampls-select-private-endpoint-connect-3.png)
-
-2. Choose the subscription, resource group, and name of the endpoint. Choose a region that matches the region of the virtual network that you connected the endpoint to.
-
-3. Select **Next: Resource**.
-
-4. On the **Resource** tab:
-
-   a. For **Subscription**, select the subscription that contains your Azure Monitor Private Link Scope resource.
-
- b. For **Resource type**, select **Microsoft.Insights/privateLinkScopes**.
-
- c. For **Resource**, select the Azure Monitor Private Link Scope that you created earlier.
-
- ![Screenshot of resource selections for creating a private endpoint.](./media/private-link-security/ampls-select-private-endpoint-create-4.png)
-
- d. Select **Next: Configuration >**.
-
-5. On the **Configuration** tab:
-
- a. Choose the virtual network and subnet that you want to connect to your Azure Monitor resources.
-
- b. For **Integrate with private DNS zone**, select **Yes** to automatically create a new private DNS zone. The actual DNS zones might be different from what's shown in the following screenshot.
-
- > [!NOTE]
- > If you select **No** and prefer to manage DNS records manually, first complete setting up Private Link - including this private endpoint and the Azure Monitor Private Link Scope configuration. Then, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md).
- >
- > Make sure not to create empty records as preparation for your Private Link setup. The DNS records that you create can override existing settings and affect your connectivity with Azure Monitor.
-
- c. Select **Review + create**.
-
- ![Screenshot of selections for configuring a private endpoint.](./media/private-link-security/ampls-select-private-endpoint-create-5.png)
-
- d. After validation, select **Create**.
-
-You've now created a new private endpoint that's connected to this Azure Monitor Private Link Scope.
-
-## Configure access to your resources
-
-We've now covered the configuration of your network. You should also consider how you want to configure network access to your monitored resources: Log Analytics workspaces and Application Insights components.
-
-Go to the Azure portal. On your resource's menu, an item called **Network Isolation** is on the left side. This page controls both which networks can reach the resource through Private Link and whether other networks can reach it.
-
-> [!NOTE]
-> Starting August 16, 2021, network isolation will be strictly enforced. Resources that are set to block queries from public networks, and that aren't associated with an Azure Monitor Private Link Scope, will stop accepting queries from any network.
-
-![Screenshot that shows network isolation.](./media/private-link-security/ampls-network-isolation.png)
-
-### Connected Azure Monitor Private Link Scopes
-
-On the Azure portal page for network isolation, in the **Azure Monitor Private Links Scopes** area, you can review and configure the resource's connections to Azure Monitor Private Link Scopes. Connecting to Azure Monitor Private Link Scopes allows traffic from the virtual network connected to each Azure Monitor Private Link Scope to reach this resource. It has the same effect as connecting a resource from the scope as we did in [Connect Azure Monitor resources](#connect-azure-monitor-resources).
-
-To add a new connection, select **Add** and select the Azure Monitor Private Link Scope. Select **Apply** to connect it. Your resource can connect to five Azure Monitor Private Link Scope objects, as mentioned in [Restrictions and limitations](#restrictions-and-limitations).
-
-### Managing access from outside Azure Monitor Private Link Scopes
-
-The bottom part of the Azure portal page for network isolation control is the **Virtual networks access configuration** area. These settings control access from public networks, meaning networks not connected to the listed Azure Monitor Private Link Scopes.
-
-If you set **Accept data ingestion from public networks not connected through a Private Link Scope** to **No**, then clients (such as machines and SDKs) outside the connected scopes can't upload data or send logs to this resource.
-
-If you set **Accept queries from public networks not connected through a Private Link Scope** to **No**, then clients (such as machines and SDKs) outside the connected scopes can't query data in this resource. That data includes access to logs, metrics, and the live metrics stream. It also includes experiences built on top, such as workbooks, dashboards, query API-based client experiences, and insights in the Azure portal. Experiences that run outside the Azure portal and that query Log Analytics data also have to be running within the virtual network that uses Private Link.
-
-### Exceptions
-
-#### Diagnostic logs
-
-Logs and metrics uploaded to a workspace via [diagnostic settings](../essentials/diagnostic-settings.md) go over a secure private Microsoft channel and are not controlled by these settings.
-
-#### Azure Resource Manager
-
-Restricting access as explained earlier applies to data in the resource. However, Azure Resource Manager manages configuration changes, including turning these access settings on or off. To control these settings, you should restrict access to resources by using the appropriate roles, permissions, network controls, and auditing. For more information, see [Azure Monitor roles, permissions, and security](../roles-permissions-security.md).
-
-Additionally, specific experiences (such as the Azure Logic Apps connector, the Update Management solution, and the **Workspace Summary** pane in the portal, showing the solution dashboard) query data through Azure Resource Manager. These experiences won't be able to query data unless Private Link settings are also applied to Resource Manager.
-
-## Review and validate your Private Link setup
-
-### Reviewing your endpoint's DNS settings
-The private endpoint that you created should now have five DNS zones configured:
-
-* privatelink-monitor-azure-com
-* privatelink-oms-opinsights-azure-com
-* privatelink-ods-opinsights-azure-com
-* privatelink-agentsvc-azure-automation-net
-* privatelink-blob-core-windows-net
-
-> [!NOTE]
-> Each of these zones maps specific Azure Monitor endpoints to private IPs from the virtual network's pool of IPs. The IP addresses shown in the following images are only examples. Your configuration should instead show private IPs from your own network.
-
-#### privatelink-monitor-azure-com
-
-This zone covers the global endpoints that Azure Monitor uses. These endpoints serve requests that consider all resources, not a specific one. This zone should have endpoints mapped for:
-
-* `in.ai`: Application Insights ingestion endpoint (both a global and a regional entry).
-* `api`: Application Insights and Log Analytics API endpoint.
-* `live`: Application Insights live metrics endpoint.
-* `profiler`: Application Insights profiler endpoint.
-* `snapshot`: Application Insights snapshot endpoint.
-
-[![Screenshot of the private D N S zone for global endpoints.](./media/private-link-security/dns-zone-privatelink-monitor-azure-com.png)](./media/private-link-security/dns-zone-privatelink-monitor-azure-com-expanded.png#lightbox)
-
-#### privatelink-oms-opinsights-azure-com
-
-This zone covers workspace-specific mapping to Operations Management Suite (OMS) endpoints. You should see an entry for each workspace linked to the Azure Monitor Private Link Scope that's connected with this private endpoint.
-
-[![Screenshot of the private D N S zone for mapping to O M S endpoints.](./media/private-link-security/dns-zone-privatelink-oms-opinsights-azure-com.png)](./media/private-link-security/dns-zone-privatelink-oms-opinsights-azure-com-expanded.png#lightbox)
-
-#### privatelink-ods-opinsights-azure-com
-
-This zone covers workspace-specific mapping to Operational Data Store (ODS) endpoints - the ingestion endpoint of Log Analytics. You should see an entry for each workspace linked to the Azure Monitor Private Link Scope connected with this private endpoint.
-
-[![Screenshot of the private D N S zone for mapping to O D S endpoints.](./media/private-link-security/dns-zone-privatelink-ods-opinsights-azure-com.png)](./media/private-link-security/dns-zone-privatelink-ods-opinsights-azure-com-expanded.png#lightbox)
-
-#### privatelink-agentsvc-azure-automation-net
-
-This zone covers workspace-specific mapping to the agent service automation endpoints. You should see an entry for each workspace linked to the Azure Monitor Private Link Scope connected with this private endpoint.
-
-[![Screenshot of the private D N S zone for mapping to agent service automation endpoints.](./media/private-link-security/dns-zone-privatelink-agentsvc-azure-automation-net.png)](./media/private-link-security/dns-zone-privatelink-agentsvc-azure-automation-net-expanded.png#lightbox)
-
-#### privatelink-blob-core-windows-net
-
-This zone configures connectivity to the global agents' storage account for solution packs. Through it, agents can download new or updated solution packs (also known as management packs). Only one entry is required to handle Log Analytics agents, no matter how many workspaces are used.
-
-[![Screenshot of the private D N S zone for connectivity to the global agents' storage account for solution packs.](./media/private-link-security/dns-zone-privatelink-blob-core-windows-net.png)](./media/private-link-security/dns-zone-privatelink-blob-core-windows-net-expanded.png#lightbox)
-
-> [!NOTE]
-> This entry is only added to Private Link setups created on or after April 19, 2021 (or starting June 2021 on Azure Sovereign clouds).
-
-### Validating that you're communicating over a Private Link connection
-
-To validate that your requests are now sent through the private endpoint, you can review them with a network tracking tool or even your browser. For example, when you're trying to query your workspace or application, make sure the request is sent to the private IP that's mapped to the API endpoint. In this example, it's 172.17.0.9.
-
-> [!NOTE]
-> Some browsers might use [other DNS settings](#browser-dns-settings). Make sure that your DNS settings apply.
-
-To make sure that your workspace or component isn't receiving requests from public networks (not connected through an Azure Monitor Private Link Scope), set the resource's public ingestion and query flags to **No** as explained in [Configure access to your resources](#configure-access-to-your-resources).
-
-From a client on your protected network, use `nslookup` for any of the endpoints listed in your DNS zones. Your DNS server should resolve it to the mapped private IPs instead of the public IPs that are used by default.
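For example, from a machine on the protected network (a sketch: `<workspace-id>` is a placeholder, and the helper below is an illustrative check for RFC 1918 private addresses, not part of any Azure tooling):

```shell
# Resolve the workspace's ODS (ingestion) endpoint; with Private Link DNS in place,
# the answer should come from your VNet's private pool, not a public Azure IP:
#   nslookup <workspace-id>.ods.opinsights.azure.com

# Illustrative helper: classify a resolved IPv4 address as RFC 1918 private or public.
is_private_ip() {
  case "$1" in
    10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) echo "private" ;;
    *) echo "public" ;;
  esac
}

is_private_ip 172.17.0.9   # prints "private" - the example private IP used in this article
is_private_ip 40.79.0.1    # prints "public"
```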
-
-## Use APIs and the command line
-
-You can automate the process by using Azure Resource Manager templates, the REST API, and command-line interfaces.
-
-To create and manage Azure Monitor Private Link Scopes, use the [REST API](/rest/api/monitor/privatelinkscopes(preview)/private%20link%20scoped%20resources%20(preview)) or the [Azure CLI (az monitor private-link-scope)](/cli/azure/monitor/private-link-scope).
-
-To manage the network access flag on your workspace or component, use the flags `[--ingestion-access {Disabled, Enabled}]` and `[--query-access {Disabled, Enabled}]` on [Log Analytics workspaces](/cli/azure/monitor/log-analytics/workspace) or [Application Insights components](/cli/azure/ext/application-insights/monitor/app-insights/component).
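Putting the commands above together, a hedged sketch (resource names are placeholders, and it assumes you're signed in with the Azure CLI) might look like:

```shell
# Create an AMPLS (resource names here are illustrative).
az monitor private-link-scope create --name my-scope --resource-group my-rg

# Connect an existing Log Analytics workspace to the scope.
az monitor private-link-scope scoped-resource create \
    --name my-workspace-connection \
    --resource-group my-rg \
    --scope-name my-scope \
    --linked-resource <workspace-resource-id>

# Deny ingestion and queries from public networks for the workspace.
az monitor log-analytics workspace update \
    --resource-group my-rg \
    --workspace-name my-workspace \
    --ingestion-access Disabled \
    --query-access Disabled
```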
-
-### Example Azure Resource Manager template
-
-The following Azure Resource Manager template creates:
-
-* An Azure Monitor Private Link Scope named *my-scope*.
-* A Log Analytics workspace named *my-workspace*.
-* A scoped resource named *my-workspace-connection* added to the *my-scope* Azure Monitor Private Link Scope.
-
-```
-{
-    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "private_link_scope_name": {
- "defaultValue": "my-scope",
- "type": "String"
- },
- "workspace_name": {
- "defaultValue": "my-workspace",
- "type": "String"
- }
- },
- "variables": {},
- "resources": [
- {
- "type": "microsoft.insights/privatelinkscopes",
- "apiVersion": "2019-10-17-preview",
- "name": "[parameters('private_link_scope_name')]",
- "location": "global",
- "properties": {}
- },
- {
- "type": "microsoft.operationalinsights/workspaces",
- "apiVersion": "2020-10-01",
- "name": "[parameters('workspace_name')]",
- "location": "westeurope",
- "properties": {
- "sku": {
- "name": "pergb2018"
- },
- "publicNetworkAccessForIngestion": "Enabled",
- "publicNetworkAccessForQuery": "Enabled"
- }
- },
- {
- "type": "microsoft.insights/privatelinkscopes/scopedresources",
- "apiVersion": "2019-10-17-preview",
- "name": "[concat(parameters('private_link_scope_name'), '/', concat(parameters('workspace_name'), '-connection'))]",
- "dependsOn": [
- "[resourceId('microsoft.insights/privatelinkscopes', parameters('private_link_scope_name'))]",
- "[resourceId('microsoft.operationalinsights/workspaces', parameters('workspace_name'))]"
- ],
- "properties": {
- "linkedResourceId": "[resourceId('microsoft.operationalinsights/workspaces', parameters('workspace_name'))]"
- }
- }
- ]
-}
-```
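One way to deploy the template above (assuming it's saved locally as `ampls.json` and the target resource group already exists; both names are placeholders):

```shell
az deployment group create --resource-group my-rg --template-file ampls.json
```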
-
-## Collect custom logs and IIS logs over Private Link
-
-Storage accounts are used in the ingestion process of custom logs. By default, the process uses service-managed storage accounts. To ingest custom logs on Private Link, you must use your own storage accounts and associate them with Log Analytics workspaces. You can set up such accounts by using the [command line](/cli/azure/monitor/log-analytics/workspace/linked-storage).
-
-For more information on bringing your own storage account, see [Customer-owned storage accounts for log ingestion](private-storage.md).
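As a hedged sketch of linking your own storage account for custom-log ingestion with the command line referenced above (all names are placeholders):

```shell
# Associate a customer-owned storage account with the workspace for custom log ingestion.
az monitor log-analytics workspace linked-storage create \
    --type CustomLogs \
    --resource-group my-rg \
    --workspace-name my-workspace \
    --storage-accounts <storage-account-resource-id>
```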
-
-## Restrictions and limitations
-
-### Azure Monitor Private Link Scope
-
-When you're planning your Private Link setup, consider the [limits on the Azure Monitor Private Link Scope object](#azure-monitor-private-link-scope-limits).
-
-### Agents
-
-To support secure ingestion to Log Analytics workspaces, you must use the latest versions of the Windows and Linux agents. Older versions can't upload monitoring data over a private network.
-
-- **Log Analytics Windows agent**: Use Log Analytics agent version 10.20.18038.0 or later.
-- **Log Analytics Linux agent**: Use agent version 1.12.25 or later. If you can't, run the following commands on your VM:
-
-    ```bash
-    $ sudo /opt/microsoft/omsagent/bin/omsadmin.sh -X
-    $ sudo /opt/microsoft/omsagent/bin/omsadmin.sh -w <workspace id> -s <workspace key>
-    ```
-
-### Azure portal
-
-To use Azure Monitor portal experiences such as Application Insights and Log Analytics, you need to allow the Azure portal and Azure Monitor extensions to be accessible on the private networks. Add `AzureActiveDirectory`, `AzureResourceManager`, `AzureFrontDoor.FirstParty`, and `AzureFrontdoor.Frontend` [service tags](../../firewall/service-tags.md) to your network security group.
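As a sketch, an outbound rule for one of those service tags can be added with the Azure CLI. The NSG name, rule name, and priority below are illustrative; NSG rules accept a single service tag per rule, so repeat for each tag.

```shell
# Allow outbound HTTPS to the AzureActiveDirectory service tag (one rule per tag).
az network nsg rule create \
    --resource-group my-rg \
    --nsg-name my-nsg \
    --name Allow-AzureActiveDirectory \
    --priority 200 \
    --direction Outbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges 443 \
    --destination-address-prefixes AzureActiveDirectory
```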
-
-### Querying data
-
-The [`externaldata` operator](/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuremonitor) isn't supported over a Private Link connection. It reads data from storage accounts but doesn't guarantee that the storage is accessed privately.
-
-### Programmatic access
-
-To use the REST API, the [CLI](/cli/azure/monitor), or PowerShell with Azure Monitor on private networks, add the [service tags](../../virtual-network/service-tags-overview.md) `AzureActiveDirectory` and `AzureResourceManager` to your firewall.
-
-### Application Insights SDK downloads from a content delivery network
-
-Bundle the JavaScript code in your script so that the browser doesn't try to download code from a content delivery network. An example is provided on [GitHub](https://github.com/microsoft/ApplicationInsights-JS#npm-setup-ignore-if-using-snippet-setup).
+### Azure Monitor Private Link applies to all networks that share the same DNS
+Some networks are composed of multiple VNets or other connected networks. If these networks share the same DNS, setting up a Private Link on any of them would update the DNS and affect traffic across all networks. That's especially important to note due to the "All or Nothing" behavior described above.
-### Browser DNS settings
+![Diagram of DNS overrides in multiple VNets](./media/private-link-security/dns-overrides-multiple-vnets.png)
-If you're connecting to your Azure Monitor resources over a Private Link connection, traffic to these resources must go through the private endpoint that's configured on your network. To enable the private endpoint, update your DNS settings as explained in [Connect to a private endpoint](#connect-to-a-private-endpoint).
+In the above diagram, VNet 10.0.1.x first connects to AMPLS1 and maps the Azure Monitor global endpoints to IPs from its range. Later, VNet 10.0.2.x connects to AMPLS2, and overrides the DNS mapping of the *same global endpoints* with IPs from its range. Since these VNets aren't peered, the first VNet now fails to reach these endpoints.
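The override in the diagram above can be modeled as a toy name-to-IP map (illustrative only: the endpoint name and IP addresses below are made up, and real Private Link DNS uses `privatelink.*` zones rather than a plain dictionary):

```python
# Toy model of the shared-DNS override behavior described above.
shared_dns = {}

def connect_ampls(dns, endpoint, private_ip):
    """Connecting a network to an AMPLS writes a DNS record for the global
    endpoint; a later connection silently replaces the earlier mapping."""
    dns[endpoint] = private_ip

connect_ampls(shared_dns, "monitor.azure.com", "10.0.1.5")  # VNet 10.0.1.x via AMPLS1
connect_ampls(shared_dns, "monitor.azure.com", "10.0.2.5")  # VNet 10.0.2.x via AMPLS2

# Both VNets now resolve the endpoint to 10.0.2.5; the unpeered 10.0.1.x
# network can no longer reach it.
```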
-Some browsers use their own DNS settings instead of the ones you set. The browser might try to connect to Azure Monitor public endpoints and bypass the Private Link connection entirely. Verify that your browser's settings don't override or cache old DNS settings.
## Next steps
+- [Design your Private Link setup](private-link-design.md)
+- Learn how to [configure your Private Link](private-link-configure.md)
-- Learn about [private storage](private-storage.md).
+<h3><a id="connect-to-a-private-endpoint"></a></h3>
azure-monitor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Monitor description: Sample Azure Resource Graph queries for Azure Monitor showing use of resource types and tables to access Azure Monitor related resources and properties. Previously updated : 08/04/2021 Last updated : 08/09/2021
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/usage-estimated-costs.md
If you're not yet using Azure Monitor Logs, you can use the [Azure Monitor prici
In each of these, the pricing calculator will help you estimate your likely costs based on your expected utilization.
-For example, with Log Analytics you can enter the number of VMs and the GB of data you expect to collect from each VM. Typically 1 GB to 3 GB of data month is ingested from a typical Azure VM. If you're already evaluating Azure Monitor Logs already, you can use your data statistics from your own environment. See below for how to determine the [number of monitored VMs](logs/manage-cost-storage.md#understanding-nodes-sending-data) and the [volume of data your workspace is ingesting](logs/manage-cost-storage.md#understanding-ingested-data-volume).
+For example, with Log Analytics you can enter the number of VMs and the GB of data you expect to collect from each VM. Typically, 1 GB to 3 GB of data per month is ingested from a typical Azure VM. If you're already evaluating Azure Monitor Logs, you can use data statistics from your own environment. See below for how to determine the [number of monitored VMs](logs/manage-cost-storage.md#understanding-nodes-sending-data) and the [volume of data your workspace is ingesting](logs/manage-cost-storage.md#understanding-ingested-data-volume).
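The volume arithmetic behind such an estimate is straightforward; here is a minimal sketch (the per-GB price used below is a placeholder, not a real Azure Monitor rate):

```python
# Minimal sketch of the ingestion-volume arithmetic behind a cost estimate.
# The per-GB price is a placeholder, not a real Azure Monitor rate.

def estimate_monthly_ingestion_gb(vm_count, gb_per_vm_per_month):
    """Total Log Analytics ingestion across all monitored VMs, in GB per month."""
    return vm_count * gb_per_vm_per_month

def estimate_monthly_cost(ingestion_gb, price_per_gb):
    """Naive pay-as-you-go estimate; ignores tiers, free allowances, and reservations."""
    return ingestion_gb * price_per_gb

# 50 VMs, each ingesting 2 GB/month (midpoint of the typical 1-3 GB range):
ingestion = estimate_monthly_ingestion_gb(50, 2)
cost = estimate_monthly_cost(ingestion, 2.30)  # placeholder $/GB
```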
Similarly for Application Insights, if you enable the "Estimate data volume based on application activity" functionality, you can provide inputs about your application (requests per month and page views per month, in case you will collect client-side telemetry), and the calculator will tell you the median and 90th percentile amount of data collected by similar applications. These applications span the range of Application Insights configurations (for example, some have default sampling, some have no sampling, and so on), so you still have the control to reduce the volume of data you ingest far below the median level by using sampling. But this is a starting point for understanding what other, similar customers are seeing. [Learn more](app/pricing.md#estimating-the-costs-to-manage-your-application) about estimating costs for Application Insights.
azure-monitor Vminsights Enable Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-enable-hybrid.md
You can download the Dependency agent from these locations:
| File | OS | Version | SHA-256 | |:--|:--|:--|:--|
-| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.9.15340 | 3F34A36CF569724A5C83B1C8DFEC54263B7ABCFEAC9EB9BB40AF822A31265AF7 |
-| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.9.15340 | 0B0566A11A9B218FA6E44B057E7BA93986B8D6539B928C6D36D97D13A2F8B8A6 |
+| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.10.16690 | 8E4EAFFF74F64B71AD3A6EB94EA6A57A719D77487153AEC1BEC05FFF849DA7BE |
+| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.10.16690 | F2C648762620C00545F02904D3C58971338B28FBA129069B0CDEBC994591762E |
## Install the Dependency agent on Windows
azure-netapp-files Azure Netapp Files Manage Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-manage-snapshots.md
na ms.devlang: na Previously updated : 07/12/2021 Last updated : 08/12/2021

# Manage snapshots by using Azure NetApp Files
You can create volume snapshots on demand.
You can schedule for volume snapshots to be taken automatically by using snapshot policies. You can also modify a snapshot policy as needed, or delete a snapshot policy that you no longer need.
-### Register the feature
-
-The **snapshot policy** feature is currently in preview. If you are using this feature for the first time, you need to register the feature first.
-
-1. Register the feature:
-
- ```azurepowershell-interactive
- Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSnapshotPolicy
- ```
-
-2. Check the status of the feature registration:
-
- > [!NOTE]
- > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing.
- ```azurepowershell-interactive
- Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSnapshotPolicy
- ```
-You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
- ### Create a snapshot policy A snapshot policy enables you to specify the snapshot creation frequency in hourly, daily, weekly, or monthly cycles. You also need to specify the maximum number of snapshots to retain for the volume.
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/whats-new.md
na ms.devlang: na Previously updated : 08/06/2021 Last updated : 08/12/2021
Azure NetApp Files is updated regularly. This article provides a summary about t
## August 2021
+* [Snapshot policy](azure-netapp-files-manage-snapshots.md#manage-snapshot-policies) now generally available (GA)
+
+ The snapshot policy feature is now generally available. You no longer need to register the feature before using it.
+ * [NFS `Chown Mode` export policy and UNIX export permissions](configure-unix-permissions-change-ownership-mode.md) (Preview) You can now set the Unix permissions and the change ownership mode (`Chown Mode`) options on Azure NetApp Files NFS volumes or dual-protocol volumes with the Unix security style. You can specify these settings during volume creation or after volume creation.
azure-percept Audio Button Led Behavior https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/audio-button-led-behavior.md
Title: Azure Percept Audio button and LED behavior
+ Title: Azure Percept Audio button and LED states
description: Learn more about the button and LED states of Azure Percept Audio
Last updated 03/25/2021
-# Azure Percept Audio button and LED behavior
+# Azure Percept Audio button and LED states
See the following guidance for information on the button and LED states of the Azure Percept Audio device.
Use the buttons to control the behavior of the device.
|Mute|Press to mute/unmute the mic array. The button event is release-triggered when pressed.|
|PTT/PTS|Press PTT to bypass the keyword spotting state and activate the command listening state. Press again to stop the agent's active dialogue and revert to the keyword spotting state. The button event is release-triggered when pressed. PTS only works when the button is pressed while the agent is speaking, not when the agent is listening or thinking.|
-## LED behavior
+## LED states
Use the LED indicators to understand which state your device is in.
azure-percept Connect Over Cellular https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/connect-over-cellular.md
Title: Connect Azure Percept Over 5G or LTE Networks
+ Title: Connect Azure Percept over 5G or LTE networks
description: This article explains how to connect the Azure Percept DK over 5G or LTE networks.
azure-percept Dev Tools Installer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/dev-tools-installer.md
Title: Azure Percept Dev Tools Pack Installer overview
+ Title: Install Azure Percept development tools
description: Learn more about using the Dev Tools Pack Installer to accelerate advanced development with Azure Percept
azure-percept How To Configure Voice Assistant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-configure-voice-assistant.md
Title: Configure voice assistant application using Azure IoT Hub
-description: Configure voice assistant application using Azure IoT Hub
+ Title: Configure your voice assistant application using Azure IoT Hub
+description: Configure your voice assistant application using Azure IoT Hub
Last updated 02/15/2021
-# Configure voice assistant application using Azure IoT Hub
+# Configure your voice assistant application using Azure IoT Hub
This article describes how to configure your voice assistant application using IoT Hub. For a step-by-step tutorial for the process of creating a voice assistant, see [Build a no-code voice assistant with Azure Percept Studio and Azure Percept Audio](./tutorial-no-code-speech.md).
azure-percept How To Select Update Package https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-select-update-package.md
Title: Select the best update package for your Azure Percept DK
+ Title: Select your Azure Percept DK update package
description: How to identify your Azure Percept DK version and select the best update package for it
azure-percept How To Update Over The Air https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-update-over-the-air.md
Title: Update your Azure Percept DK over-the-air (OTA)
+ Title: Update your Azure Percept DK using over-the-air (OTA) updates
description: Learn how to receive over-the air (OTA) updates to your Azure Percept DK
Last updated 03/30/2021
-# Update your Azure Percept DK over-the-air (OTA)
+# Update your Azure Percept DK using over-the-air (OTA) updates
Follow this guide to learn how to update the OS and firmware of the carrier board of your Azure Percept DK over-the-air (OTA) with Device Update for IoT Hub.
azure-percept How To View Video Stream https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-view-video-stream.md
Title: View your Azure Percept DK's RTSP video stream
+ Title: View your Azure Percept DK RTSP video stream
description: Learn how to view the RTSP video stream from Azure Percept DK
Last updated 02/12/2021
-# View your Azure Percept DK's RTSP video stream
+# View your Azure Percept DK RTSP video stream
Follow this guide to view the RTSP video stream from the Azure Percept DK within Azure Percept Studio. Inferencing from vision AI models deployed to your device will be viewable in the web stream.
azure-percept Overview 8020 Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-8020-integration.md
Title: Azure Percept DK 80/20 integration
+ Title: Azure Percept DK and 80/20 integration
description: Learn more about how Azure Percept DK integrates with the 80/20 railing system.
azure-percept Overview Advanced Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-advanced-code.md
Title: Azure Percept Advanced Development
+ Title: Azure Percept advanced development
description: Learn more about advanced development tools on Azure Percept
azure-percept Overview Ai Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-ai-models.md
Title: Azure Percept AI models
+ Title: Azure Percept sample AI models
description: Learn more about the AI models available for prototyping and deployment
Last updated 03/23/2021
-# Azure Percept AI models
+# Azure Percept sample AI models
Azure Percept enables you to develop and deploy AI models directly to your [Azure Percept DK](./overview-azure-percept-dk.md) from [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819). Model deployment utilizes [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) and [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/#iotedge-overview).
azure-percept Quickstart Percept Dk Unboxing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/quickstart-percept-dk-unboxing.md
Title: Unbox and assemble your Azure Percept DK components
+ Title: Unbox and assemble your Azure Percept DK
description: Learn how to unbox, connect, and power on your Azure Percept DK
Last updated 02/16/2021
-# Quickstart: unbox and assemble your Azure Percept DK components
+# Quickstart: unbox and assemble your Azure Percept DK
Once you have received your Azure Percept DK, reference this guide for information on connecting the components and powering on the device.
azure-percept Troubleshoot Dev Kit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/troubleshoot-dev-kit.md
Title: Troubleshoot general issues with Azure Percept DK and IoT Edge
-description: Get troubleshooting tips for some of the more common issues with Azure Percept DK
+ Title: Troubleshoot issues with Azure Percept DK
+description: Get troubleshooting tips for some of the more common issues with Azure Percept DK and IoT Edge
Last updated 03/25/2021-+

# Azure Percept DK troubleshooting
-See the guidance below for general troubleshooting tips for the Azure Percept DK.
-
-## General troubleshooting commands
-
-To run these commands, [SSH into the dev kit](./how-to-ssh-into-percept-dk.md) and enter the commands into the SSH client prompt.
-
-To redirect any output to a .txt file for further analysis, use the following syntax:
-
-```console
-sudo [command] > [file name].txt
-```
-
-Change the permissions of the .txt file so it can be copied:
-
-```console
-sudo chmod 666 [file name].txt
-```
-
-After redirecting output to a .txt file, copy the file to your host PC via SCP:
-
-```console
-scp [remote username]@[IP address]:[remote file path]/[file name].txt [local host file path]
-```
-
-```[local host file path]``` refers to the location on your host PC that you would like to copy the .txt file to. ```[remote username]``` is the SSH username chosen during the [setup experience](./quickstart-percept-dk-set-up.md).
-
-For additional information on the Azure IoT Edge commands, see the [Azure IoT Edge device troubleshooting documentation](../iot-edge/troubleshoot.md).
-
-|Category: |Command: |Function: |
+This troubleshooting article helps Azure Percept DK users quickly resolve common issues with their dev kits. It also provides guidance on collecting logs for cases where extra support is needed.
+
+## Log collection
+In this section, you'll get guidance on which logs to collect and how to collect them.
+
+### How to collect logs
+1. Connect to your dev kit [over SSH](./how-to-ssh-into-percept-dk.md).
+1. Run the needed commands in the SSH terminal window. See the next section for the list of log collection commands.
+1. To redirect any output to a .txt file for further analysis, use the following syntax:
+ ```console
+ sudo [command] > [file name].txt
+ ```
+1. Change the permissions of the .txt file so it can be copied:
+ ```console
+ sudo chmod 666 [file name].txt
+ ```
+1. Copy the file to your host PC via SCP:
+ ```console
+ scp [remote username]@[IP address]:[remote file path]/[file name].txt [local host file path]
+ ```
+
+ ```[local host file path]``` refers to the location on your host PC that you would like to copy the .txt file to. ```[remote username]``` is the SSH username chosen during the [setup experience](./quickstart-percept-dk-set-up.md).
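The run-redirect-chmod steps above can also be scripted. The sketch below is illustrative (the command and file name are placeholders); it assumes you run it on the dev kit itself:

```python
import os
import subprocess

def collect_log(command, out_file):
    """Sketch of the steps above: run a log-collection command, write its
    output to a .txt file, and relax the file's permissions so it can be
    copied off the device (same effect as `sudo chmod 666 <file>.txt`)."""
    with open(out_file, "w") as f:
        subprocess.run(command, stdout=f, check=True)
    os.chmod(out_file, 0o666)

# Example (hypothetical file name; on the dev kit you would pass an iotedge
# or journalctl command instead of echo):
collect_log(["echo", "sample log line"], "/tmp/devkit_log.txt")
```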
+
+### Log types and commands
+
+|Log purpose |When to collect it |Command |
+|--||-|
+|*Support bundle* - provides a set of logs needed for most customer support requests.|Collect whenever requesting support.|```sudo iotedge support-bundle --since 1h``` <br><br>*"--since 1h" can be changed to any time span, for example, "6h" (6 hours), "6d" (6 days) or "6m" (6 minutes)*|
+|*OOBE logs* - records details about the setup experience.|Collect when you find issues during the setup experience.|```sudo journalctl -u oobe -b```|
+|*edgeAgent logs* - records the version numbers of all modules running on your device.|Collect when one or more modules aren't working.|```sudo iotedge logs edgeAgent```|
+|*Module container logs* - records details about specific IoT Edge module containers.|Collect when you find issues with a specific module.|```sudo iotedge logs [container name]```|
+|*Wi-Fi access point logs* - records details about the connection to the dev kit's Wi-Fi access point.|Collect when you find issues connecting to the dev kit's Wi-Fi access point.|```sudo journalctl -u hostapd.service```|
+|*Network logs* - a set of logs covering Wi-Fi services and the network stack.|Collect when you find Wi-Fi or network issues.|```sudo journalctl -u hostapd.service -u wpa_supplicant.service -u ztpd.service -u systemd-networkd > network_log.txt```<br><br>```cat /etc/os-release && cat /etc/os-subrelease && cat /etc/adu-version && rpm -q ztpd > system_ver.txt```<br><br>Run both commands. Each command collects multiple logs and puts them into a single output.|
+
+## Troubleshooting commands
+Here's a set of commands that can be used for troubleshooting issues you may find with the dev kit. To run these commands, you must first connect to your dev kit [over SSH](./how-to-ssh-into-percept-dk.md).
+
+For more information on the Azure IoT Edge commands, see the [Azure IoT Edge device troubleshooting documentation](../iot-edge/troubleshoot.md).
+
+|Function |When to use |Command |
||-||
-|OS |```cat /etc/os-release``` |check Mariner image version |
-|OS |```cat /etc/os-subrelease``` |check derivative image version |
-|OS |```cat /etc/adu-version``` |check ADU version |
-|Temperature |```cat /sys/class/thermal/thermal_zone0/temp``` |check temperature of devkit |
-|Wi-Fi |```sudo journalctl -u hostapd.service``` |check SoftAP logs|
-|Wi-Fi |```sudo journalctl -u wpa_supplicant.service``` |check Wi-Fi services logs |
-|Wi-Fi |```sudo journalctl -u ztpd.service``` |check Wi-Fi Zero Touch Provisioning Service logs |
-|Wi-Fi |```sudo journalctl -u systemd-networkd``` |check Mariner Network stack logs |
-|Wi-Fi |```sudo cat /etc/hostapd/hostapd-wlan1.conf``` |check wifi access point configuration details |
-|OOBE |```sudo journalctl -u oobe -b``` |check OOBE logs |
-|Telemetry |```sudo azure-device-health-id``` |find unique telemetry HW_ID |
-|Azure IoT Edge |```sudo iotedge check``` |run configuration and connectivity checks for common issues |
-|Azure IoT Edge |```sudo iotedge logs [container name]``` |check container logs, such as speech and vision modules |
-|Azure IoT Edge |```sudo iotedge support-bundle --since 1h``` |collect module logs, Azure IoT Edge security manager logs, container engine logs, ```iotedge check``` JSON output, and other useful debug information from the past hour |
-|Azure IoT Edge |```sudo journalctl -u iotedge -f``` |view the logs of the Azure IoT Edge security manager |
-|Azure IoT Edge |```sudo systemctl restart iotedge``` |restart the Azure IoT Edge security daemon |
-|Azure IoT Edge |```sudo iotedge list``` |list the deployed Azure IoT Edge modules |
-|Other |```df [option] [file]``` |display information on available/total space in specified file system(s) |
-|Other |`ip route get 1.1.1.1` |display device IP and interface information |
-|Other |<code>ip route get 1.1.1.1 &#124; awk '{print $7}'</code> <br> `ifconfig [interface]` |display device IP address only |
--
-The ```journalctl``` Wi-Fi commands can be combined into the following single command:
-
-```console
-sudo journalctl -u hostapd.service -u wpa_supplicant.service -u ztpd.service -u systemd-networkd -b
-```
-
-## Docker troubleshooting commands
-
-|Command: |Function: |
-|--||
-|```sudo docker ps``` |[shows which containers are running](https://docs.docker.com/engine/reference/commandline/ps/) |
-|```sudo docker images``` |[shows which images are on the device](https://docs.docker.com/engine/reference/commandline/images/)|
-|```sudo docker rmi [image id] -f``` |[deletes an image from the device](https://docs.docker.com/engine/reference/commandline/rmi/) |
-|```sudo docker logs -f edgeAgent``` <br> ```sudo docker logs -f [module_name]``` |[takes container logs of specified module](https://docs.docker.com/engine/reference/commandline/logs/) |
-|```sudo docker image prune``` |[removes all dangling images](https://docs.docker.com/engine/reference/commandline/image_prune/) |
-|```sudo watch docker ps``` <br> ```watch ifconfig [interface]``` |check docker container download status |
+|Checks the software version on the dev kit.|Use anytime you need to confirm which software version is on your dev kit.|```cat /etc/adu-version```|
+|Checks the temperature of the dev kit.|Use in cases where you think the dev kit might be overheating.|```cat /sys/class/thermal/thermal_zone0/temp```|
+|Checks the dev kit's telemetry ID.|Use in cases where you need to know the dev kit's unique telemetry identifier.|```sudo azure-device-health-id```|
+|Checks the status of IoT Edge.|Use whenever there are issues with IoT Edge modules connecting to the cloud.|```sudo iotedge check```|
+|Restarts the Azure IoT Edge security daemon.|Use when IoT Edge is unresponsive or not working correctly.|```sudo systemctl restart iotedge```|
+|Lists the deployed Azure IoT Edge modules.|Use when you need to see all of the modules deployed on the dev kit.|```sudo iotedge list```|
+|Displays the available/total space in the specified file system(s).|Use if you need to know the available storage on the dev kit.|```df [option] [file]```|
+|Displays the dev kit's IP and interface information.|Use when you need to know the dev kit's IP address.|`ip route get 1.1.1.1`|
+|Displays the dev kit's IP address only.|Use when you only want the dev kit's IP address and not the other interface information.|<code>ip route get 1.1.1.1 &#124; awk '{print $7}'</code> <br> `ifconfig [interface]`|
## USB update errors

|Error: |Solution: |
||--|
-|LIBUSB_ERROR_XXX during USB flash via UUU |This error is the result of a USB connection failure during UUU updating. If the USB cable is not properly connected to the USB ports on the PC or the Percept DK carrier board, an error of this form will occur. Try unplugging and reconnecting both ends of the USB cable and jiggling the cable to ensure a secure connection. This almost always solves the issue. |
+|LIBUSB_ERROR_XXX during USB flash via UUU |This error is the result of a USB connection failure during UUU updating. If the USB cable isn't properly connected to the USB ports on the PC or the Percept DK carrier board, an error of this form will occur. Try unplugging and reconnecting both ends of the USB cable and jiggling the cable to ensure a secure connection.|
## Clearing hard drive space on the Azure Percept DK
-There are two components that take up the hard drive space on the Azure Percept DK, the docker container logs and the docker containers themselves. To ensure the container logs do not take up all fo the hard space, the Azure Percept DK has log rotation built in. This will rotate out any old logs as new logs get generated.
+There are two components that take up hard drive space on the Azure Percept DK: the docker container logs and the docker containers themselves. To ensure the container logs don't take up all of the hard drive space, the Azure Percept DK has built-in log rotation, which rotates out old logs as new logs are generated.
-For situations where the number of docker containers cause hard drive space issues you can delete unused containers by following these steps:
+For situations where the number of docker containers causes hard drive space issues, you can delete unused containers by following these steps:
1. [SSH into the dev kit](./how-to-ssh-into-percept-dk.md)
1. Run this command: `docker system prune`
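For context on the log rotation mentioned above, here is a generic sketch of how rotation of this kind typically works (illustrative only, not the dev kit's actual mechanism; file names are hypothetical):

```python
import os
import shutil

def rotate_log(path, max_backups=3):
    """Illustrative log rotation: the live log becomes path.1, existing
    backups shift up by one, and the backup past max_backups is dropped.
    Generic sketch, not the Azure Percept DK's actual implementation."""
    oldest = f"{path}.{max_backups}"
    if os.path.exists(oldest):
        os.remove(oldest)  # oldest backup rotates out
    for i in range(max_backups - 1, 0, -1):
        src = f"{path}.{i}"
        if os.path.exists(src):
            shutil.move(src, f"{path}.{i + 1}")
    if os.path.exists(path):
        shutil.move(path, f"{path}.1")  # fresh writes recreate the live log
```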
azure-resource-manager Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Resource Manager description: Sample Azure Resource Graph queries for Azure Resource Manager showing use of resource types and tables to access Azure Resource Manager related resources and properties. Previously updated : 08/04/2021 Last updated : 08/09/2021
azure-sql-edge Date Bucket Tsql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/date-bucket-tsql.md
DATE_BUCKET (datePart, number, date, origin)
The part of *date* that is used with the 'number' parameter. For example: year, month, minute, second.

> [!NOTE]
-> `DATE_BUCKET` does not accept user-defined variable equivalents for the *datepPart* arguments.
+> `DATE_BUCKET` does not accept user-defined variable equivalents for the *datePart* arguments.
|*datePart*|Abbreviations|
|||
An expression that can resolve to one of the following values:
+ **date**
+ **datetime**
-+ **datetimeoffset**
+ **datetime2**
++ **datetimeoffset**
+ **smalldatetime**
+ **time**
An optional expression that can resolve to one of the following values:
+ **date**
+ **datetime**
-+ **datetimeoffset**
+ **datetime2**
++ **datetimeoffset**
+ **smalldatetime**
+ **time**
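To make the bucketing semantics concrete, here is a minimal Python emulation of `DATE_BUCKET` for the day *datePart* only (a sketch of the semantics, not SQL Edge's implementation; the default origin is an assumption):

```python
from datetime import datetime, timedelta

def date_bucket_day(number, date, origin=datetime(1900, 1, 1)):
    """Emulates DATE_BUCKET(day, number, date, origin): returns the start of
    the number-day-wide bucket, counted from origin, that contains date."""
    bucket_index = (date - origin).days // number
    return origin + timedelta(days=bucket_index * number)

# All dates inside the same 7-day window map to the same bucket start:
start = date_bucket_day(7, datetime(2020, 1, 3), origin=datetime(2020, 1, 1))
```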
azure-sql Arm Templates Content Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/arm-templates-content-guide.md
The following table includes links to Azure Resource Manager templates for Azure
| [Network environment for SQL Managed Instance](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.sql/sql-managed-instance-azure-environment) | This deployment will create a configured Azure virtual network with two subnets, one that will be dedicated to your managed instances and another where you can place other resources (for example VMs, App Service environments, etc.). This template will create a properly configured networking environment where you can deploy managed instances. | | [SQL Managed Instance with P2S connection](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.sql/sqlmi-new-vnet-w-point-to-site-vpn) | This deployment will create an Azure virtual network with two subnets, `ManagedInstance` and `GatewaySubnet`. SQL Managed Instance will be deployed in the ManagedInstance subnet. A virtual network gateway will be created in the `GatewaySubnet` subnet and configured for Point-to-Site VPN connection. | | [SQL Managed Instance with a virtual machine](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.sql/sqlmi-new-vnet-w-jumpbox) | This deployment will create an Azure virtual network with two subnets, `ManagedInstance` and `Management`. SQL Managed Instance will be deployed in the `ManagedInstance` subnet. A virtual machine with the latest version of SQL Server Management Studio (SSMS) will be deployed in the `Management` subnet. |
+| [SQL Managed Instance with diagnostic logs enabled](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.sql/sqlmi-new-vnet-w-diagnostic-settings) | This deployment creates an Azure virtual network with a `ManagedInstance` subnet and deploys a managed instance inside it with diagnostic logs enabled. It also deploys an event hub, a diagnostics workspace, and a storage account for sending and storing the instance diagnostic logs. |
azure-sql Auto Failover Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auto-failover-group-overview.md
When you set up a failover group between primary and secondary SQL Managed Insta
- The virtual networks used by the instances of SQL Managed Instance need to be connected through a [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) or [Express Route](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md). When two virtual networks connect through an on-premises network, ensure there is no firewall rule blocking ports 5022, and 11000-11999. Global VNet Peering is supported with the limitation described in the note below. > [!IMPORTANT]
- > [On 9/22/2020 support for global virtual network peering for newly created virtual clusters was announced](https://azure.microsoft.com/en-us/updates/global-virtual-network-peering-support-for-azure-sql-managed-instance-now-available/). It means that global virtual network peering is supported for SQL managed instances created in empty subnets after the announcement date, as well for all the subsequent managed instances created in those subnets. For all the other SQL managed instances peering support is limited to the networks in the same region due to the [constraints of global virtual network peering](../../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). See also the relevant section of the [Azure Virtual Networks frequently asked questions](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) article for more details. To be able to use global virtual network peering for SQL managed instances from virtual clusters created before the announcement date, consider configuring [maintenance window](./maintenance-window.md) on the instances, as it will move the instances into new virtual clusters that support global virtual network peering.
+ > [On 9/22/2020 support for global virtual network peering for newly created virtual clusters was announced](https://azure.microsoft.com/updates/global-virtual-network-peering-support-for-azure-sql-managed-instance-now-available/). It means that global virtual network peering is supported for SQL managed instances created in empty subnets after the announcement date, as well for all the subsequent managed instances created in those subnets. For all the other SQL managed instances peering support is limited to the networks in the same region due to the [constraints of global virtual network peering](../../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). See also the relevant section of the [Azure Virtual Networks frequently asked questions](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) article for more details. To be able to use global virtual network peering for SQL managed instances from virtual clusters created before the announcement date, consider configuring [maintenance window](./maintenance-window.md) on the instances, as it will move the instances into new virtual clusters that support global virtual network peering.
- The two SQL Managed Instance VNets cannot have overlapping IP addresses.
- You need to set up your Network Security Groups (NSG) such that port 5022 and the range 11000-12000 are open inbound and outbound for connections from the subnet of the other managed instance. This allows replication traffic between the instances.
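The port requirement above can be sanity-checked offline. Below is a minimal sketch (not part of any Azure tooling; the tuple-based rule representation is a made-up illustration) that verifies a set of allowed port ranges opens port 5022 and the 11000-12000 range:

```python
# Hypothetical representation of NSG allow rules as (start_port, end_port) tuples.
# Replication traffic between managed instances needs port 5022 and the
# range 11000-12000 open, inbound and outbound.
REQUIRED = [(5022, 5022), (11000, 12000)]

def covers(rules, lo, hi):
    """Return True if every port in [lo, hi] falls inside some allowed range."""
    return all(any(r_lo <= p <= r_hi for r_lo, r_hi in rules)
               for p in range(lo, hi + 1))

def replication_ports_open(rules):
    """Check that the rule set opens all ports replication traffic needs."""
    return all(covers(rules, lo, hi) for lo, hi in REQUIRED)
```

For example, `replication_ports_open([(5022, 5022), (11000, 12000)])` holds, while a rule set missing port 5022 does not; the actual rules, of course, live in your NSG, not in code.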
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automated-backups-overview.md
To perform a restore, see [Restore database from backups](recovery-using-backups
| Operation | Azure portal | Azure PowerShell |
| --- | --- | --- |
| **Change backup retention** | [SQL Database](automated-backups-overview.md?tabs=single-database#change-the-pitr-backup-retention-period-by-using-the-azure-portal) <br/> [SQL Managed Instance](automated-backups-overview.md?tabs=managed-instance#change-the-pitr-backup-retention-period-by-using-the-azure-portal) | [SQL Database](automated-backups-overview.md#change-the-pitr-backup-retention-period-by-using-powershell) <br/>[SQL Managed Instance](/powershell/module/az.sql/set-azsqlinstancedatabasebackupshorttermretentionpolicy) |
-| **Change long-term backup retention** | [SQL Database](long-term-backup-retention-configure.md#configure-long-term-retention-policies)<br/>SQL Managed Instance - N/A | [SQL Database](long-term-backup-retention-configure.md)<br/>[SQL Managed Instance](../managed-instance/long-term-backup-retention-configure.md) |
+| **Change long-term backup retention** | [SQL Database](long-term-backup-retention-configure.md#configure-long-term-retention-policies)<br/> [SQL Managed Instance](../managed-instance/long-term-backup-retention-configure.md#using-the-azure-portal) | [SQL Database](long-term-backup-retention-configure.md)<br/>[SQL Managed Instance](../managed-instance/long-term-backup-retention-configure.md#using-powershell) |
| **Restore a database from a point in time** | [SQL Database](recovery-using-backups.md#point-in-time-restore)<br>[SQL Managed Instance](../managed-instance/point-in-time-restore.md) | [SQL Database](/powershell/module/az.sql/restore-azsqldatabase) <br/> [SQL Managed Instance](/powershell/module/az.sql/restore-azsqlinstancedatabase) |
| **Restore a deleted database** | [SQL Database](recovery-using-backups.md)<br>[SQL Managed Instance](../managed-instance/point-in-time-restore.md#restore-a-deleted-database) | [SQL Database](/powershell/module/az.sql/get-azsqldeleteddatabasebackup) <br/> [SQL Managed Instance](/powershell/module/az.sql/get-azsqldeletedinstancedatabasebackup) |
| **Restore a database from Azure Blob storage** | SQL Database - N/A <br/>SQL Managed Instance - N/A | SQL Database - N/A <br/>[SQL Managed Instance](../managed-instance/restore-sample-database-quickstart.md) |
azure-sql Doc Changes Updates Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/doc-changes-updates-release-notes.md
This table provides a quick comparison for the change in terminology:
| [Azure Active Directory only authentication for Azure SQL](https://techcommunity.microsoft.com/t5/azure-sql/azure-active-directory-only-authentication-for-azure-sql/ba-p/2417673) | Public Preview for Azure Active Directory only authentication on Azure SQL Managed Instance. |
| [Migration with Log Replay Service](../managed-instance/log-replay-service-migrate.md) | Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service. |
| [Maintenance window](./maintenance-window.md) | The maintenance window feature allows you to configure a maintenance schedule. |
-| [Service Broker cross-instance message exchange for Azure SQL Managed Instance](https://azure.microsoft.com/en-gb/updates/service-broker-message-exchange-for-azure-sql-managed-instance-in-public-preview/) | Support for cross-instance message exchange on Azure SQL Managed Instance. |
-| [Long-term backup retention for Azure SQL Managed Instance](https://azure.microsoft.com/en-gb/updates/longterm-backup-retention-ltr-for-azure-sql-managed-instance-in-public-preview/) | Support for Long-term backup retention up to 10 years on Azure SQL Managed Instance. |
-| [Azure Monitor SQL insights for Azure SQL Managed Instance](https://azure.microsoft.com/en-gb/updates/azure-monitor-sql-insights-for-azure-sql-in-public-preview/) | Azure Monitor SQL insights for Azure SQL Managed Instance in public preview |
+| [Service Broker cross-instance message exchange for Azure SQL Managed Instance](https://azure.microsoft.com/updates/service-broker-message-exchange-for-azure-sql-managed-instance-in-public-preview/) | Support for cross-instance message exchange on Azure SQL Managed Instance. |
+| [Long-term backup retention for Azure SQL Managed Instance](https://azure.microsoft.com/updates/longterm-backup-retention-ltr-for-azure-sql-managed-instance-in-public-preview/) | Support for Long-term backup retention up to 10 years on Azure SQL Managed Instance. |
+| [Azure Monitor SQL insights for Azure SQL Managed Instance](https://azure.microsoft.com/updates/azure-monitor-sql-insights-for-azure-sql-in-public-preview/) | Azure Monitor SQL insights for Azure SQL Managed Instance in public preview |
| [Distributed transactions](./elastic-transactions-overview.md) | Distributed transactions across Managed Instances. |
| [Instance pools](../managed-instance/instance-pools-overview.md) | A convenient and cost-efficient way to migrate smaller SQL instances to the cloud. |
| [Transactional Replication](../managed-instance/replication-transactional-overview.md) | Replicate the changes from your tables into other databases in SQL Managed Instance, SQL Database, or SQL Server. Or update your tables when some rows are changed in other instances of SQL Managed Instance or SQL Server. For information, see [Configure replication in Azure SQL Managed Instance](../managed-instance/replication-between-two-instances-configure-tutorial.md). |
This table provides a quick comparison for the change in terminology:
- [Service-aided subnet configuration for Azure SQL Managed Instance now makes use of service tags for user-defined routes](../managed-instance/connectivity-architecture-overview.md) - support for User defined route (UDR) table.
- [Migrate to Managed Instance with Log Replay Service](../managed-instance/log-replay-service-migrate.md) - allows migrating databases from SQL Server to SQL Managed Instance by using Log Replay Service (Public Preview).
- [Maintenance window](./maintenance-window.md) - the maintenance window feature allows you to configure maintenance schedule, see [Maintenance window announcement](https://techcommunity.microsoft.com/t5/azure-sql/maintenance-window-for-azure-sql-database-and-managed-instance/ba-p/2174835) (Public Preview).
-- [Machine Learning Services on Azure SQL Managed Instance now generally available](https://azure.microsoft.com/en-gb/updates/machine-learning-services-on-azure-sql-managed-instance-now-generally-available/) - General availability for Machine Learning Services on Azure SQL Managed Instance.
-- [Service Broker cross-instance message exchange for Azure SQL Managed Instance](https://azure.microsoft.com/en-gb/updates/service-broker-message-exchange-for-azure-sql-managed-instance-in-public-preview/) - support for cross-instance message exchange.
-- [Long-term backup retention for Azure SQL Managed Instance](https://azure.microsoft.com/en-gb/updates/longterm-backup-retention-ltr-for-azure-sql-managed-instance-in-public-preview/) - Support for Long-term backup retention up to 10 years on Azure SQL Managed Instance.
+- [Machine Learning Services on Azure SQL Managed Instance now generally available](https://azure.microsoft.com/updates/machine-learning-services-on-azure-sql-managed-instance-now-generally-available/) - General availability for Machine Learning Services on Azure SQL Managed Instance.
+- [Service Broker cross-instance message exchange for Azure SQL Managed Instance](https://azure.microsoft.com/updates/service-broker-message-exchange-for-azure-sql-managed-instance-in-public-preview/) - support for cross-instance message exchange.
+- [Long-term backup retention for Azure SQL Managed Instance](https://azure.microsoft.com/updates/longterm-backup-retention-ltr-for-azure-sql-managed-instance-in-public-preview/) - Support for Long-term backup retention up to 10 years on Azure SQL Managed Instance.
- [Dynamic data masking granular permissions for Azure SQL Managed Instance](dynamic-data-masking-overview.md) - general availability for Dynamic data masking granular permissions for Azure SQL Managed Instance.
-- [Azure SQL Managed Instance auditing of Microsoft operations](https://azure.microsoft.com/en-gb/updates/azure-sql-auditing-of-microsoft-operations-is-now-generally-available/) - general availability for Azure SQL Managed Instance auditing of Microsoft operations.
-- [Azure Monitor SQL insights for Azure SQL Managed Instance](https://azure.microsoft.com/en-gb/updates/azure-monitor-sql-insights-for-azure-sql-in-public-preview/) - Azure Monitor SQL insights for Azure SQL Managed Instance in public preview.
+- [Azure SQL Managed Instance auditing of Microsoft operations](https://azure.microsoft.com/updates/azure-sql-auditing-of-microsoft-operations-is-now-generally-available/) - general availability for Azure SQL Managed Instance auditing of Microsoft operations.
+- [Azure Monitor SQL insights for Azure SQL Managed Instance](https://azure.microsoft.com/updates/azure-monitor-sql-insights-for-azure-sql-in-public-preview/) - Azure Monitor SQL insights for Azure SQL Managed Instance in public preview.
### SQL Managed Instance H2 2020 updates

-- [Public preview: Auditing of Microsoft support operations in Azure SQL DB and Azure SQL MI](https://azure.microsoft.com/en-us/updates/auditing-of-microsoft-support-operations-in-azure-sql-db-and-azure-sql-mi/) - The auditing of Microsoft support operations capability enables you to audit Microsoft support operations when you need to access your servers and/or databases during a support request to your audit logs destination (Public Preview).
-- [Distributed database transactions spanning multiple Azure SQL Managed Instances](https://azure.microsoft.com/en-us/updates/distributed-database-transactions-spanning-multiple-azure-sql-managed-instances/) - Distributed database transactions spanning multiple Azure SQL Managed Instances have been added to enable frictionless migration of existing applications, as well as development of modern multi-tenant applications relying on vertically or horizontally partitioned database architecture (Public Preview).
-- [Configurable Backup Storage Redundancy option for Azure SQL Managed Instance](https://azure.microsoft.com/en-us/updates/configurable-backup-storage-redundancy-option-for-azure-sql-managed-instance-2/) - Locally redundant storage (LRS) and zone-redundant storage (ZRS) options have been added to backup storage redundancy, providing more flexibility and choice.
-- [Backup storage cost savings for Azure SQL Database and Managed Instance](https://azure.microsoft.com/en-us/updates/backup-storage-cost-savings-for-azure-sql-database-and-managed-instance/) - User can set the PITR backup retention period & automated compression of backups for databases with transparent data encryption(TDE) is now up to 30 percent more efficient in backup storage space consumption.
-- [Azure AD authentication features for Azure SQL MI](https://azure.microsoft.com/en-us/updates/azure-ad-authentication-features-for-azure-sql-db-azure-synapse-analytics-and-azure-sql-managed-instance/) - hese features help automate user creation using Azure AD applications and allow individual Azure AD guest users to be created in SQL Managed Instance (Public Preview).
-- [Global virtual network peering support for Azure SQL Managed Instance](https://azure.microsoft.com/en-us/updates/global-virtual-network-peering-support-for-azure-sql-managed-instance-now-available/)
-- [Hosting catalog databases for all supported versions of SSRS in Azure SQL Managed Instance](https://azure.microsoft.com/en-us/updates/hosting-catalog-databases-for-all-supported-versions-of-ssrs-in-azure-sql-managed-instance/) - Azure SQL Managed Instance can host catalog databases for all supported versions of SQL Server Reporting Services (SSRS).
+- [Public preview: Auditing of Microsoft support operations in Azure SQL DB and Azure SQL MI](https://azure.microsoft.com/updates/auditing-of-microsoft-support-operations-in-azure-sql-db-and-azure-sql-mi/) - The auditing of Microsoft support operations capability enables you to audit Microsoft support operations when you need to access your servers and/or databases during a support request to your audit logs destination (Public Preview).
+- [Distributed database transactions spanning multiple Azure SQL Managed Instances](https://azure.microsoft.com/updates/distributed-database-transactions-spanning-multiple-azure-sql-managed-instances/) - Distributed database transactions spanning multiple Azure SQL Managed Instances have been added to enable frictionless migration of existing applications, as well as development of modern multi-tenant applications relying on vertically or horizontally partitioned database architecture (Public Preview).
+- [Configurable Backup Storage Redundancy option for Azure SQL Managed Instance](https://azure.microsoft.com/updates/configurable-backup-storage-redundancy-option-for-azure-sql-managed-instance-2/) - Locally redundant storage (LRS) and zone-redundant storage (ZRS) options have been added to backup storage redundancy, providing more flexibility and choice.
+- [Backup storage cost savings for Azure SQL Database and Managed Instance](https://azure.microsoft.com/updates/backup-storage-cost-savings-for-azure-sql-database-and-managed-instance/) - Users can set the PITR backup retention period, and automated compression of backups for databases with transparent data encryption (TDE) is now up to 30 percent more efficient in backup storage space consumption.
+- [Azure AD authentication features for Azure SQL MI](https://azure.microsoft.com/updates/azure-ad-authentication-features-for-azure-sql-db-azure-synapse-analytics-and-azure-sql-managed-instance/) - These features help automate user creation using Azure AD applications and allow individual Azure AD guest users to be created in SQL Managed Instance (Public Preview).
+- [Global virtual network peering support for Azure SQL Managed Instance](https://azure.microsoft.com/updates/global-virtual-network-peering-support-for-azure-sql-managed-instance-now-available/)
+- [Hosting catalog databases for all supported versions of SSRS in Azure SQL Managed Instance](https://azure.microsoft.com/updates/hosting-catalog-databases-for-all-supported-versions-of-ssrs-in-azure-sql-managed-instance/) - Azure SQL Managed Instance can host catalog databases for all supported versions of SQL Server Reporting Services (SSRS).
- [Major performance improvements for Azure SQL Database Managed Instances](https://techcommunity.microsoft.com/t5/azure-sql/announcing-major-performance-improvements-for-azure-sql-database/ba-p/1701256)
-- [Enhanced management experience for Azure SQL Managed Instance](https://azure.microsoft.com/en-us/updates/enhanced-management-experience-for-azure-sql-managed-instance/)
+- [Enhanced management experience for Azure SQL Managed Instance](https://azure.microsoft.com/updates/enhanced-management-experience-for-azure-sql-managed-instance/)
- [Machine Learning on Azure SQL Managed Instance in preview](https://techcommunity.microsoft.com/t5/azure-sql/announcing-major-performance-improvements-for-azure-sql-database/ba-p/1701256) - Machine Learning Services with support for R and Python languages now include preview support on Azure SQL Managed Instance (Public Preview).
-- [User-initiated failover for application fault resiliency in Azure SQL Managed Instance is now generally available](https://azure.microsoft.com/en-us/updates/userinitiated-failover-for-application-fault-resiliency-in-azure-sql-managed-instance-is-now-generally-available/) - User-initiated failover is now generally available, providing you with the capability to manually initiate an automatic failover using PowerShell, CLI commands, and API calls.
+- [User-initiated failover for application fault resiliency in Azure SQL Managed Instance is now generally available](https://azure.microsoft.com/updates/userinitiated-failover-for-application-fault-resiliency-in-azure-sql-managed-instance-is-now-generally-available/) - User-initiated failover is now generally available, providing you with the capability to manually initiate an automatic failover using PowerShell, CLI commands, and API calls.
### SQL Managed Instance H2 2019 updates
azure-sql Features Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/features-comparison.md
Previously updated : 07/13/2021
Last updated : 08/12/2021

# Features comparison: Azure SQL Database and Azure SQL Managed Instance
The following table lists the major features of SQL Server and provides informat
| [Operators](/sql/t-sql/language-elements/operators-transact-sql) | Most - see individual operators | Yes - see [T-SQL differences](../managed-instance/transact-sql-tsql-differences-sql-server.md) |
| [Polybase](/sql/relational-databases/polybase/polybase-guide) | No. You can query data in the files placed on Azure Blob Storage using `OPENROWSET` function or use [an external table that references a serverless SQL pool in Synapse Analytics](https://devblogs.microsoft.com/azure-sql/read-azure-storage-files-using-synapse-sql-external-tables/). | No. You can query data in the files placed on Azure Blob Storage using `OPENROWSET` function, [a linked server that references a serverless SQL pool in Synapse Analytics](https://devblogs.microsoft.com/azure-sql/linked-server-to-synapse-sql-to-implement-polybase-like-scenarios-in-managed-instance/), or an external table (in public preview) that references [a serverless SQL pool in Synapse Analytics](https://devblogs.microsoft.com/azure-sql/read-azure-storage-files-using-synapse-sql-external-tables/) or SQL Server. |
| [Query Notifications](/sql/relational-databases/native-client/features/working-with-query-notifications) | No | Yes |
-| [Machine Learning Services](/sql/advanced-analytics/what-is-sql-server-machine-learning)(_Formerly R Services_)| Yes, in [public preview](/sql/advanced-analytics/what-s-new-in-sql-server-machine-learning-services) | No |
+| [Machine Learning Services](/sql/advanced-analytics/what-is-sql-server-machine-learning) (_Formerly R Services_)| Yes, in [public preview](/sql/advanced-analytics/what-s-new-in-sql-server-machine-learning-services) | Yes. See [Machine Learning Services in Azure SQL Managed Instance](../managed-instance/machine-learning-services-overview.md) |
| [Recovery models](/sql/relational-databases/backup-restore/recovery-models-sql-server) | Only Full Recovery that guarantees high availability is supported. Simple and Bulk Logged recovery models are not available. | Only Full Recovery that guarantees high availability is supported. Simple and Bulk Logged recovery models are not available. |
| [Resource governor](/sql/relational-databases/resource-governor/resource-governor) | No | Yes |
| [RESTORE statements](/sql/t-sql/statements/restore-statements-for-restoring-recovering-and-managing-backups-transact-sql) | No | Yes, with mandatory `FROM URL` options for the backups files placed on Azure Blob Storage. See [Restore differences](../managed-instance/transact-sql-tsql-differences-sql-server.md#restore-statement) |
The Azure platform provides a number of PaaS capabilities that are added as an a
| [Point in time database restore](/sql/relational-databases/backup-restore/restore-a-sql-server-database-to-a-point-in-time-full-recovery-model) | Yes - all service tiers other than hyperscale - see [SQL Database recovery](recovery-using-backups.md#point-in-time-restore) | Yes - see [SQL Database recovery](recovery-using-backups.md#point-in-time-restore) |
| Resource pools | Yes, as [Elastic pools](elastic-pool-overview.md) | Yes. A single instance of SQL Managed Instance can have multiple databases that share the same pool of resources. In addition, you can deploy multiple instances of SQL Managed Instance in [instance pools (preview)](../managed-instance/instance-pools-overview.md) that can share the resources. |
| Scaling up or down (online) | Yes, you can either change DTU or reserved vCores or max storage with the minimal downtime. | Yes, you can change reserved vCores or max storage with the minimal downtime. |
-| [SQL Alias](/sql/database-engine/configure-windows/create-or-delete-a-server-alias-for-use-by-a-client) | No, use [DNS Alias](dns-alias-overview.md) | No, use [Clicongf](https://techcommunity.microsoft.com/t5/Azure-Database-Support-Blog/Lesson-Learned-33-How-to-make-quot-cliconfg-quot-to-work-with/ba-p/369022) to set up alias on the client machines. |
+| [SQL Alias](/sql/database-engine/configure-windows/create-or-delete-a-server-alias-for-use-by-a-client) | No, use [DNS Alias](dns-alias-overview.md) | No, use [Cliconfg](https://techcommunity.microsoft.com/t5/Azure-Database-Support-Blog/Lesson-Learned-33-How-to-make-quot-cliconfg-quot-to-work-with/ba-p/369022) to set up alias on the client machines. |
| [SQL Analytics](../../azure-monitor/insights/azure-sql.md) | Yes | Yes |
| [SQL Data Sync](sql-data-sync-sql-server-configure.md) | Yes | No |
| [SQL Server Analysis Services (SSAS)](/sql/analysis-services/analysis-services) | No, [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) is a separate Azure cloud service. | No, [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) is a separate Azure cloud service. |
For more information about Azure SQL Database and Azure SQL Managed Instance, se
- [What is Azure SQL Database?](sql-database-paas-overview.md)
- [What is Azure SQL Managed Instance?](../managed-instance/sql-managed-instance-paas-overview.md)
-- [What is an Azure SQL Managed Instance pool?](../managed-instance/instance-pools-overview.md)
+- [What is an Azure SQL Managed Instance pool?](../managed-instance/instance-pools-overview.md)
azure-sql Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure SQL Database description: Sample Azure Resource Graph queries for Azure SQL Database showing use of resource types and tables to access Azure SQL Database related resources and properties. Previously updated : 08/04/2021 Last updated : 08/09/2021
azure-sql Connect Application Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/connect-application-instance.md
There are two options for connecting virtual networks:
Peering is preferable because it uses the Microsoft backbone network, so from the connectivity perspective, there is no noticeable difference in latency between virtual machines in a peered virtual network and in the same virtual network. Virtual network peering is supported between networks in the same region. Global virtual network peering is also supported, with the limitation described in the note below.

> [!IMPORTANT]
-> [On 9/22/2020 support for global virtual network peering for newly created virtual clusters was announced](https://azure.microsoft.com/en-us/updates/global-virtual-network-peering-support-for-azure-sql-managed-instance-now-available/). It means that global virtual network peering is supported for SQL managed instances created in empty subnets after the announcement date, as well for all the subsequent managed instances created in those subnets. For all the other SQL managed instances peering support is limited to the networks in the same region due to the [constraints of global virtual network peering](../../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). See also the relevant section of the [Azure Virtual Networks frequently asked questions](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) article for more details. To be able to use global virtual network peering for SQL managed instances from virtual clusters created before the announcement date, consider configuring [maintenance window](../database/maintenance-window.md) on the instances, as it will move the instances into new virtual clusters that support global virtual network peering.
+> [On 9/22/2020 support for global virtual network peering for newly created virtual clusters was announced](https://azure.microsoft.com/updates/global-virtual-network-peering-support-for-azure-sql-managed-instance-now-available/). This means that global virtual network peering is supported for SQL managed instances created in empty subnets after the announcement date, as well as for all subsequent managed instances created in those subnets. For all other SQL managed instances, peering support is limited to networks in the same region due to the [constraints of global virtual network peering](../../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). See also the relevant section of the [Azure Virtual Networks frequently asked questions](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) article for more details. To be able to use global virtual network peering for SQL managed instances from virtual clusters created before the announcement date, consider configuring a [maintenance window](../database/maintenance-window.md) on the instances, as it will move the instances into new virtual clusters that support global virtual network peering.
## Connect from on-premises
azure-sql Log Replay Service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/log-replay-service-migrate.md
Azure Blob Storage is used as intermediary storage for backup files between SQL
In migrating databases to a managed instance by using LRS, you can use the following approaches to upload backups to Blob Storage:

- Using SQL Server native [BACKUP TO URL](/sql/relational-databases/backup-restore/sql-server-backup-to-url) functionality
-- Using [AzCopy](../../storage/common/storage-use-azcopy-v10.md) or [Azure Storage Explorer](https://azure.microsoft.com/en-us/features/storage-explorer) to upload backups to a blob container
+- Using [AzCopy](../../storage/common/storage-use-azcopy-v10.md) or [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer) to upload backups to a blob container
- Using Storage Explorer in the Azure portal

### Make backups from SQL Server directly to Blob Storage
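As a sketch of the AzCopy approach listed above, the helper below assembles an `azcopy copy` command line for uploading one backup file to a blob container. The account, container, and SAS token here are placeholder assumptions, not real resources; in practice you would run the resulting command with your own storage account values:

```python
from pathlib import Path

def azcopy_upload_command(backup_path, account, container, sas_token):
    """Build an 'azcopy copy <source> <destination-URL?SAS>' command for one .bak file.

    All names are illustrative placeholders.
    """
    blob_name = Path(backup_path).name
    dest = f"https://{account}.blob.core.windows.net/{container}/{blob_name}?{sas_token}"
    return ["azcopy", "copy", str(backup_path), dest]
```

For example, `azcopy_upload_command("/backups/db1.bak", "myacct", "migration", "sv=placeholder")` yields a command whose destination is the `db1.bak` blob in the `migration` container.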
azure-sql Sql Server To Sql Managed Instance Assessment Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/sql-server-to-sql-managed-instance-assessment-rules.md
The Filestream feature, which allows you to store unstructured data such as text
**Recommendation** Upload the unstructured files to Azure Blob storage and store metadata related to these files (name, type, URL location, storage key, etc.) in Azure SQL Managed Instance. You may have to re-engineer your application to enable streaming blobs to and from Azure SQL Managed Instance. Alternatively, migrate to SQL Server on Azure Virtual Machine.
-More information: [Streaming Blobs To and From SQL Azure blog](https://azure.microsoft.com/en-in/blog/streaming-blobs-to-and-from-sql-azure/)
+More information: [Streaming Blobs To and From SQL Azure blog](https://azure.microsoft.com/blog/streaming-blobs-to-and-from-sql-azure/)
## Heterogeneous MS DTC<a id="MIHeterogeneousMSDTCTransactSQL"></a>
azure-sql Availability Group Load Balancer Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-load-balancer-portal-configure.md
Azure calls the back-end address pool *backend pool*. In this case, the back-end
4. On **Add backend pool**, under **Name**, type a name for the back-end pool.
-5. Under **Virtual machines**, select **Add a virtual machine**.
+5. Under **Virtual machines**, select **Add a virtual machine**. Add only the primary IP address of the VM; do not add any secondary IP addresses.
6. Under **Choose virtual machines**, select **Choose an availability set**, and then specify the availability set that the SQL Server virtual machines belong to.
azure-sql Failover Cluster Instance Premium File Share Manually Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/failover-cluster-instance-premium-file-share-manually-configure.md
You can configure a virtual network name, or a distributed network name for a fa
- Microsoft Distributed Transaction Coordinator (MSDTC) is not supported on Windows Server 2016 and earlier.
- Filestream isn't supported for a failover cluster with a premium file share. To use filestream, deploy your cluster by using [Storage Spaces Direct](failover-cluster-instance-storage-spaces-direct-manually-configure.md) or [Azure shared disks](failover-cluster-instance-azure-shared-disks-manually-configure.md) instead.
- Only registering with the SQL IaaS Agent extension in [lightweight management mode](sql-server-iaas-agent-extension-automate-management.md#management-modes) is supported.
-- Database Snapshots are not currently supported with [Azure Files due to sparse files limitations](/rest/api/storageservices/features-not-supported-by-the-azure-file-service).
+- Database Snapshots are not currently supported with [Azure Files due to sparse files limitations](/rest/api/storageservices/features-not-supported-by-the-azure-file-service).
+- Databases that use the in-memory OLTP feature are not supported on a failover cluster instance deployed with a premium file share. If your business requires in-memory OLTP, consider deploying your FCI with [Azure shared disks](failover-cluster-instance-azure-shared-disks-manually-configure.md) or [Storage Spaces Direct](failover-cluster-instance-storage-spaces-direct-manually-configure.md) instead.
## Next steps
azure-sql Failover Cluster Instance Vnn Azure Load Balancer Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/failover-cluster-instance-vnn-azure-load-balancer-configure.md
Use the [Azure portal](https://portal.azure.com) to create the load balancer:
1. Associate the backend pool with the availability set that contains the VMs.
-1. Under **Target network IP configurations**, select **VIRTUAL MACHINE** and choose the virtual machines that will participate as cluster nodes. Be sure to include all virtual machines that will host the FCI.
+1. Under **Target network IP configurations**, select **VIRTUAL MACHINE** and choose the virtual machines that will participate as cluster nodes. Be sure to include all virtual machines that will host the FCI. Add only the primary IP address of each VM; do not add any secondary IP addresses.
1. Select **OK** to create the backend pool.
azure-sql Sql Agent Extension Manually Register Single Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md
To register your SQL Server VM with the SQL IaaS Agent extension, you must first
1. Open the Azure portal and go to **All Services**.
1. Go to **Subscriptions** and select the subscription of interest.
-1. On the **Subscriptions** page, go to **extensions**.
-1. Enter **sql** in the filter to bring up the SQL-related extensions.
+1. On the **Subscriptions** page, select **Resource providers** under **Settings**.
+1. Enter **sql** in the filter to bring up the SQL-related resource providers.
1. Select **Register**, **Re-register**, or **Unregister** for the **Microsoft.SqlVirtualMachine** provider, depending on your desired action. ![Modify the provider](./media/sql-agent-extension-manually-register-single-vm/select-resource-provider-sql.png)
azure-video-analyzer Deploy Iot Edge Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/deploy-iot-edge-linux-on-windows.md
In this article, you'll learn how to deploy Azure Video Analyzer on an edge devi
The following depicts the overall flow of the document; in five simple steps, you should be all set up to run Azure Video Analyzer on a Windows device that has EFLOW:
-![Diagram of IoT Edge for Linux on Windows (E FLOW).](./media/deploy-iot-edge-linux-on-windows/eflow-updated.png)
+![Diagram of IoT Edge for Linux on Windows (EFLOW).](./media/deploy-iot-edge-linux-on-windows/eflow.png)
1. [Install EFLOW](../../iot-edge/how-to-install-iot-edge-on-windows.md) on your Windows device using PowerShell.
azure-video-analyzer Record Stream Inference Data With Video https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/record-stream-inference-data-with-video.md
Next, browse to the src/cloud-to-device-console-app folder. Here you'll see the
} ```
-Here, `skipSamplesWithoutAnnotation` is set to `false` because the extension node needs to pass through all frames, whether or not they have inference results, to the downstream object tracker node. The object tracker is capable of tracking objects over 15 frames, approximately. If the live video is at a frame rate of 30 frames/sec, that means at least two frames in every second should be sent to the HTTP server for inferencing. Your AI model has a maximum FPS for processing, which is the highest value that `maximumSamplesPerSecond` should be set to.
+Here, `skipSamplesWithoutAnnotation` is set to `false` because the extension node needs to pass through all frames, whether or not they have inference results, to the downstream object tracker node. The object tracker is capable of tracking objects across approximately 15 frames. Your AI model has a maximum FPS for processing, which is the highest value that `maximumSamplesPerSecond` should be set to.
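As an aside on tuning these settings, the relationship between the live video's frame rate, `maximumSamplesPerSecond`, and the tracker's roughly 15-frame reach can be sketched as a quick sanity check. This is illustrative only; the helper names and the simple threshold test are assumptions, not part of the sample topology:

```python
def choose_max_samples_per_second(desired_rate: float, model_max_fps: float) -> float:
    """maximumSamplesPerSecond should never exceed the model's max processing FPS."""
    return min(desired_rate, model_max_fps)

def inference_gap_ok(video_fps: float, samples_per_second: float,
                     tracker_frame_budget: int = 15) -> bool:
    """True if consecutive inferenced frames are close enough together for the
    object tracker (which bridges roughly 15 frames) to stay locked on."""
    return video_fps / samples_per_second <= tracker_frame_budget

print(choose_max_samples_per_second(10, 5))  # 5
print(inference_gap_ok(30, 2))               # True (15-frame gap)
print(inference_gap_ok(30, 1))               # False (30-frame gap)
```

For example, with 30 fps live video, sampling fewer than two frames per second leaves gaps longer than the tracker can bridge.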
## Run the sample program
azure-video-analyzer Track Objects Live Video https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/track-objects-live-video.md
Open the URL for the pipeline topology in a browser, and examine the settings fo
} ```
-Here, `skipSamplesWithoutAnnotation` is set to `false` because the extension node needs to pass through all frames, whether or not they have inference results, to the downstream object tracker node. The object tracker is capable of tracking objects over 15 frames, approximately. If the live video is at a frame rate of 30 frames/sec, that means at least two frames in every second should be sent to the HTTP server for inferencing. Your AI model has a maximum FPS for processing, which is the highest value that `maximumSamplesPerSecond` should be set to.
+Here, `skipSamplesWithoutAnnotation` is set to `false` because the extension node needs to pass through all frames, whether or not they have inference results, to the downstream object tracker node. The object tracker is capable of tracking objects across approximately 15 frames. Your AI model has a maximum FPS for processing, which is the highest value that `maximumSamplesPerSecond` should be set to.
## Run the sample program
azure-video-analyzer Use Line Crossing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/use-line-crossing.md
Open the URL for the pipeline topology in a browser, and examine the settings fo
} ```
-Here, `skipSamplesWithoutAnnotation` is set to `false` because the extension node needs to pass through all frames, whether or not they have inference results, to the downstream object tracker node. The object tracker is capable of tracking objects over 15 frames, approximately. If the live video is at a frame rate of 30 frames/sec, that means at least two frames in every second should be sent to the HTTP server for inferencing. Your AI model has a maximum FPS for processing, which is the highest value that `maximumSamplesPerSecond` should be set to.
+Here, `skipSamplesWithoutAnnotation` is set to `false` because the extension node needs to pass through all frames, whether or not they have inference results, to the downstream object tracker node. The object tracker is capable of tracking objects across approximately 15 frames. Your AI model has a maximum FPS for processing, which is the highest value that `maximumSamplesPerSecond` should be set to.
Also look at the line crossing node parameter placeholders `linecrossingName` and `lineCoordinates`. We have provided default values for these parameters, but you can overwrite them using the operations.json file. Look at how we pass other parameters (for example, the RTSP URL) from the operations.json file to a topology.
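The override mechanism described above amounts to merging topology defaults with values supplied at activation time. A minimal sketch, assuming the simple last-wins merge behavior; the real substitution is performed by the Video Analyzer service, and the sample coordinate values below are made up for illustration:

```python
def resolve_parameters(topology_defaults: dict, operations_overrides: dict) -> dict:
    """Topology defaults are kept unless operations.json supplies a value."""
    return {**topology_defaults, **operations_overrides}

defaults = {
    "linecrossingName": "LineCrossing1",
    "lineCoordinates": "[[0.5,0.1],[0.5,0.9]]",
}
# Values passed via operations.json (like the RTSP URL) win over defaults.
overrides = {"lineCoordinates": "[[0.3,0.2],[0.7,0.8]]"}

resolved = resolve_parameters(defaults, overrides)
print(resolved["lineCoordinates"])   # [[0.3,0.2],[0.7,0.8]]
print(resolved["linecrossingName"])  # LineCrossing1
```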
azure-vmware Attach Disk Pools To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/attach-disk-pools-to-azure-vmware-solution-hosts.md
You'll attach to a disk pool surfaced through an iSCSI target as the VMware data
3. Create and attach an iSCSI datastore in the Azure VMware Solution private cloud cluster using `Microsoft.StoragePool` provided iSCSI target:
- ```azurecli
+ ```bash
   az vmware datastore disk-pool-volume create --name iSCSIDatastore1 --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud --target-id /subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/ResourceGroup1/providers/Microsoft.StoragePool/diskPools/mpio-diskpool/iscsiTargets/mpio-iscsi-target --lun-name lun0
   ```
azure-vmware Install Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/install-vmware-hcx.md
Title: Install VMware HCX in Azure VMware Solution description: Install VMware HCX in your Azure VMware Solution private cloud. Previously updated : 07/30/2021 Last updated : 08/12/2021 # Install and activate VMware HCX in Azure VMware Solution
In this step, you'll download the VMware HCX Connector OVA file, and then you'll
After deploying the VMware HCX Connector OVA on-premises and starting the appliance, you're ready to activate it. First, you'll need to get a license key from the Azure VMware Solution portal, and then you'll activate it in VMware HCX Manager. Finally, you'll need a key for each on-premises HCX connector deployed.
-1. In the Azure VMware Solution portal, go to **Manage** > **Connectivity**, select the **HCX** tab, and select **Add**.
+1. In your Azure VMware Solution private cloud, select **Manage** > **Add-ons** > **Migration using HCX**. Then copy the **Activation key**.
-1. Use the **admin** credentials to sign in to the on-premises VMware HCX Manager at `https://HCXManagerIP:9443`. Make sure to include the `9443` port number with the VMware HCX Manager IP address.
+ :::image type="content" source="media/tutorial-vmware-hcx/hcx-activation-key.png" alt-text="Screenshot showing the activation key.":::
+
+1. Sign in to the on-premises VMware HCX Manager at `https://HCXManagerIP:9443` with the **admin** credentials. Make sure to include the `9443` port number with the VMware HCX Manager IP address.
>[!TIP] >You defined the **admin** user password during the VMware HCX Manager OVA file deployment.
After deploying the VMware HCX Connector OVA on-premises and starting the applia
>[!TIP] >The vCenter server is where you deployed the VMware HCX Connector in your datacenter.
-1. 8. In **Configure SSO/PSC**, provide your Platform Services Controller's FQDN or IP address, and select **Continue**.
+1. In **Configure SSO/PSC**, provide your Platform Services Controller's FQDN or IP address, and select **Continue**.
>[!NOTE] >Typically, it's the same as your vCenter FQDN or IP address.
You can uninstall HCX Advanced through the portal, which removes the existing pa
1. Ensure that L2 extensions are no longer needed or the networks have been "unstretched" to the destination.
-1. 3. For workloads using MON, ensure that youΓÇÖve removed the default gateways. Otherwise, it may result in workloads not being able to communicate or function.
+1. For workloads using MON, ensure that you've removed the default gateways. Otherwise, workloads may not be able to communicate or function.
+
+1. In your Azure VMware Solution private cloud, select **Manage** > **Add-ons**.
-1. In your Azure VMware Solution private cloud, select **Manage** > **Add-ons** > **Uninstall**.
+1. Select **Get started** for **HCX Workload Mobility**, then select **Uninstall**.
1. Enter **yes** to confirm the uninstall.
azure-web-pubsub Howto Develop Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/howto-develop-create-instance.md
This quickstart shows you how to create Azure Web PubSub instance from Azure por
## Try the newly created instance > [!div class="nextstepaction"]
-> [Try the instance from the browser](./quickstart-live-demo.md)
+> [Try the instance from the browser](./quickstart-live-demo.md#try-the-instance-with-an-online-demo)
> [!div class="nextstepaction"]
-> [Try the instance with Azure CLI](./quickstart-cli-try.md)
+> [Try the instance with Azure CLI](./quickstart-cli-try.md#play-with-the-instance)
## Next steps
azure-web-pubsub Quickstart Cli Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/quickstart-cli-create.md
The [Azure command-line interface (Azure CLI)](/cli/azure) is a set of commands
## Try the newly created instance > [!div class="nextstepaction"]
-> [Try the newly created instance using CLI](./quickstart-cli-try.md)
-
+> [Try the newly created instance using CLI](./quickstart-cli-try.md#play-with-the-instance)
> [!div class="nextstepaction"]
-> [Try the newly created instance from browser](./quickstart-live-demo.md)
+> [Try the newly created instance from the browser](./quickstart-live-demo.md#try-the-instance-with-an-online-demo)
## Clean up resources
azure-web-pubsub Quickstart Cli Try https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/quickstart-cli-try.md
Last updated 08/06/2021
# Quickstart: Connect to the Azure Web PubSub instance from CLI
-Previously we talked about how to create the Web PubSub instance [using Azure CLI](./quickstart-cli-create.md) or [from the portal](./howto-develop-create-instance.md). After the instance is successfully created, Azure CLI also provides a set of commands to connect to the instance and publish messages to the connected clients.
+This quickstart shows you how to connect to the Azure Web PubSub instance and publish messages to the connected clients using the [Azure command-line interface (Azure CLI)](/cli/azure).
+++
+- This quickstart requires version 2.22.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Create a resource group
++
+## Create a Web PubSub instance
+ ## Play with the instance
azure-web-pubsub Quickstart Live Demo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/quickstart-live-demo.md
Last updated 04/26/2021
# Quickstart: Connect to the Azure Web PubSub instance from the browser
-Previously we talked about how to create the Web PubSub instance [from the portal](./howto-develop-create-instance.md) or [using Azure CLI](./quickstart-cli-create.md). This quickstart shows you how to get started easily with a [Pub/Sub live demo](https://azure.github.io/azure-webpubsub/demos/clientpubsub.html).
+This quickstart shows you how to get started easily with a [Pub/Sub live demo](https://azure.github.io/azure-webpubsub/demos/clientpubsub.html).
++ [!INCLUDE [try a live demo](includes/try-live-demo.md)]
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/quickstart-serverless.md
description: A quickstart for using Azure Web PubSub service and Azure Functions
-+ Last updated 03/11/2021
-# Quickstart: Create a serverless simple chat application with Azure Functions and Azure Web PubSub service
+# Quickstart: Create a serverless chat with Azure Functions and Azure Web PubSub service
The Azure Web PubSub service helps you build real-time messaging web applications using WebSockets and the publish-subscribe pattern easily. Azure Functions is a serverless platform that lets you run your code without managing any infrastructure. In this quickstart, learn how to use Azure Web PubSub service and Azure Functions to build a serverless application with real-time messaging and the publish-subscribe pattern.
While the service is deploying, let's switch to working with code. Clone the [sa
Open the */samples/functions/js/simplechat* folder in the cloned repository. Edit *local.settings.json* to add service connection string. In *local.settings.json*, you need to make these changes and then save the file.
- - Replace the place holder *<connection-string>* to the real one copied from **Azure portal** for **`WebPubSubConnectionString`** setting.
- - For **`AzureWebJobsStorage`** setting, this is required due to [Azure Functions requires an Azure Storage account](../azure-functions/storage-considerations.md).
+ - Replace the placeholder `<connection-string>` with the actual connection string copied from the **Azure portal** for the **`WebPubSubConnectionString`** setting.
+ - The **`AzureWebJobsStorage`** setting is required because [Azure Functions requires an Azure Storage account](../azure-functions/storage-considerations.md).
- If you have the [Azure Storage Emulator](https://go.microsoft.com/fwlink/?linkid=717179&clcid=0x409) running locally, keep the original setting of "UseDevelopmentStorage=true". - If you have an Azure Storage connection string, replace the value with it. - JavaScript functions are organized into folders. In each folder are two files: `function.json` defines the bindings that are used in the function, and `index.js` is the body of the function. There are several triggered functions in this function app:
- - **login** - This is the HTTP triggered function. It uses the *webPubSubConnection* input binding to generate and return valid service connection information.
- - **messages** - This is the `WebPubSubTrigger` triggered function. Receives a chat message in the request body and uses the `WebPubSub` output binding to broadcast the message to all connected client applications.
- - **connect** and **connected** - These are the `WebPubSubTrigger` triggered functions. Handle the connect and connected events.
+ - **login** - This function is the HTTP triggered function. It uses the *webPubSubConnection* input binding to generate and return valid service connection information.
+ - **messages** - This function is the `WebPubSubTrigger` triggered function. Receives a chat message in the request body and uses the `WebPubSub` output binding to broadcast the message to all connected client applications.
+ - **connect** and **connected** - These functions are the `WebPubSubTrigger` triggered functions. Handle the connect and connected events.
- In the terminal, ensure that you are in the */samples/functions/js/simplechat* folder. Install the extensions and run the function app.
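As background on the `function.json` files mentioned above, here is an illustrative sketch of what the *login* function's bindings could look like with a `webPubSubConnection` input binding. The property names and values are assumptions based on the binding and hub names discussed in this quickstart; consult the sample repository for the authoritative file:

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get"]
    },
    {
      "type": "webPubSubConnection",
      "direction": "in",
      "name": "connection",
      "hub": "simplechat"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```

The input binding generates the client connection information that `index.js` then returns through the HTTP output binding.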
While the service is deploying, let's switch to working with code. Clone the [sa
Open the */samples/functions/csharp/simplechat* folder in the cloned repository. Edit *local.settings.json* to add service connection string. In *local.settings.json*, you need to make these changes and then save the file.
- - Replace the place holder *<connection-string>* to the real one copied from **Azure portal** for **`WebPubSubConnectionString`** setting.
- - For **`AzureWebJobsStorage`** setting, this is required due to [Azure Functions requires an Azure Storage account](../azure-functions/storage-considerations.md).
+ - Replace the placeholder `<connection-string>` with the actual connection string copied from the **Azure portal** for the **`WebPubSubConnectionString`** setting.
+ - The **`AzureWebJobsStorage`** setting is required because [Azure Functions requires an Azure Storage account](../azure-functions/storage-considerations.md).
- If you have the [Azure Storage Emulator](https://go.microsoft.com/fwlink/?linkid=717179&clcid=0x409) running locally, keep the original setting of "UseDevelopmentStorage=true". - If you have an Azure Storage connection string, replace the value with it. - C# functions are organized in the Functions.cs file. There are several triggered functions in this function app:
- - **login** - This is the HTTP triggered function. It uses the *webPubSubConnection* input binding to generate and return valid service connection information.
- - **connected** - This is the `WebPubSubTrigger` triggered function. Receives a chat message in the request body and broadcast the message to all connected client applications with multiple tasks.
- - **broadcast** - This is the `WebPubSubTrigger` triggered function. Receives a chat message in the request body and broadcast the message to all connected client applications with single task.
- - **connect** and **disconnect** - These are the `WebPubSubTrigger` triggered functions. Handle the connect and disconnect events.
+ - **login** - This function is the HTTP triggered function. It uses the *webPubSubConnection* input binding to generate and return valid service connection information.
+ - **connected** - This function is the `WebPubSubTrigger` triggered function. It receives a chat message in the request body and broadcasts the message to all connected client applications using multiple tasks.
+ - **broadcast** - This function is the `WebPubSubTrigger` triggered function. It receives a chat message in the request body and broadcasts the message to all connected client applications using a single task.
+ - **connect** and **disconnect** - These functions are the `WebPubSubTrigger` triggered functions. Handle the connect and disconnect events.
- In the terminal, ensure that you are in the */samples/functions/csharp/simplechat* folder. Install the extensions and run the function app.
While the service is deploying, let's switch to working with code. Clone the [sa
- Set `Event Handler` in Azure Web PubSub service. Go to **Azure portal** -> Find your Web PubSub resource -> **Settings**. Add a new hub setting that maps to the function in use, as shown below. Replace {ngrok-id} with yours.
- - Hub Name: simplechat
+ - Hub Name: `simplechat`
- URL Template: **http://{ngrok-id}.ngrok.io/runtime/webhooks/webpubsub** - User Event Pattern: * - System Events: connect, connected, disconnected. ## Run the web application
While the service is deploying, let's switch to working with code. Clone the [sa
1. Type a message and press enter. The application sends the message to the *messages* function in the Azure Function app, which then uses the Web PubSub output binding to broadcast the message to all connected clients. If everything is working correctly, the message will appear in the application.
-1. Open another instance of the web application in a different browser window. You will see that any messages sent will appear in all instances of the application.
+1. Open another instance of the web application in a different browser window. You'll see that any messages sent will appear in all instances of the application.
## Clean up resources
If you're not going to continue to use this app, delete all resources created by
## Next steps
-In this quickstart, you learned how to run a serverless simple chat application. Now, you could start to build your own application.
+In this quickstart, you learned how to run a serverless chat application. Now you can start building your own application.
> [!div class="nextstepaction"] > [Quick start: Create a simple chatroom with Azure Web PubSub](https://azure.github.io/azure-webpubsub/getting-started/create-a-chat-app/js-handle-events)
azure-web-pubsub Quickstart Use Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/quickstart-use-sdk.md
+
+ Title: Quickstart - Publish messages using the service SDK for the Azure Web PubSub instance
+description: Quickstart showing how to use the service SDK
++++ Last updated : 08/06/2021++
+# Quickstart: Publish messages using the service SDK for the Azure Web PubSub instance
+
+This quickstart shows you how to publish messages to connected clients using the service SDK.
+++
+- This quickstart requires version 2.22.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Create a resource group
++
+## Create a Web PubSub instance
++
+## Get the ConnectionString for future use
++
+Copy the fetched **ConnectionString**; you'll use it later as the value of `<connection_string>` when working with the service SDK.
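A connection string is a semicolon-delimited list of `Key=Value` pairs (typically including `Endpoint`, `AccessKey`, and `Version`). A small sketch of splitting one into its parts, useful when debugging; the exact format is an assumption here, and in normal use you should treat the string as opaque and pass it to the SDK whole:

```python
def parse_connection_string(conn_str: str) -> dict:
    """Split 'Key=Value;' pairs; values may themselves contain '='."""
    parts = {}
    for segment in conn_str.split(";"):
        if not segment:
            continue  # skip the empty segment after a trailing ';'
        key, _, value = segment.partition("=")
        parts[key] = value
    return parts

# Hypothetical example value, not a real resource.
sample = "Endpoint=https://demo.webpubsub.azure.com;AccessKey=ABC123=;Version=1.0;"
parsed = parse_connection_string(sample)
print(parsed["Endpoint"])   # https://demo.webpubsub.azure.com
print(parsed["AccessKey"])  # ABC123=
```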
+
+## Connect to the instance
++
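The connection steps themselves live in an include not shown in this digest. As background, a Web PubSub client typically presents a JWT signed with the instance's access key. A minimal HS256 sketch using only the standard library, assuming a token with `aud` and `exp` claims; in practice the SDK or CLI generates this for you, so treat the claim set here as illustrative:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_client_token(access_key: str, audience: str, ttl_seconds: int = 3600) -> str:
    """Sketch of an HS256-signed JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({"aud": audience,
                                 "exp": int(time.time()) + ttl_seconds}).encode())
    signing_input = f"{header}.{payload}".encode()
    signature = b64url(hmac.new(access_key.encode(), signing_input,
                                hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"

# Hypothetical key and endpoint for illustration only.
token = make_client_token("my-access-key",
                          "https://demo.webpubsub.azure.com/client/hubs/myHub1")
print(token.count("."))  # 2: header.payload.signature
```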
+## Publish messages using service SDK
+
+Now let's use the Azure Web PubSub SDK to publish a message to the connected clients.
+
+### Prerequisites
+
+# [C#](#tab/csharp)
+
+* [.NET Core 2.1 or above](https://dotnet.microsoft.com/download)
+
+# [JavaScript](#tab/javascript)
+
+* [Node.js 12.x or above](https://nodejs.org)
+
+# [Python](#tab/python)
+* [Python](https://www.python.org/)
+
+# [Java](#tab/java)
+- [Java Development Kit (JDK)](/java/azure/jdk/) version 8 or above.
+- [Apache Maven](https://maven.apache.org/download.cgi).
+++
+### Set up the project to publish messages
+
+# [C#](#tab/csharp)
+
+1. Add a new project `publisher` and add the SDK package `Azure.Messaging.WebPubSub`.
+
+ ```bash
+ mkdir publisher
+ cd publisher
+ dotnet new console
+ dotnet add package Azure.Messaging.WebPubSub --prerelease
+ ```
+
+2. Update the `Program.cs` file to use the `WebPubSubServiceClient` class and send messages to the clients.
+
+ ```csharp
+ using System;
+ using System.Threading.Tasks;
+ using Azure.Messaging.WebPubSub;
+
+ namespace publisher
+ {
+ class Program
+ {
+ static async Task Main(string[] args)
+ {
+ if (args.Length != 3) {
+ Console.WriteLine("Usage: publisher <connectionString> <hub> <message>");
+ return;
+ }
+ var connectionString = args[0];
+ var hub = args[1];
+ var message = args[2];
+
+ var serviceClient = new WebPubSubServiceClient(connectionString, hub);
+
+ // Send messages to all the connected clients
+ // You can also try SendToConnectionAsync to send messages to the specific connection
+ await serviceClient.SendToAllAsync(message);
+ }
+ }
+ }
+ ```
+
+ The `SendToAllAsync()` call simply sends a message to all connected clients in the hub.
+
+3. Run the below command, replacing `<connection_string>` with the **ConnectionString** fetched in [previous step](#get-the-connectionstring-for-future-use):
+
+ ```bash
+ dotnet run "<connection_string>" "myHub1" "Hello World"
+ ```
+
+4. You can see that the previous CLI client received the message.
+
+ ```json
+ {"type":"message","from":"server","dataType":"text","data":"Hello World"}
+ ```
+
+# [JavaScript](#tab/javascript)
+
+1. First let's create a new folder `publisher` for this project and install required dependencies:
+
+ ```bash
+ mkdir publisher
+ cd publisher
+ npm init -y
+ npm install --save @azure/web-pubsub
+
+ ```
+2. Now let's use Azure Web PubSub SDK to publish a message to the service. Create a `publish.js` file with the below code:
+
+ ```javascript
+ const { WebPubSubServiceClient } = require('@azure/web-pubsub');
+
+ if (process.argv.length !== 5) {
+ console.log('Usage: node publish <connection-string> <hub-name> <message>');
+ process.exit(1);
+ }
+
+ let serviceClient = new WebPubSubServiceClient(process.argv[2], process.argv[3]);
+
+ // by default it uses `application/json`, specify contentType as `text/plain` if you want plain-text
+ serviceClient.sendToAll(process.argv[4], { contentType: "text/plain" });
+ ```
+
+ The `sendToAll()` call simply sends a message to all connected clients in a hub.
+
+3. Run the below command, replacing `<connection_string>` with the **ConnectionString** fetched in [previous step](#get-the-connectionstring-for-future-use):
+
+ ```bash
+ node publish "<connection_string>" "myHub1" "Hello World"
+ ```
+
+4. You can see that the previous CLI client received the message.
+
+ ```json
+ {"type":"message","from":"server","dataType":"text","data":"Hello World"}
+ ```
+
+# [Python](#tab/python)
+
+1. First let's create a new folder `publisher` for this project and install required dependencies:
+ * When using bash
+ ```bash
+ mkdir publisher
+ cd publisher
+ # Create venv
+ python -m venv env
+
+ # Activate venv
+ ./env/Scripts/activate
+
+ # Or call .\env\Scripts\activate when you are using CMD
+
+ pip install azure-messaging-webpubsubservice
+
+ ```
+2. Now let's use Azure Web PubSub SDK to publish a message to the service. Create a `publish.py` file with the below code:
+
+ ```python
+ import sys
+ from azure.messaging.webpubsubservice import (
+ WebPubSubServiceClient
+ )
+ from azure.messaging.webpubsubservice.rest import *
+
+ if len(sys.argv) != 4:
+ print('Usage: python publish.py <connection-string> <hub-name> <message>')
+ exit(1)
+
+ connection_string = sys.argv[1]
+ hub_name = sys.argv[2]
+ message = sys.argv[3]
+
+ service_client = WebPubSubServiceClient.from_connection_string(connection_string)
+ res = service_client.send_request(build_send_to_all_request(hub_name, content=message, content_type='text/plain'))
+ # res should be <HttpResponse: 202 Accepted>
+ print(res)
+
+ ```
+
+ The `build_send_to_all_request()` call builds a message, and the `send_request()` call sends it to all connected clients in a hub.
+
+3. Run the below command, replacing `<connection_string>` with the **ConnectionString** fetched in [previous step](#get-the-connectionstring-for-future-use):
+
+ ```bash
+ python publish.py "<connection_string>" "myHub1" "Hello World"
+ ```
+
+4. You can see that the previous CLI client received the message.
+
+ ```json
+ {"type":"message","from":"server","dataType":"text","data":"Hello World"}
+ ```
+
+# [Java](#tab/java)
+
+1. First let's use Maven to create a new console app `webpubsub-quickstart-publisher` and switch into the *webpubsub-quickstart-publisher* folder:
+ ```console
+ mvn archetype:generate --define interactiveMode=n --define groupId=com.webpubsub.quickstart --define artifactId=webpubsub-quickstart-publisher --define archetypeArtifactId=maven-archetype-quickstart --define archetypeVersion=1.4
+ cd webpubsub-quickstart-publisher
+ ```
+
+2. Let's add Azure Web PubSub SDK dependency into the `dependencies` node of `pom.xml`:
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-messaging-webpubsub</artifactId>
+ <version>1.0.0-beta.2</version>
+ </dependency>
+ ```
+
+3. Now let's use the Azure Web PubSub SDK to publish a message to the service. Navigate to the */src/main/java/com/webpubsub/quickstart* directory, open the *App.java* file in your editor, and replace the code with the below:
+
+ ```java
+ package com.webpubsub.quickstart;
+
+ import com.azure.messaging.webpubsub.*;
+ import com.azure.messaging.webpubsub.models.*;
+
+ /**
+ * Quickstart - Publish messages using Azure Web PubSub service SDK
+ *
+ */
+ public class App
+ {
+ public static void main( String[] args )
+ {
+ if (args.length != 3) {
+ System.out.println("Expecting 3 arguments: <connection-string> <hub-name> <message>");
+ return;
+ }
+
+ WebPubSubServiceClient client = new WebPubSubClientBuilder()
+ .connectionString(args[0])
+ .hub(args[1])
+ .buildClient();
+ client.sendToAll(args[2], WebPubSubContentType.TEXT_PLAIN);
+ }
+ }
+
+ ```
+
+ The `sendToAll()` call simply sends a message to all connected clients in a hub.
+
+4. Navigate to the directory containing the *pom.xml* file and compile the project by using the following `mvn` command.
+
+ ```console
+ mvn compile
+ ```
+5. Then build the package:
+
+ ```console
+ mvn package
+ ```
+6. Run the following `mvn` command to execute the app, replacing `<connection_string>` with the **ConnectionString** fetched in [previous step](#get-the-connectionstring-for-future-use):
+
+ ```console
+ mvn exec:java -Dexec.mainClass="com.webpubsub.quickstart.App" -Dexec.cleanupDaemonThreads=false -Dexec.args="'<connection_string>' 'myHub1' 'Hello World'"
+ ```
+
+7. You can see that the previous CLI client received the message.
+
+ ```json
+ {"type":"message","from":"server","dataType":"text","data":"Hello World"}
+ ```
+++
+## Next steps
+
+This quickstart gave you a basic idea of how to connect to the Web PubSub service and how to publish messages to the connected clients.
+
backup Disk Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/disk-backup-overview.md
Azure Backup uses [incremental snapshots](../virtual-machines/disks-incremental-
Incremental snapshots are always stored on standard storage, irrespective of the storage type of parent-managed disks, and are charged based on the pricing of standard storage. For example, incremental snapshots of a Premium SSD-Managed Disk are stored on standard storage. By default, they are stored on ZRS in regions that support ZRS. Otherwise, they are stored on locally redundant storage (LRS). The per GiB pricing of both the options, LRS and ZRS, is the same.
-The snapshots created by Azure Backup are stored in the resource group within your Azure subscription and incur Snapshot Storage charges. ForTo more details about the snapshot pricing, see [Managed Disk Pricing](https://azure.microsoft.com/en-us/pricing/details/managed-disks/). Because the snapshots aren't copied to the Backup Vault, Azure Backup doesn't charge a Protected Instance fee and Backup Storage cost doesn't apply.
+The snapshots created by Azure Backup are stored in the resource group within your Azure subscription and incur Snapshot Storage charges. For more details about snapshot pricing, see [Managed Disk Pricing](https://azure.microsoft.com/pricing/details/managed-disks/). Because the snapshots aren't copied to the Backup Vault, Azure Backup doesn't charge a Protected Instance fee and Backup Storage cost doesn't apply.
During a backup operation, the Azure Backup service creates a Storage Account in the Snapshot Resource Group, where the snapshots are stored. A managed disk's incremental snapshots are ARM resources created in the resource group, not in the Storage Account.
baremetal-infrastructure High Availability Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/oracle/high-availability-features.md
Data Guard offers advantages over storage-level replication:
- Prevents logical intra-block corruptions and lost-write corruptions. It also eliminates the risk of mistakes made by storage administrators from replicating to the standby. Redo can be delayed for a pre-determined period, so user errors aren't immediately replicated to the standby.
-## Azure NetApp Files snapshots
+## BareMetal snapshot recovery
-The NetApp Files storage solution used in BareMetal allows you to create snapshots of volumes. Snapshots allow you to revert a file system to a specific point in time quickly. Snapshot technologies allow recovery time objective (RTO) times that are a fraction of the time needed to restore a database backup.
+The BareMetal storage solution offered in the Infrastructure, which uses NetApp, allows you to create snapshots of volumes. Snapshots allow you to revert a file system to a specific point in time quickly. Snapshot technologies allow recovery time objective (RTO) times that are a fraction of the time needed to restore a database backup.
Snapshot functionality for Oracle databases is available through Azure NetApp SnapCenter. SnapCenter enables snapshots for backup, SnapVault gives you offline vaulting, and Snap Clone enables self-service restore and other operations. For more information, see [SnapCenter integration for Oracle on BareMetal Infrastructure](netapp-snapcenter-integration-oracle-baremetal.md).
bastion Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/diagnostic-logs.md
To access your diagnostics logs, you can directly use the storage account that y
1. Navigate to your storage account resource, then to **Containers**. You see the **insights-logs-bastionauditlogs** blob created in your storage account blob container.

   ![diagnostics settings](./media/diagnostic-logs/1-navigate-to-logs.png)
-2. As you navigate to inside the container, you see various folders in your blog. These folders indicate the resource hierarchy for your Azure Bastion resource.
+2. As you navigate to inside the container, you see various folders in your blob. These folders indicate the resource hierarchy for your Azure Bastion resource.
   ![add diagnostic setting](./media/diagnostic-logs/2-resource-h.png)

3. Navigate to the full hierarchy of your Azure Bastion resource whose diagnostics logs you wish to access/view. The 'y=', 'm=', 'd=', 'h=' and 'm=' indicate the year, month, day, hour, and minute, respectively, for the resource logs.
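As a sketch, those year/month/day/hour/minute path segments can be turned back into a timestamp. This assumes a typical diagnostics blob path layout; `parse_log_path_timestamp` is a hypothetical helper, not part of any Azure SDK:

```python
from datetime import datetime, timezone

def parse_log_path_timestamp(blob_path: str) -> datetime:
    """Extract the UTC timestamp encoded in the y=/m=/d=/h=/m= segments
    of a diagnostics log blob path. The first m= segment is the month,
    the second m= segment is the minute."""
    fields = {}
    for seg in blob_path.split("/"):
        key, sep, val = seg.partition("=")
        if not sep or not val.isdigit():
            continue  # skip resourceId=... and non-numeric segments
        if key == "y":
            fields["year"] = int(val)
        elif key == "m":
            fields["month" if "month" not in fields else "minute"] = int(val)
        elif key == "d":
            fields["day"] = int(val)
        elif key == "h":
            fields["hour"] = int(val)
    return datetime(fields["year"], fields["month"], fields["day"],
                    fields["hour"], fields.get("minute", 0), tzinfo=timezone.utc)
```

For example, a path segment run like `y=2021/m=08/d=13/h=03/m=05` parses to 2021-08-13 03:05 UTC.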
batch Managed Identity Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/managed-identity-pools.md
var poolParameters = new Pool(name: "yourPoolName")
Identity = new BatchPoolIdentity { Type = PoolIdentityType.UserAssigned,
- UserAssignedIdentities = new Dictionary<string, BatchPoolIdentityUserAssignedIdentitiesValue>
+ UserAssignedIdentities = new Dictionary<string, UserAssignedIdentities>
{ ["Your Identity Resource Id"] =
- new BatchPoolIdentityUserAssignedIdentitiesValue()
+ new UserAssignedIdentities()
} } };
cdn Cdn Msft Http Debug Headers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-msft-http-debug-headers.md
X-Cache: TCP_HIT | This header is returned when the content is served from the C
X-Cache: TCP_REMOTE_HIT | This header is returned when the content is served from the CDN regional cache (Origin shield layer).
X-Cache: TCP_MISS | This header is returned when there is a cache miss, and the content is served from the Origin.
X-Cache: PRIVATE_NOSTORE | This header is returned when the request cannot be cached because the Cache-Control response header is set to either private or no-store.
-X-Cache: CONFIG_NOCACHE | This header is returned when the request when request is configured not to cache in the CDN profile.
+X-Cache: CONFIG_NOCACHE | This header is returned when the request is configured not to cache in the CDN profile.
For additional information on HTTP headers supported in Azure CDN, see [Front Door to backend](../frontdoor/front-door-http-headers-protocol.md#front-door-to-backend).
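As a minimal sketch, the X-Cache values listed above could be mapped to a coarse hit/miss/uncacheable outcome when post-processing logs; the `classify_x_cache` helper below is hypothetical, only the header values come from the table:

```python
def classify_x_cache(value: str) -> str:
    """Map an X-Cache debug header value to a coarse delivery category."""
    hits = {"TCP_HIT", "TCP_REMOTE_HIT"}          # served from CDN (edge or origin shield)
    uncacheable = {"PRIVATE_NOSTORE", "CONFIG_NOCACHE"}
    v = value.strip().upper()
    if v in hits:
        return "served from CDN cache"
    if v == "TCP_MISS":
        return "served from origin"
    if v in uncacheable:
        return "not cacheable"
    return "unknown"
```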
certification Program Requirements Edge Managed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/program-requirements-edge-managed.md
The Edge Managed certification requires that all requirements from the [Azure Ce
| **OS** | [Tier1 and Tier2 OS](../iot-edge/support.md) |
| **Validation Type** | Automated |
| **Validation** | AICS validates the deploy-ability of the installed IoT Edge RT. **1.** User needs to specify specific OS (OS not on the list of Tier1/2 are not accepted) **2.** AICS generates its config.yaml and deploys canonical [simulated temp sensor edge module](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azure-iot.simulated-temperature-sensor?tab=Overview) **3.** AICS validates that docker compatible container subsystem (Moby) is installed on the device **4.** Test result is determined based on successful deployment of the simulated temp sensor edge module and functionality of docker compatible container subsystem |
-| **Resources** | **a)** [AICS blog](https://azure.microsoft.com/en-in/blog/expanding-azure-iot-certification-service-to-support-azure-iot-edge-device-certification/), **b)** [Certification steps](./overview.md) (has all the additional resources), **c)** [Requirements](./program-requirements-azure-certified-device.md) |
+| **Resources** | **a)** [AICS blog](https://azure.microsoft.com/blog/expanding-azure-iot-certification-service-to-support-azure-iot-edge-device-certification/), **b)** [Certification steps](./overview.md) (has all the additional resources), **c)** [Requirements](./program-requirements-azure-certified-device.md) |
| **Azure Recommended:** | N/A |

### Capability Template:
The Edge Managed certification requires that all requirements from the [Azure Ce
| **OS** | [Tier1 and Tier2 OS](../iot-edge/support.md) |
| **Validation Type** | Manual / Lab Verified |
| **Validation** | OEM must ship the physical device to IoT administration (HCL). HCL performs manual validation on the physical device to check: **1.** EdgeRT is using Moby subsystem (allowed redistribution version). Not docker **2.** Pick the latest edge module to validate ability to deploy edge. |
-| **Resources** | **a)** [AICS blog](https://azure.microsoft.com/en-in/blog/expanding-azure-iot-certification-service-to-support-azure-iot-edge-device-certification/), **b)** [Certification steps](./overview.md) , **c)** [Requirements](./program-requirements-azure-certified-device.md) |
+| **Resources** | **a)** [AICS blog](https://azure.microsoft.com/blog/expanding-azure-iot-certification-service-to-support-azure-iot-edge-device-certification/), **b)** [Certification steps](./overview.md), **c)** [Requirements](./program-requirements-azure-certified-device.md) |
| **Azure Recommended:** | N/A |
cloud-services-extended-support In Place Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-overview.md
For more information, see [Overview of Platform-supported migration of IaaS reso
- Dynamic Public IP addresses
- DNS Name
- Network Traffic Rules
- Hypernet virtual network

## Supported configurations / migration scenarios

These are top scenarios involving combinations of resources, features, and Cloud Services. This list is not exhaustive.
cloud-services-extended-support Sample Update Cloud Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/sample-update-cloud-service.md
These samples cover various ways to update an existing Azure Cloud Service (extended support) deployment.

## Add an extension to existing Cloud Service
+The following commands add an RDP extension to the existing cloud service named ContosoCS, which belongs to the resource group named ContosOrg.
```powershell
# Create RDP extension object
$rdpExtension = New-AzCloudServiceRemoteDesktopExtensionObject -Name "RDPExtension" -Credential $credential -Expiration $expiration -TypeHandlerVersion "1.2.1"
-# Get existing Cloud Service
+# Get existing cloud service
$cloudService = Get-AzCloudService -ResourceGroup "ContosOrg" -CloudServiceName "ContosoCS"
-# Add RDP extension to existing Cloud Service extension object
-$cloudService.ExtensionProfileExtension = $cloudService.ExtensionProfileExtension + $rdpExtension
-# Update Cloud Service
+# Add RDP extension to existing cloud service extension object
+$cloudService.ExtensionProfile.Extension = $cloudService.ExtensionProfile.Extension + $rdpExtension
+# Update cloud service
$cloudService | Update-AzCloudService
```

## Remove all extensions from a Cloud Service
+The following commands remove all extensions from the existing cloud service named ContosoCS, which belongs to the resource group named ContosOrg.
```powershell
-# Get existing Cloud Service
+# Get existing cloud service
$cloudService = Get-AzCloudService -ResourceGroup "ContosOrg" -CloudServiceName "ContosoCS"
# Set extension to empty list
-$cloudService.ExtensionProfileExtension = @()
-# Update Cloud Service
+$cloudService.ExtensionProfile.Extension = @()
+# Update cloud service
$cloudService | Update-AzCloudService
```

## Remove the remote desktop extension from Cloud Service
+The following commands remove the RDP extension from the existing cloud service named ContosoCS, which belongs to the resource group named ContosOrg.
```powershell
-# Get existing Cloud Service
+# Get existing cloud service
$cloudService = Get-AzCloudService -ResourceGroup "ContosOrg" -CloudServiceName "ContosoCS"
# Remove extension by name RDPExtension
-$cloudService.ExtensionProfileExtension = $cloudService.ExtensionProfileExtension | Where-Object { $_.Name -ne "RDPExtension" }
-# Update Cloud Service
+$cloudService.ExtensionProfile.Extension = $cloudService.ExtensionProfile.Extension | Where-Object { $_.Name -ne "RDPExtension" }
+# Update cloud service
$cloudService | Update-AzCloudService
```

## Scale-out / scale-in role instances
+The following commands show how to scale out and scale in the role instance count for the cloud service named ContosoCS, which belongs to the resource group named ContosOrg.
```powershell
-# Get existing Cloud Service
+# Get existing cloud service
$cloudService = Get-AzCloudService -ResourceGroup "ContosOrg" -CloudServiceName "ContosoCS"
# Scale-out all role instance count by 1
-$cloudService.RoleProfileRole | ForEach-Object {$_.SkuCapacity += 1}
+$cloudService.RoleProfile.Role | ForEach-Object {$_.SkuCapacity += 1}
# Scale-in ContosoFrontend role instance count by 1
-$role = $cloudService.RoleProfileRole | Where-Object {$_.Name -eq "ContosoFrontend"}
- $role.SkuCapacity -= 1
+$role = $cloudService.RoleProfile.Role | Where-Object {$_.Name -eq "ContosoFrontend"}
+$role.SkuCapacity -= 1
-# Update Cloud Service configuration as per the new role instance count
+# Update cloud service configuration as per the new role instance count
$cloudService.Configuration = $configuration
-# Update Cloud Service
+# Update cloud service
$cloudService | Update-AzCloudService
```
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Audio files can have silence at the beginning and end of the recording. If possi
[!INCLUDE [supported-audio-formats](includes/supported-audio-formats.md)] > [!TIP]
-> Don't even have any real audio? You can also upload a text (.txt) file (select type **Transcript (automatic audio synthesis)** when uploading data) with some testing sentences, and audio pair for each spoken sentence will be automatically synthesized.
->
+> Don't even have any real audio? You can also upload a text (.txt) file by selecting type **Transcript (automatic audio synthesis)** as **Testing** data to get a basic sense of current accuracy levels, and an audio pair for each spoken utterance will be automatically synthesized using [Text-to-speech](text-to-speech.md).
+>
+> Note that the synthesized audio is typically **NOT** recommended for use as **Training** data, because the text-to-speech voices are too clean to reflect real acoustic conditions.
+>
> The maximum file size is 500 KB. We will synthesize one audio file for each line, and the maximum size of each line is 65535 bytes.

> [!NOTE]
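Those size limits (500 KB per file, 65535 bytes per line) can be checked before uploading. A minimal sketch, assuming UTF-8 encoding and reading "500 KB" as 500 × 1024 bytes; `validate_transcript` is a hypothetical helper:

```python
MAX_FILE_BYTES = 500 * 1024   # assumed interpretation of the 500 KB upload limit
MAX_LINE_BYTES = 65535        # per-line limit for synthesis

def validate_transcript(text: str) -> list:
    """Return a list of limit violations for a transcript .txt destined
    for automatic audio synthesis."""
    problems = []
    if len(text.encode("utf-8")) > MAX_FILE_BYTES:
        problems.append("file exceeds 500 KB")
    for i, line in enumerate(text.splitlines(), start=1):
        if len(line.encode("utf-8")) > MAX_LINE_BYTES:
            problems.append(f"line {i} exceeds 65535 bytes")
    return problems
```

An empty result means the transcript fits both limits.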
cognitive-services Ingestion Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/ingestion-client.md
The Ingestion Client is a tool released on [GitHub](https://github.com/Azure-Sam
## Architecture
-The tool helps those customers that want to get an idea of the quality of the transcript without making development investments up front. The tool connects a few resources to transcribe audio files that land in the dedicated [Azure Storage container](https://azure.microsoft.com/en-us/product-categories/storage/).
+The tool helps those customers that want to get an idea of the quality of the transcript without making development investments up front. The tool connects a few resources to transcribe audio files that land in the dedicated [Azure Storage container](https://azure.microsoft.com/product-categories/storage/).
Internally, the tool uses our V3.0 Batch API or SDK, and follows best practices to handle scale-up, retries and failover. The following schematic describes the resources and connections.
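The retry behavior mentioned above follows the common retry-with-exponential-backoff pattern. A generic sketch of that pattern only, not the Ingestion Client's actual code:

```python
import time

def with_retries(op, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    """Call op() until it succeeds, doubling the wait between attempts.
    Re-raises the last exception once max_attempts is exhausted."""
    for attempt in range(max_attempts):
        try:
            return op()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

In real use the caught exception type would be narrowed to transient failures (timeouts, throttling) rather than a bare `Exception`.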
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
Neural voices can be used to make interactions with chatbots and voice assistant
| English (United Kingdom) | `en-GB` | Female | `en-GB-MiaNeural` | General |
| English (United Kingdom) | `en-GB` | Male | `en-GB-RyanNeural` | General |
| English (United States) | `en-US` | Female | `en-US-AriaNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Female | `en-US-JennyNeural` | General |
-| English (United States) | `en-US` | Male | `en-US-GuyNeural` | General |
+| English (United States) | `en-US` | Female | `en-US-JennyNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Male | `en-US-GuyNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Female | `en-US-AmberNeural` <sup>New</sup> | General |
+| English (United States) | `en-US` | Female | `en-US-AshleyNeural` <sup>New</sup> | General |
+| English (United States) | `en-US` | Female | `en-US-CoraNeural` <sup>New</sup> | General |
+| English (United States) | `en-US` | Female | `en-US-ElizabethNeural` <sup>New</sup> | General |
+| English (United States) | `en-US` | Female | `en-US-MichelleNeural` <sup>New</sup> | General |
+| English (United States) | `en-US` | Female | `en-US-MonicaNeural` <sup>New</sup> | General |
+| English (United States) | `en-US` | Kid | `en-US-AnaNeural` <sup>New</sup> | General |
+| English (United States) | `en-US` | Male | `en-US-BrandonNeural` <sup>New</sup> | General |
+| English (United States) | `en-US` | Male | `en-US-ChristopherNeural` <sup>New</sup> | General |
+| English (United States) | `en-US` | Male | `en-US-JacobNeural` <sup>New</sup> | General |
+| English (United States) | `en-US` | Male | `en-US-EricNeural` <sup>New</sup> | General |
| Estonian (Estonia) | `et-EE` | Female | `et-EE-AnuNeural` | General |
| Estonian (Estonia) | `et-EE` | Male | `et-EE-KertNeural` | General |
| Finnish (Finland) | `fi-FI` | Female | `fi-FI-NooraNeural` | General |
The neural voices below are in public preview.
| Language | Locale | Gender | Voice name | Style support |
|---|---|---|---|---|
-| English (United States) | `en-US` | Female | `en-US-AmberNeural` <sup>New</sup> | General |
-| English (United States) | `en-US` | Female | `en-US-AshleyNeural` <sup>New</sup> | General |
-| English (United States) | `en-US` | Female | `en-US-CoraNeural` <sup>New</sup> | General |
-| English (United States) | `en-US` | Female | `en-US-ElizabethNeural` <sup>New</sup> | General |
| English (United States) | `en-US` | Female | `en-US-JennyMultilingualNeural` <sup>New</sup> | General, multi-lingual capabilities available [using SSML](speech-synthesis-markup.md#create-an-ssml-document) |
-| English (United States) | `en-US` | Female | `en-US-MichelleNeural` <sup>New</sup> | General |
-| English (United States) | `en-US` | Female | `en-US-MonicaNeural` <sup>New</sup> | General |
-| English (United States) | `en-US` | Kid | `en-US-AnaNeural` <sup>New</sup> | General |
-| English (United States) | `en-US` | Male | `en-US-BrandonNeural` <sup>New</sup> | General |
-| English (United States) | `en-US` | Male | `en-US-ChristopherNeural` <sup>New</sup> | General |
-| English (United States) | `en-US` | Male | `en-US-JacobNeural` <sup>New</sup> | General |
-| English (United States) | `en-US` | Male | `en-US-EricNeural` <sup>New</sup> | General |
> [!IMPORTANT]
> Voices in public preview are only available in 3 service regions: East US, West Europe and Southeast Asia.
cognitive-services Migrate V2 To V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/migrate-v2-to-v3.md
This change requires calling the `GET` for the collection in a loop until all el
A detailed description on how to create batches of transcriptions can be found in [Batch transcription How-to](./batch-transcription.md). The v3 transcription API lets you set specific transcription options explicitly. All (optional) configuration properties can now be set in the `properties` property.
-Version v3 also supports multiple input files, so it requires a list of URLs rather than a single URL as v2 did. The v2 property name `recordingsUrl` is now `contentUrls` in v3. The functionality of analyzing sentiment in transcriptions has been removed in v3. See Microsoft Cognitive Service [Text Analysis](https://azure.microsoft.com/en-us/services/cognitive-services/text-analytics/) for sentiment analysis options.
+Version v3 also supports multiple input files, so it requires a list of URLs rather than a single URL as v2 did. The v2 property name `recordingsUrl` is now `contentUrls` in v3. The functionality of analyzing sentiment in transcriptions has been removed in v3. See Microsoft Cognitive Service [Text Analysis](https://azure.microsoft.com/services/cognitive-services/text-analytics/) for sentiment analysis options.
The new property `timeToLive` under `properties` can help prune the existing completed entities. The `timeToLive` specifies a duration after which a completed entity will be deleted automatically. Set it to a high value (for example `PT12H`) when the entities are continuously tracked, consumed, and deleted and therefore usually processed long before 12 hours have passed.
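Values like `PT12H` are ISO-8601 time durations. A minimal sketch of interpreting one (hours/minutes/seconds only; `ttl_to_timedelta` is a hypothetical helper, not part of the Speech SDK):

```python
import re
from datetime import timedelta

def ttl_to_timedelta(ttl: str) -> timedelta:
    """Parse a simple ISO-8601 time duration such as PT12H or PT1H30M."""
    m = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?", ttl)
    if not m:
        raise ValueError(f"unsupported duration: {ttl}")
    hours, minutes, seconds = (int(g) if g else 0 for g in m.groups())
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)
```

Adding the resulting delta to a transcription's completion time gives the moment the entity becomes eligible for automatic deletion.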
container-registry Github Action Scan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/github-action-scan.md
Get started with the [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions) by creating a workflow to build and scan a container image.
-With GitHub Actions, you can speed up your CI/CD process by building, scanning, and pushing images to a public or private [Container Registry](https://azure.microsoft.com/en-in/services/container-registry/) from your workflows.
+With GitHub Actions, you can speed up your CI/CD process by building, scanning, and pushing images to a public or private [Container Registry](https://azure.microsoft.com/services/container-registry/) from your workflows.
In this article, we'll make use of the [Container image scan](https://github.com/marketplace/actions/container-image-scan) from the [GitHub Marketplace](https://github.com/marketplace).
container-registry Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Container Registry description: Sample Azure Resource Graph queries for Azure Container Registry showing use of resource types and tables to access Azure Container Registry related resources and properties. Previously updated : 08/04/2021 Last updated : 08/09/2021
cosmos-db Best Practice Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/best-practice-dotnet.md
Watch the video below to learn more about using the .NET SDK from a Cosmos DB en
## Checklist

|Checked | Topic |Details/Links |
|---|---|---|
-| <input type="checkbox" unchecked /> | SDK Version | Always using the [latest version](sql-api-sdk-dotnet-standard.md) of the Cosmos DB SDK available for optimal performance. |
-| <input type="checkbox" unchecked /> | Singleton Client | Use a [single instance](/dotnet/api/microsoft.azure.cosmos.cosmosclient?view=azure-dotnet&preserve-view=true) of `CosmosClient` for the lifetime of your application for [better performance](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage). |
-| <input type="checkbox" unchecked /> | Regions | Make sure to run your application in the same [Azure region](distribute-data-globally.md) as your Azure Cosmos DB account, whenever possible to reduce latency. Enable 2-4 regions and replicate your accounts in multiple regions for [best availability](distribute-data-globally.md). For production workloads, enable [automatic failover](how-to-manage-database-account.md#configure-multiple-write-regions). In the absence of this configuration, the account will experience loss of write availability for all the duration of the write region outage, as manual failover will not succeed due to lack of region connectivity. To learn how to add multiple regions using the .NET SDK visit [here](tutorial-global-distribution-sql-api.md) |
-| <input type="checkbox" unchecked /> | Availability and Failovers | Set the [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions?view=azure-dotnet&preserve-view=true) or [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion?view=azure-dotnet&preserve-view=true) in the v3 SDK, and the [PreferredLocations](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations?view=azure-dotnet&preserve-view=true) in the v2 SDK using the [preferred regions list](./tutorial-global-distribution-sql-api.md?tabs=dotnetv3%2capi-async#preferred-locations). During failovers, write operations are sent to the current write region and all reads are sent to the first region within your preferred regions list. For more information about regional failover mechanics see the [availability troubleshooting guide](troubleshoot-sdk-availability.md). |
-| <input type="checkbox" unchecked /> | CPU | You may run into connectivity/availability issues due to lack of resources on your client machine. Monitor your CPU utilization on nodes running the Azure Cosmos DB client, and scale up/out if usage is very high. |
-| <input type="checkbox" unchecked /> | Hosting | Use [Windows 64-bit host](performance-tips.md#hosting) processing for best performance, whenever possible. |
-| <input type="checkbox" unchecked /> | Connectivity Modes | Use [Direct mode](sql-sdk-connection-modes.md) for the best performance. For instructions on how to do this, see the [V3 SDK documentation](performance-tips-dotnet-sdk-v3-sql.md#networking) or the [V2 SDK documentation](performance-tips.md#networking).|
-| <input type="checkbox" unchecked /> | Networking | If using a virtual machine to run your application, enable [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-powershell.md) on your VM to help with bottlenecks due to high traffic and reduce latency or CPU jitter. You might also want to consider using a higher end Virtual Machine where the max CPU usage is under 70%. |
-| <input type="checkbox" unchecked /> | Ephemeral Port Exhaustion | For sparse or sporadic connections, we set the [`IdleConnectionTimeout`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout?view=azure-dotnet&preserve-view=true) and [`PortReuseMode`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode?view=azure-dotnet&preserve-view=true) to `PrivatePortPool`. The `IdleConnectionTimeout` property helps which control the time unused connections are closed. This will reduce the number of unused connections. By default, idle connections are kept open indefinitely. The value set must be greater than or equal to 10 minutes. We recommended values between 20 minutes and 24 hours. The `PortReuseMode` property allows the SDK to use a small pool of ephemeral ports for various Azure Cosmos DB destination endpoints. |
-| <input type="checkbox" unchecked /> | Use Async/Await | Avoid blocking calls: `Task.Result`, `Task.Wait`, and `Task.GetAwaiter().GetResult()`. The entire call stack is asynchronous in order to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns. Many synchronous blocking calls lead to [Thread Pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times. |
-| <input type="checkbox" unchecked /> | End-to-End Timeouts | To get end-to-end timeouts, you'll need to use both `RequestTimeout` and `CancellationToken` parameters. For more details on timeouts with Cosmos DB [visit](troubleshoot-dot-net-sdk-request-timeout.md) |
-| <input type="checkbox" unchecked /> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK will not retry on writes for transient failures as writes are not idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit](troubleshoot-dot-net-sdk.md#retry-logics) |
-| <input type="checkbox" unchecked /> | Caching database/collection names | Retrieve the names of your databases and containers from configuration or cache them on start. Calls like `ReadDatabaseAsync` or `ReadDocumentCollectionAsync` and `CreateDatabaseQuery` or `CreateDocumentCollectionQuery` will result in metadata calls to the service, which consume from the system-reserved RU limit. `CreateIfNotExist` should also only be used once for setting up the database. Overall, these operations should be performed infrequently. |
-|<input type="checkbox" unchecked /> | Bulk Support | In scenarios where you may not need to optimize for latency, we recommend enabling [Bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) for dumping large volumes of data. |
-| <input type="checkbox" unchecked /> | Parallel Queries | The Cosmos DB SDK supports [running queries in parallel](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage) for better latency and throughput on your queries. We recommend setting the `MaxConcurrency` property within the `QueryRequestsOptions` to the number of partitions you have. If you are not aware of the number of partitions, start by using `int.MaxValue` which will give you the best latency. Then decrease the number until it fits the resource restrictions of the environment to avoid high CPU issues. Also, set the `MaxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. |
-| <input type="checkbox" unchecked /> | Performance Testing Backoffs | When performing testing on your application, you should implement backoffs at [`RetryAfter`](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage) intervals. Respecting the backoff helps ensure that you'll spend a minimal amount of time waiting between retries. |
-| <input type="checkbox" unchecked /> | Indexing | The Azure Cosmos DB indexing policy also allows you to specify which document paths to include or exclude from indexing by using indexing paths (IndexingPolicy.IncludedPaths and IndexingPolicy.ExcludedPaths). Ensure that you exclude unused paths from indexing for faster writes. For a sample on how to create indexes using the SDK [visit](performance-tips-dotnet-sdk-v3-sql.md#indexing-policy) |
-| <input type="checkbox" unchecked /> | Document Size | The request charge of a specified operation correlates directly to the size of the document. We recommend reducing the size of your documents as operations on large documents cost more than operations on smaller documents. |
-| <input type="checkbox" unchecked /> | Increase the number of threads/tasks | Because calls to Azure Cosmos DB are made over the network, you might need to vary the degree of concurrency of your requests so that the client application spends minimal time waiting between requests. For example, if you're using the [.NET Task Parallel Library](/dotnet/standard/parallel-programming/task-parallel-library-tpl), create on the order of hundreds of tasks that read from or write to Azure Cosmos DB. |
-| <input type="checkbox" unchecked /> | Enabling Query Metrics | For additional logging of your backend query executions, you can enable SQL Query Metrics using our .NET SDK. For instructions on how to collect SQL Query Metrics [visit](profile-sql-api-query.md) |
-| <input type="checkbox" unchecked /> | SDK Logging | Use SDK logging to capture additional diagnostics information and troubleshoot latency issues. Log the [diagnostics string](/dotnet/api/microsoft.azure.documents.client.resourceresponsebase.requestdiagnosticsstring?view=azure-dotnet&preserve-view=true) in the V2 SDK or [`Diagnostics`](/dotnet/api/microsoft.azure.cosmos.responsemessage.diagnostics?view=azure-dotnet&preserve-view=true) in v3 SDK for more detailed cosmos diagnostic information for the current request to the service. As an example use case, capture Diagnostics on any exception and on completed operations if the `Diagnostics.ElapsedTime` is greater than a designated threshold value (i.e. if you have an SLA of 10 seconds, then capture diagnostics when `ElapsedTime` > 10 seconds ). It is advised to only use these diagnostics during performance testing. |
+|<input type="checkbox"/> | SDK Version | Always using the [latest version](sql-api-sdk-dotnet-standard.md) of the Cosmos DB SDK available for optimal performance. |
+| <input type="checkbox"/> | Singleton Client | Use a [single instance](/dotnet/api/microsoft.azure.cosmos.cosmosclient?view=azure-dotnet&preserve-view=true) of `CosmosClient` for the lifetime of your application for [better performance](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage). |
+| <input type="checkbox"/> | Regions | Make sure to run your application in the same [Azure region](distribute-data-globally.md) as your Azure Cosmos DB account, whenever possible to reduce latency. Enable 2-4 regions and replicate your accounts in multiple regions for [best availability](distribute-data-globally.md). For production workloads, enable [automatic failover](how-to-manage-database-account.md#configure-multiple-write-regions). In the absence of this configuration, the account will experience loss of write availability for all the duration of the write region outage, as manual failover will not succeed due to lack of region connectivity. To learn how to add multiple regions using the .NET SDK visit [here](tutorial-global-distribution-sql-api.md) |
+| <input type="checkbox"/> | Availability and Failovers | Set the [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions?view=azure-dotnet&preserve-view=true) or [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion?view=azure-dotnet&preserve-view=true) in the v3 SDK, and the [PreferredLocations](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations?view=azure-dotnet&preserve-view=true) in the v2 SDK using the [preferred regions list](./tutorial-global-distribution-sql-api.md?tabs=dotnetv3%2capi-async#preferred-locations). During failovers, write operations are sent to the current write region and all reads are sent to the first region within your preferred regions list. For more information about regional failover mechanics see the [availability troubleshooting guide](troubleshoot-sdk-availability.md). |
+| <input type="checkbox"/> | CPU | You may run into connectivity/availability issues due to lack of resources on your client machine. Monitor your CPU utilization on nodes running the Azure Cosmos DB client, and scale up/out if usage is very high. |
+| <input type="checkbox"/> | Hosting | Use [Windows 64-bit host](performance-tips.md#hosting) processing for best performance, whenever possible. |
+| <input type="checkbox"/> | Connectivity Modes | Use [Direct mode](sql-sdk-connection-modes.md) for the best performance. For instructions on how to do this, see the [V3 SDK documentation](performance-tips-dotnet-sdk-v3-sql.md#networking) or the [V2 SDK documentation](performance-tips.md#networking).|
+|<input type="checkbox"/> | Networking | If using a virtual machine to run your application, enable [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-powershell.md) on your VM to help with bottlenecks due to high traffic and to reduce latency or CPU jitter. You might also want to consider using a higher-end virtual machine where the max CPU usage is under 70%. |
+|<input type="checkbox"/> | Ephemeral Port Exhaustion | For sparse or sporadic connections, set the [`IdleConnectionTimeout`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout?view=azure-dotnet&preserve-view=true) and set [`PortReuseMode`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode?view=azure-dotnet&preserve-view=true) to `PrivatePortPool`. The `IdleConnectionTimeout` property controls how long unused connections stay open before they're closed, which reduces the number of unused connections. By default, idle connections are kept open indefinitely. The value set must be greater than or equal to 10 minutes; we recommend values between 20 minutes and 24 hours. The `PortReuseMode` property allows the SDK to use a small pool of ephemeral ports for various Azure Cosmos DB destination endpoints. |
+|<input type="checkbox"/> | Use Async/Await | Avoid blocking calls: `Task.Result`, `Task.Wait`, and `Task.GetAwaiter().GetResult()`. The entire call stack should be asynchronous in order to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns. Many synchronous blocking calls lead to [thread pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times. |
+|<input type="checkbox"/> | End-to-End Timeouts | To get end-to-end timeouts, use both the `RequestTimeout` and `CancellationToken` parameters. For more details on timeouts with Azure Cosmos DB, see the [request timeout troubleshooting guide](troubleshoot-dot-net-sdk-request-timeout.md). |
+|<input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK will not retry writes on transient failures because writes are not idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on, see the [retry guidance](troubleshoot-dot-net-sdk.md#retry-logics). |
+|<input type="checkbox"/> | Caching database/collection names | Retrieve the names of your databases and containers from configuration or cache them on start. Calls like `ReadDatabaseAsync` or `ReadDocumentCollectionAsync` and `CreateDatabaseQuery` or `CreateDocumentCollectionQuery` will result in metadata calls to the service, which consume from the system-reserved RU limit. `CreateIfNotExist` should also only be used once for setting up the database. Overall, these operations should be performed infrequently. |
+|<input type="checkbox"/> | Bulk Support | In scenarios where you may not need to optimize for latency, we recommend enabling [Bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) for dumping large volumes of data. |
+| <input type="checkbox"/> | Parallel Queries | The Cosmos DB SDK supports [running queries in parallel](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage) for better latency and throughput on your queries. We recommend setting the `MaxConcurrency` property within `QueryRequestOptions` to the number of partitions you have. If you aren't aware of the number of partitions, start by using `int.MaxValue`, which gives you the best latency; then decrease the number until it fits the resource restrictions of the environment, to avoid high CPU issues. Also, set `MaxBufferedItemCount` to the expected number of results returned, to limit the number of pre-fetched results. |
+| <input type="checkbox"/> | Performance Testing Backoffs | When performing testing on your application, you should implement backoffs at [`RetryAfter`](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage) intervals. Respecting the backoff helps ensure that you'll spend a minimal amount of time waiting between retries. |
+| <input type="checkbox"/> | Indexing | The Azure Cosmos DB indexing policy also allows you to specify which document paths to include or exclude from indexing by using indexing paths (IndexingPolicy.IncludedPaths and IndexingPolicy.ExcludedPaths). Ensure that you exclude unused paths from indexing for faster writes. For a sample of how to create indexes using the SDK, see the [indexing policy guidance](performance-tips-dotnet-sdk-v3-sql.md#indexing-policy). |
+| <input type="checkbox"/> | Document Size | The request charge of a specified operation correlates directly to the size of the document. We recommend reducing the size of your documents as operations on large documents cost more than operations on smaller documents. |
+| <input type="checkbox"/> | Increase the number of threads/tasks | Because calls to Azure Cosmos DB are made over the network, you might need to vary the degree of concurrency of your requests so that the client application spends minimal time waiting between requests. For example, if you're using the [.NET Task Parallel Library](/dotnet/standard/parallel-programming/task-parallel-library-tpl), create on the order of hundreds of tasks that read from or write to Azure Cosmos DB. |
+| <input type="checkbox"/> | Enabling Query Metrics | For additional logging of your backend query executions, you can enable SQL Query Metrics using our .NET SDK. For instructions on how to collect SQL Query Metrics, see [how to profile SQL API queries](profile-sql-api-query.md). |
+| <input type="checkbox"/> | SDK Logging | Use SDK logging to capture additional diagnostics information and troubleshoot latency issues. Log the [diagnostics string](/dotnet/api/microsoft.azure.documents.client.resourceresponsebase.requestdiagnosticsstring?view=azure-dotnet&preserve-view=true) in the V2 SDK or [`Diagnostics`](/dotnet/api/microsoft.azure.cosmos.responsemessage.diagnostics?view=azure-dotnet&preserve-view=true) in the V3 SDK for more detailed Cosmos DB diagnostic information for the current request to the service. As an example use case, capture `Diagnostics` on any exception and on completed operations if the `Diagnostics.ElapsedTime` is greater than a designated threshold value (for example, if you have an SLA of 10 seconds, capture diagnostics when `ElapsedTime` > 10 seconds). It's advised to only use these diagnostics during performance testing. |
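As an illustrative sketch of how several checklist items above map onto v3 SDK client configuration: the following sets preferred regions, Direct mode, idle-connection and port-reuse settings, throttle retries, and bulk support in one `CosmosClientOptions`. The endpoint, key, region list, and specific values are placeholder assumptions, not recommendations for every workload.

```csharp
using System;
using Microsoft.Azure.Cosmos;

// Endpoint, key, and regions below are placeholders for illustration only.
CosmosClientOptions options = new CosmosClientOptions
{
    // Availability and Failovers: read from the closest regions first.
    ApplicationPreferredRegions = new[] { Regions.WestUS2, Regions.EastUS2 },
    // Connectivity Modes: Direct mode for best performance.
    ConnectionMode = ConnectionMode.Direct,
    // Ephemeral Port Exhaustion: close connections idle for 20 minutes
    // and reuse a small pool of ephemeral ports per endpoint.
    IdleTcpConnectionTimeout = TimeSpan.FromMinutes(20),
    PortReuseMode = PortReuseMode.PrivatePortPool,
    // Retry Logic: built-in retries for throttled (429) requests.
    MaxRetryAttemptsOnRateLimitedRequests = 9,
    MaxRetryWaitTimeOnRateLimitedRequests = TimeSpan.FromSeconds(30),
    // Bulk Support: enable when dumping large volumes of data.
    AllowBulkExecution = true
};

// Create a single client instance and reuse it for the application's lifetime.
CosmosClient client = new CosmosClient(
    "https://your-account.documents.azure.com:443/",
    "<account-key>",
    options);
```

Creating one `CosmosClient` at startup and caching it (along with `Database`/`Container` references) also avoids the repeated metadata calls described in the "Caching database/collection names" row.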
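A minimal sketch combining the parallel-query, end-to-end-timeout, and diagnostics-capture rows. The container reference, query text, concurrency value, and the 10-second SLA threshold are hypothetical; tune them to your partition count and latency goals.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class QuerySketch
{
    // 'container' is assumed to be a Container reference cached at startup.
    public static async Task QueryWithTimeoutAsync(Container container)
    {
        var queryOptions = new QueryRequestOptions
        {
            MaxConcurrency = 8,         // ideally the number of physical partitions
            MaxBufferedItemCount = 100  // cap the number of pre-fetched results
        };

        // End-to-end timeout: cancel client-side work after 10 seconds.
        using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));

        using FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
            "SELECT * FROM c",
            requestOptions: queryOptions);

        while (iterator.HasMoreResults)
        {
            FeedResponse<dynamic> page = await iterator.ReadNextAsync(cts.Token);

            // Capture diagnostics only when a request exceeds the SLA threshold.
            if (page.Diagnostics.GetClientElapsedTime() > TimeSpan.FromSeconds(10))
            {
                Console.WriteLine(page.Diagnostics.ToString());
            }
        }
    }
}
```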
## Best practices when using Gateway mode

Increase `System.Net MaxConnections` per host when you use Gateway mode. Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for `ServicePointManager.DefaultConnectionLimit` is 50. To change the value, you can set `Documents.Client.ConnectionPolicy.MaxConnectionLimit` to a higher value.
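The Gateway-mode guidance above can be sketched as follows for the V2 SDK; the endpoint and key are placeholders, and 1,000 is just an example within the suggested 100–1,000 range.

```csharp
using System;
using Microsoft.Azure.Documents.Client;

// V2 SDK, Gateway mode: raise the per-host connection limit (default is 50).
var connectionPolicy = new ConnectionPolicy
{
    ConnectionMode = ConnectionMode.Gateway,
    MaxConnectionLimit = 1000
};

var client = new DocumentClient(
    new Uri("https://your-account.documents.azure.com:443/"), // placeholder endpoint
    "<account-key>",
    connectionPolicy);

// Alternatively, raise the .NET-wide default before creating any clients:
// System.Net.ServicePointManager.DefaultConnectionLimit = 1000;
```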
cosmos-db Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/data-residency.md
# How to meet data residency requirements in Azure Cosmos DB
-In Azure Cosmos DB, you can configure your data and backups to remain in a single region to meet the[ residency requirements.](https://azure.microsoft.com/en-us/global-infrastructure/data-residency/)
+In Azure Cosmos DB, you can configure your data and backups to remain in a single region to meet the[ residency requirements.](https://azure.microsoft.com/global-infrastructure/data-residency/)
## Residency requirements for data
cosmos-db Import Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/import-data.md
Title: 'Tutorial: Database migration tool for Azure Cosmos DB' description: 'Tutorial: Learn how to use the open-source Azure Cosmos DB data migration tools to import data to Azure Cosmos DB from various sources including MongoDB, SQL Server, Table storage, Amazon DynamoDB, CSV, and JSON files. CSV to JSON conversion.'-+
While the import tool includes a graphical user interface (dtui.exe), it can als
## <a id="Install"></a>Installation
-The migration tool source code is available on GitHub in [this repository](https://github.com/azure/azure-documentdb-datamigrationtool). You can download and compile the solution locally then run either:
+### Download executable package
-* **Dtui.exe**: Graphical interface version of the tool
-* **Dt.exe**: Command-line version of the tool
+ * Download a zip of the latest signed **dt.exe** and **dtui.exe** Windows binaries [here](https://github.com/Azure/azure-documentdb-datamigrationtool/releases/tag/1.8.3)
+ * Unzip into any directory on your computer and open the extracted directory to find the binaries
+
+### Build from source
+
+ The migration tool source code is available on GitHub in [this repository](https://github.com/azure/azure-documentdb-datamigrationtool). You can download and compile the solution locally then run either:
+
+ * **Dtui.exe**: Graphical interface version of the tool
+ * **Dt.exe**: Command-line version of the tool
## Select data source
cosmos-db Upgrade Mongodb Version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/upgrade-mongodb-version.md
If you are upgrading from version 3.2, you will need to replace the existing end
:::image type="content" source="./media/upgrade-mongodb-version/select-upgrade.png" alt-text="Review upgrade guidance and select upgrade." border="true":::
-1. After you start the upgrade, the **Feature** menu is greyed out and the status is set to *Pending*. The upgrade takes around 15 minutes to complete. This process will not affect the existing functionality or operations of your database account. After it's complete, the **Update MongoDB server version** status will show the upgraded version. Please [contact support](https://azure.microsoft.com/en-us/support/create-ticket/) if there was an issue processing your request.
+1. After you start the upgrade, the **Feature** menu is greyed out and the status is set to *Pending*. The upgrade takes around 15 minutes to complete. This process will not affect the existing functionality or operations of your database account. After it's complete, the **Update MongoDB server version** status will show the upgraded version. Please [contact support](https://azure.microsoft.com/support/create-ticket/) if there was an issue processing your request.
1. The following are some considerations after upgrading your account:
cosmos-db Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Cosmos DB description: Sample Azure Resource Graph queries for Azure Cosmos DB showing use of resource types and tables to access Azure Cosmos DB related resources and properties. Previously updated : 08/04/2021 Last updated : 08/09/2021
cosmos-db Sql Api Sdk Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-node.md
|Download SDK | [NPM](https://www.npmjs.com/package/@azure/cosmos) |API Documentation | [JavaScript SDK reference documentation](/javascript/api/%40azure/cosmos/) |SDK installation instructions | [Installation instructions](https://github.com/Azure/azure-sdk-for-js)
-|Contribute to SDK | [GitHub](https://github.com/Azure/azure-cosmos-js/tree/master)
+|Contribute to SDK | [GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main)
| Samples | [Node.js code samples](sql-api-nodejs-samples.md) | Getting started tutorial | [Get started with the JavaScript SDK](sql-api-nodejs-get-started.md) | Web app tutorial | [Build a Node.js web application using Azure Cosmos DB](sql-api-nodejs-application.md)
Microsoft provides notification at least **12 months** in advance of retiring an
[!INCLUDE [cosmos-db-sdk-faq](includes/cosmos-db-sdk-faq.md)]

## See also
-To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
+To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Troubleshoot Dot Net Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-dot-net-sdk-slow-request.md
Consider the following when developing your application:
* Avoid High CPU. Make sure to look at Max CPU and not average, which is the default for most logging systems. Anything above roughly 40% can increase the latency.
-## Capture the diagnostics
+## <a name="capture-diagnostics"></a>Capture the diagnostics
All the responses in the SDK including `CosmosException` have a Diagnostics property. This property records all the information related to the single request including if there were retries or any transient failures.
if (response.Diagnostics.GetClientElapsedTime() > ConfigurableSlowRequestTimeSpa
## Diagnostics in version 3.19 and higher

The JSON structure has breaking changes with each version of the SDK, which makes it unsafe to parse. The JSON represents a tree structure of the request going through the SDK. This covers a few key things to look at:
-### CPU history
+### <a name="cpu-history"></a>CPU history
High CPU utilization is the most common cause of slow requests. For optimal latency, CPU usage should be roughly 40 percent. Use 10 seconds as the interval to monitor maximum (not average) CPU utilization. CPU spikes are more common with cross-partition queries, where the requests might use multiple connections for a single query. If the error contains `TransportException` information, it might also contain `CPU History`:
CPU count: 8)
The client application that uses the SDK should be scaled up or out.
-### HttpResponseStats
+### <a name="httpResponseStats"></a>HttpResponseStats
HttpResponseStats are requests going to the [gateway](sql-sdk-connection-modes.md). Even in Direct mode, the SDK gets all the metadata information from the gateway. If the request is slow, first verify that none of the suggestions above resolve the issue.
Single store result for a single request
] ```
-### StoreResult
+### <a name="storeResult"></a>StoreResult
StoreResult represents a single request to Azure Cosmos DB using Direct mode with TCP protocol. If it's still slow, different patterns point to different issues:
Single store result for a single request
| SLA Violated | StorePhysicalAddress values have the same partition ID but different replica IDs, with no failure status code | Likely an issue with the Cosmos DB service |
| SLA Violated | StorePhysicalAddress values are random, with no failure status code | Points to an issue with the machine |
-RntbdRequestStats show the time for the different stages of sending and receiving a request.
-
-* ChannelAcquisitionStarted: The time to get or create a new connection. New connections can be created for numerous different regions. For example, a connection was unexpectedly closed or too many requests were getting sent through the existing connections so a new connection is being created.
-* Pipelined time is large points to possibly a large request.
-* Transit time is large, which leads to a networking issue. Compare this number to the `BELatencyInMs`. If the BELatencyInMs is small, then the time was spent on the network and not on the Azure Cosmos DB service.
- Multiple StoreResults for single request: * Strong and bounded staleness consistency will always have at least two store results * Check the status code of each StoreResult. The SDK retries automatically on multiple different [transient failures](troubleshoot-dot-net-sdk-request-timeout.md). The SDK is constantly being improved to cover more scenarios.
+### <a name="rntbdRequestStats"></a>RntbdRequestStats
+Shows the time spent in the different stages of sending and receiving a request in the transport layer.
+
+* ChannelAcquisitionStarted: The time to get or create a new connection. New connections can be created for numerous reasons. For example, a connection was unexpectedly closed, or too many requests were getting sent through the existing connections, so a new connection is being created.
+* Pipelined: A large pipelined time points to a possibly large request.
+* Transit time: A large transit time points to a networking issue. Compare this number to the `BELatencyInMs`. If `BELatencyInMs` is small, the time was spent on the network and not on the Azure Cosmos DB service.
+* Received: A large received time points to a thread starvation issue. This is the time between having the response and returning the result.
+ ```json
+ "StoreResult": {
+    "ActivityId": "a3d325c1-f4e9-405b-820c-bab4d329ee4c",
Multiple StoreResults for single request:
} ``` - ### Failure rate violates the Azure Cosmos DB SLA Contact [Azure Support](https://aka.ms/azure-support).
cost-management-billing Mca Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/understand/mca-faq.md
- Title: Microsoft Customer Agreement FAQ - Azure
-description: Get answers to frequently asked questions about signing the Microsoft Customer Agreement.
--
-tags: billing
Previously updated : 07/26/2021------
-# Microsoft Customer Agreement frequently asked questions (FAQ)
-
-This article provides answers to frequently asked questions about the Microsoft Customer Agreement.
-
-## Can I sign the Microsoft Customer Agreement today?
-- **If you work through a Microsoft seller**, the Microsoft Customer Agreement is currently available in Argentina, Australia, Austria, Canada, Chile, Finland, France, Germany, Greece, Iceland, Ireland, Italy, Luxembourg, Netherlands, Norway, Portugal, Puerto Rico, South Africa, Spain, Sweden, Switzerland, United Kingdom, United States, and Uruguay.
-- **If you are a new customer purchasing Azure services directly from Azure.com**, today the Microsoft Customer Agreement is available in the countries below, all of which are transacted in US dollars (USD). In 2021, we began the rollout for new customers in other regions with the goal of making the Microsoft Customer Agreement available worldwide.
-- The Microsoft Customer Agreement is also available through Cloud Solution Providers (CSP) around the world. You can find a partner in the CSP program [**here**](https://www.microsoft.com/solution-providers/home).
-
- | Countries/regions | Countries/regions | Countries/regions | Countries/regions |
- |-|||--|
 | Afghanistan | Côte d'Ivoire | Moldova | Sri Lanka |
- | Albania | Curaçao | Mongolia | Tajikistan |
- | Algeria | Dominican Republic | Montenegro | Tanzania |
- | Angola | Egypt | Morocco | Thailand |
- | Armenia | Ethiopia | Namibia | Trinidad and Tobago |
- | Azerbaijan | Georgia | Nepal | Tunisia |
- | Bahrain | Ghana | Nicaragua | Turkmenistan |
- | Bangladesh | Guatemala | Nigeria | Virgin Islands of the United States |
- | Barbados | Honduras | Oman | Uganda |
- | Belarus | Iraq | Pakistan | Ukraine |
- | Belize | Israel | Palestinian Authority | United Arab Emirates |
- | Bermuda | Jamaica | Panama | United States |
- | Bolivia | Jordan | Paraguay | Uruguay |
- | Bosnia and Herzegovina | Kazakhstan | Peru | Uzbekistan |
- | Botswana | Kenya | Philippines | Venezuela |
- | Brunei Darussalam | Kuwait | Puerto Rico | Vietnam |
- | Cameroon | Kyrgyzstan | Qatar | Yemen |
- | Cabo Verde | Lebanon | Rwanda | Zambia |
- | Cayman Islands | Libya | Saint Kitts and Nevis | Zimbabwe |
- | Chile | Macao | Senegal | Ecuador |
- | Colombia | Macedonia (FYRO) | Serbia | El Salvador |
- | Costa Rica | Mauritius | Singapore | |
-
-## What if I am an existing customer purchasing Azure directly from Azure.com?
-
-Microsoft will share a notice directly with you at least 30 days before the transition to the Microsoft Customer Agreement.
-
-## Which Azure Services are available through the Microsoft Customer Agreement?
-
-All Azure services are available through Microsoft Customer Agreement. Once you accept the Microsoft Customer Agreement, you get the benefit of more free enterprise-grade management tools including new invoice and cost management capabilities through the same portal where you manage your Azure services.
-
-## How is Azure priced under the Microsoft Customer Agreement?
-
-Azure is priced in US dollars (USD) worldwide under the Microsoft Customer Agreement.
-
-If you transact in one of the other supported currencies listed below, your monthly cost is first calculated in US dollars (USD). For payment, the total is then converted to the local currency.
-
-| Code | Currency |
-||--|
-| AUD | Australian Dollar |
-| BRL | Brazilian Real |
-| GBP | British Pound |
-| CAD | Canadian Dollar |
-| CNY | Chinese Yuan |
-| DKK | Danish Krone |
-| EUR | Euro |
-| INR | Indian Rupee |
-| JPY | Japanese Yen |
-| KRW | Korean Won |
-| NZD | New Zealand Dollar |
-| NOK | Norwegian Krone |
-| RUB | Russian Ruble |
-| SEK | Swedish Krona |
-| CHF | Swiss Franc |
-| TWD | Taiwan Dollar |
-
-## What exchange rate will be used and how does it work with my bill?
-
-We use Thomson Reuters benchmark rates that are captured at the end of the previous month and go into effect on the first day of the next calendar month. This rate applies to all transactions during the upcoming month.
-
-For example, the exchange rate for January transactions is first captured in the final days of December. This rate will be applied to all Azure purchases made in January and all Azure consumption in January. The January exchange rate and local currency billed amount will appear in the January invoice, which is available at the beginning of February.
-
-## Under the Microsoft Customer Agreement, in which currency will my payment be processed?
-
-The legal address you provide at the time of signing will determine your billing geography and currency based on the table below:
-
-| **Country/Region** | **Billing currency** |
-|-|-|
-| Afghanistan | US Dollar (\$) |
-| Albania | US Dollar (\$) |
-| Algeria | US Dollar (\$) |
-| Angola | US Dollar (\$) |
-| Argentina | US Dollar (\$) |
-| Armenia | US Dollar (\$) |
-| Australia | Australian Dollar (\$) |
-| Austria | Euro (€) |
-| Azerbaijan | US Dollar (\$) |
-| Bahamas | US Dollar (\$) |
-| Bahrain | US Dollar (\$) |
-| Bangladesh | US Dollar (\$) |
-| Barbados | US Dollar (\$) |
-| Belarus | US Dollar (\$) |
-| Belgium | Euro (€) |
-| Belize | US Dollar (\$) |
-| Bermuda | US Dollar (\$) |
-| Bolivia | US Dollar (\$) |
-| Bosnia and Herzegovina | US Dollar (\$) |
-| Botswana | US Dollar (\$) |
-| Brazil | Brazilian Real (R\$) |
-| Brunei Darussalam | US Dollar (\$) |
-| Bulgaria | Euro (€) |
-| Republic of Cabo Verde | US Dollar (\$) |
-| Cameroon | US Dollar (\$) |
-| Canada | Canadian Dollar (\$) |
-| Cayman Islands | US Dollar (\$) |
-| Chile | US Dollar (\$) |
-| China | Chinese yuan (¥) |
-| Colombia | US Dollar (\$) |
-| Republic of the Congo | US Dollar (\$) |
-| Costa Rica | US Dollar (\$) |
-| Côte d'Ivoire | US Dollar (\$) |
-| Croatia | Euro (€) |
-| Curaçao | US Dollar (\$) |
-| Cyprus | Euro (€) |
-| Czech Republic | Euro (€) |
-| Denmark | Danish Krone (kr) |
-| Dominican Republic | US Dollar (\$) |
-| Ecuador | US Dollar (\$) |
-| Egypt | US Dollar (\$) |
-| El Salvador | US Dollar (\$) |
-| Estonia | Euro (€) |
-| Ethiopia | US Dollar (\$) |
-| Faroe Islands | Danish Krone (kr) |
-| Fiji | Australian Dollar (\$) |
-| Finland | Euro (€) |
-| France | Euro (€) |
-| Georgia | US Dollar (\$) |
-| Germany | Euro (€) |
-| Ghana | US Dollar (\$) |
-| Greece | Euro (€) |
-| Guatemala | US Dollar (\$) |
-| Honduras | US Dollar (\$) |
-| Hong Kong | US Dollar (\$) |
-| Hungary | Euro (€) |
-| Iceland | Euro (€) |
-| India | Indian Rupee (₹) |
-| Indonesia | US Dollar (\$) |
-| Iraq | US Dollar (\$) |
-| Ireland | Euro (€) |
-| Israel | US Dollar (\$) |
-| Italy | Euro (€) |
-| Jamaica | US Dollar (\$) |
-| Japan | Japanese Yen (¥) |
-| Jordan | US Dollar (\$) |
-| Kazakhstan | US Dollar (\$) |
-| Kenya | US Dollar (\$) |
-| Korea | Korean Won (₩) |
-| Kuwait | US Dollar (\$) |
-| Kyrgyzstan | US Dollar (\$) |
-| Latvia | Euro (€) |
-| Lebanon | US Dollar (\$) |
-| Libya | US Dollar (\$) |
-| Liechtenstein | Swiss Franc (CHF) |
-| Lithuania | Euro (€) |
-| Luxembourg | Euro (€) |
-| Macao | US Dollar (\$) |
-| Macedonia (FYRO) | US Dollar (\$) |
-| Malaysia | US Dollar (\$) |
-| Malta | Euro (€) |
-| Mauritius | US Dollar (\$) |
-| Mexico | US Dollar (\$) |
-| Moldova | US Dollar (\$) |
-| Monaco | Euro (€) |
-| Mongolia | US Dollar (\$) |
-| Montenegro | US Dollar (\$) |
-| Morocco | US Dollar (\$) |
-| Namibia | US Dollar (\$) |
-| Nepal | US Dollar (\$) |
-| Netherlands | Euro (Γé¼) |
-| New Zealand | New Zealand Dollar (\$) |
-| Nicaragua | US Dollar (\$) |
-| Nigeria | US Dollar (\$) |
-| Norway | Norwegian Krone (kr) |
-| Oman | US Dollar (\$) |
-| Pakistan | US Dollar (\$) |
-| Palestinian Authority | US Dollar (\$) |
-| Panama | US Dollar (\$) |
-| Paraguay | US Dollar (\$) |
-| Peru | US Dollar (\$) |
-| Philippines | US Dollar (\$) |
-| Poland | Euro (€) |
-| Portugal | Euro (€) |
-| Puerto Rico | US Dollar (\$) |
-| Qatar | US Dollar (\$) |
-| Romania | Euro (€) |
-| Russia | Russian Ruble (руб) |
-| Rwanda | US Dollar (\$) |
-| Saint Kitts and Nevis | US Dollar (\$) |
-| Saudi Arabia | US Dollar (\$) |
-| Senegal | US Dollar (\$) |
-| Serbia | US Dollar (\$) |
-| Singapore | US Dollar (\$) |
-| Slovakia | Euro (€) |
-| Slovenia | Euro (€) |
-| South Africa | US Dollar (\$) |
-| Spain | Euro (€) |
-| Sri Lanka | US Dollar (\$) |
-| Sweden | Swedish Krona (kr) |
-| Switzerland | Swiss Franc (CHF) |
-| Taiwan | Taiwanese Dollar (NT\$) |
-| Tajikistan | US Dollar (\$) |
-| Tanzania | US Dollar (\$) |
-| Thailand | US Dollar (\$) |
-| Trinidad and Tobago | US Dollar (\$) |
-| Tunisia | US Dollar (\$) |
-| Turkey | US Dollar (\$) |
-| Turkmenistan | US Dollar (\$) |
-| Uganda | US Dollar (\$) |
-| Ukraine | US Dollar (\$) |
-| United Arab Emirates | US Dollar (\$) |
-| United Kingdom | British Pound (£) |
-| United States | US Dollar (\$) |
-| Uruguay | US Dollar (\$) |
-| Uzbekistan | US Dollar (\$) |
-| Venezuela | US Dollar (\$) |
-| Vietnam | US Dollar (\$) |
-| Virgin Islands of the United States | US Dollar (\$) |
-| Yemen | US Dollar (\$) |
-| Zambia | US Dollar (\$) |
-| Zimbabwe | US Dollar (\$) |
-
-If you purchase Azure services through a Microsoft partner, contact them for questions regarding your payment currency.
-
-## How will I get my invoice?
-- If you're buying directly from Azure.com, you will find your invoice details in the **Cost Management and Billing** service in the [Azure portal](https://portal.azure.com).
-- If you are buying through a Microsoft sales representative, you will receive your invoices from Microsoft. You can also access your invoices in the **Cost Management and Billing** service in the [Azure portal](https://portal.azure.com).
-- If you are purchasing through a cloud solution provider under the Microsoft Customer Agreement, you will receive your invoice from the partner.
-
-## Where can I review the Microsoft Customer Agreement?
-
-You can review the Microsoft Customer Agreement at [Microsoft Licensing](https://www.microsoft.com/licensing/docs/customeragreement).
-
-## How do these changes affect my organization if we already have a Microsoft Enterprise Agreement?
-
-Azure usage and purchases made under an existing Enterprise Agreement are not affected by this change.
data-factory Tutorial Managed Virtual Network On Premise Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-managed-virtual-network-on-premise-sql-server.md
data factory from the resources list.
4. Select + **New** under **Managed private endpoints**.
5. Select the **Private Link Service** tile from the list and select **Continue**.
6. Enter the name of the private endpoint and select **myPrivateLinkService** in the private link service list.
-7. Add FQDN of your target on-premises SQL Server and NAT IPs of your private link Service.
-
- :::image type="content" source="./media/tutorial-managed-virtual-network/link-service-nat-ip.png" alt-text="Screenshot that shows the NAT IP in the linked service." lightbox="./media/tutorial-managed-virtual-network/link-service-nat-ip-expanded.png":::
+7. Add FQDN of your target on-premises SQL Server.
- :::image type="content" source="./media/tutorial-managed-virtual-network/private-endpoint.png" alt-text="Screenshot that shows the private endpoint settings.":::
+ :::image type="content" source="./media/tutorial-managed-virtual-network/private-endpoint-6.png" alt-text="Screenshot that shows the private endpoint settings.":::
8. Create private endpoint.
data-factory Tutorial Managed Virtual Network Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-managed-virtual-network-sql-managed-instance.md
data factory from the resources list.
5. Select the **Private Link Service** tile from the list and select **Continue**.
6. Enter the name of the private endpoint and select **myPrivateLinkService** in the private link service list.
-7. Add FQDN of your target SQL Managed Instance and NAT IPs of your private link Service.
+7. Add FQDN of your target SQL Managed Instance.
:::image type="content" source="./media/tutorial-managed-virtual-network/sql-mi-host.png" alt-text="Screenshot that shows SQL MI host." lightbox="./media/tutorial-managed-virtual-network/sql-mi-host-expanded.png":::
- :::image type="content" source="./media/tutorial-managed-virtual-network/link-service-nat-ip.png" alt-text="Screenshot that shows the NAT IP in the linked service." lightbox="./media/tutorial-managed-virtual-network/link-service-nat-ip-expanded.png":::
- :::image type="content" source="./media/tutorial-managed-virtual-network/private-endpoint-2.png" alt-text="Screenshot that shows the private endpoint settings.":::
+ :::image type="content" source="./media/tutorial-managed-virtual-network/private-endpoint-5.png" alt-text="Screenshot that shows the private endpoint settings.":::
8. Create private endpoint.
databox-online Azure Stack Edge Mini R Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy.md
Previously updated : 05/11/2021 Last updated : 08/12/2021 # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Mini R so I can use it to transfer data to Azure.
Follow these steps to configure the network for your device.
2. If a zero day update is needed, you can do that here by configuring a data port with a wired connection. For more instructions on how to set up a wired connection for this device, see [Cable your device](azure-stack-edge-mini-r-deploy-install.md#cable-the-device). After the update is over, you can remove the wired connection.
-3. Create certificates for Wi-Fi and signing chain. Both the signing chain and the Wi-Fi certificates must be DER format with a *.cer* file extension. For instructions, see [Create certificates](azure-stack-edge-gpu-manage-certificates.md).
+3. Create certificates for Wi-Fi and signing chain. Both the signing chain and the Wi-Fi certificates must be DER format with a *.cer* file extension. For instructions, see [Create certificates](azure-stack-edge-gpu-manage-certificates.md). This step is optional if you're using a Wi-Fi profile instead of certificates for authentication.
-4. In the local web UI, go to **Get started**. On the **Security** tile, select **Certificates** and then select **Configure**.
+ > [!NOTE]
+ > If you're using password-based authentication on your personal Wi-Fi network, you can skip the certificate steps. Just configure the Wi-Fi port and then upload your Wi-Fi profile.</br></br>
+ > To find out about Wi-Fi profiles for a WPA2 - Personal network and learn how to export your Wi-Fi profile, see [Use Wi-Fi profiles](azure-stack-edge-mini-r-use-wifi-profiles.md).
- [![Local web UI "Certificates" page](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/get-started-1.png)](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/get-started-1.png#lightbox)
+4. Add the certificates to your device:
+
+ 1. In the local web UI, go to **Get started**. On the **Security** tile, select **Certificates** and then select **Configure**.
+
+ [![Local web UI "Certificates" page](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/get-started-1.png)](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/get-started-1.png#lightbox)
1. Select **+ Add certificate**.
Follow these steps to configure the network for your device.
5. Go back to **Get started**.
-5. On the **Network** tile, select **Configure**.
+5. Configure the Wi-Fi port. On the **Network** tile, select **Configure**.
On your physical device, there are five network interfaces. PORT 1 and PORT 2 are 1-Gbps network interfaces. PORT 3 and PORT 4 are 10-Gbps network interfaces. The fifth port is the Wi-Fi port.
ddos-protection Ddos Rapid Response https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-rapid-response.md
You should only engage DRR if:
5. Complete additional details and submit the support request.
-DRR follows the Azure Rapid Response support model. Refer to [Support scope and responsiveness](https://azure.microsoft.com/en-us/support/plans/response/) for more information on Rapid Response.
+DRR follows the Azure Rapid Response support model. Refer to [Support scope and responsiveness](https://azure.microsoft.com/support/plans/response/) for more information on Rapid Response.
To learn more, read the [DDoS Protection Standard documentation](./ddos-protection-overview.md).
defender-for-iot How To Investigate Cis Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/how-to-investigate-cis-benchmark.md
Title: Investigate CIS benchmark recommendation description: Perform basic and advanced investigations based on OS baseline recommendations. Previously updated : 05/26/2021 Last updated : 08/11/2021
-# Investigate OS baseline (based on CIS benchmark) recommendation
+# Investigate OS baseline (based on CIS benchmark) recommendation
Perform basic and advanced investigations based on OS baseline recommendations.
To query your IoT security events in Log Analytics for recommendations:
1. Select **Show Operation system (OS) baseline rules details** from the **Recommendation details** quick view page to see the details of a specific device.
- :::image type="content" source="media/how-to-investigate-cis-benchmark/recommendation-details.png" alt-text="See the details of a specific device.":::
+ :::image type="content" source="media/how-to-investigate-cis-benchmark/recommendation-details.png" alt-text="See the details of a specific device.":::
To query your IoT security events in Log Analytics workspace directly:
To query your IoT security events in Log Analytics workspace directly:
:::image type="content" source="media/how-to-investigate-cis-benchmark/logs.png" alt-text="Select logs from the left side pane.":::
-1. Select **Investigate the alerts** or, select the **Investigate the alerts in Log Analytics** option from any security recommendation, or alert.
+1. Select **Investigate the alerts**, or select the **Investigate the alerts in Log Analytics** option from any security recommendation or alert.
-## Useful queries to investigate the OS baseline resources:
+## Useful queries to investigate the OS baseline resources
> [!Note]
-> Make sure to Replace `<device-id>` with the name(s) you gave your device in each of the following queries.
-
+> Make sure to replace `<device-id>` with the name(s) you gave your device in each of the following queries.
### Retrieve the latest information

-- **Device fleet failure**: Run the following query to retrieve the latest information about checks that failed across the device fleet:
-
- ```azurecli
- let lastDates = SecurityIoTRawEvent |
-
- where RawEventName == "OSBaseline" |
-
- summarize TimeStamp=max(TimeStamp) by DeviceId;
-
- lastDates | join kind=inner (SecurityIoTRawEvent) on TimeStamp, DeviceId |
-
- extend event = parse_json(EventDetails) |
-
- where event.Result == "FAIL" |
-
- project DeviceId, event.CceId, event.Description
+- **Device fleet failure**: Run the following query to retrieve the latest information about checks that failed across the device fleet:
+
+ ```kusto
+ let lastDates = SecurityIoTRawEvent |
+ where RawEventName == "Baseline" |
+ summarize TimeStamp=max(TimeStamp) by DeviceId;
+ lastDates | join kind=inner (SecurityIoTRawEvent) on TimeStamp, DeviceId |
+ extend event = parse_json(EventDetails) |
+ where event.BaselineCheckResult == "FAIL" |
+ project DeviceId, event.BaselineCheckId, event.BaselineCheckDescription
```
-
+ - **Specific device failure** - Run the following query to retrieve the latest information about checks that failed on a specific device:
- ```azurecli
- let LastEvents = SecurityIoTRawEvent |
-
- where RawEventName == "OSBaseline" |
-
- where DeviceId == "<device-id>" |
-
- top 1 by TimeStamp desc |
-
- project IoTRawEventId;
-
- LastEvents | join kind=leftouter SecurityIoTRawEvent on IoTRawEventId |
-
- extend event = parse_json(EventDetails) |
-
- where event.Result == "FAIL" |
-
- project DeviceId, event.CceId, event.Description
+ ```kusto
+ let id = SecurityIoTRawEvent |
+ extend IoTRawEventId = extractjson("$.EventId", EventDetails, typeof(string)) |
+ where TimeGenerated <= now() |
+ where RawEventName == "Baseline" |
+ where DeviceId == "<device-id>" |
+ summarize arg_max(TimeGenerated, IoTRawEventId) |
+ project IoTRawEventId;
+ SecurityIoTRawEvent |
+ extend IoTRawEventId = extractjson("$.EventId", EventDetails, typeof(string)), extraDetails = todynamic(EventDetails) |
+ where IoTRawEventId == toscalar(id) |
+ where extraDetails.BaselineCheckResult == "FAIL" |
+ project DeviceId, CceId = extraDetails.BaselineCheckId, Description = extraDetails.BaselineCheckDescription
    ```
-- **Specific device error** - Run this query to retrieve the latest information about checks that have an error on a specific device:
-
- ```azurecli
- let LastEvents = SecurityIoTRawEvent |
-
- where RawEventName == "OSBaseline" |
-
- where DeviceId == "<device-id>" |
-
- top 1 by TimeStamp desc |
-
- project IoTRawEventId;
-
- LastEvents | join kind=leftouter SecurityIoTRawEvent on IoTRawEventId |
-
- extend event = parse_json(EventDetails) |
-
- where event.Result == "ERROR" |
-
- project DeviceId, event.CceId, event.Description
+- **Specific device error** - Run this query to retrieve the latest information about checks that have an error on a specific device:
+
+ ```kusto
+ let id = SecurityIoTRawEvent |
+ extend IoTRawEventId = extractjson("$.EventId", EventDetails, typeof(string)) |
+ where TimeGenerated <= now() |
+ where RawEventName == "Baseline" |
+ where DeviceId == "<device-id>" |
+ summarize arg_max(TimeGenerated, IoTRawEventId) |
+ project IoTRawEventId;
+ SecurityIoTRawEvent |
+ extend IoTRawEventId = extractjson("$.EventId", EventDetails, typeof(string)), extraDetails = todynamic(EventDetails) |
+ where IoTRawEventId == toscalar(id) |
+ where extraDetails.BaselineCheckResult == "ERROR" |
+ project DeviceId, CceId = extraDetails.BaselineCheckId, Description = extraDetails.BaselineCheckDescription
```
-
+ - **Update device list for device fleet that failed a specific check** - Run this query to retrieve an updated list of devices (across the device fleet) that failed a specific check:
-
- ```azurecli
- let lastDates = SecurityIoTRawEvent |
-
- where RawEventName == "OSBaseline" |
-
- summarize TimeStamp=max(TimeStamp) by DeviceId;
-
- lastDates | join kind=inner (SecurityIoTRawEvent) on TimeStamp, DeviceId |
-
- extend event = parse_json(EventDetails) |
-
- where event.Result == "FAIL" |
-
- where event.CceId contains "6.2.8" |
-
- project DeviceId;
+
+ ```kusto
+ let lastDates = SecurityIoTRawEvent |
+ where RawEventName == "Baseline" |
+ summarize TimeStamp=max(TimeStamp) by DeviceId;
+ lastDates | join kind=inner (SecurityIoTRawEvent) on TimeStamp, DeviceId |
+ extend event = parse_json(EventDetails) |
+ where event.BaselineCheckResult == "FAIL" |
+ where event.BaselineCheckId contains "6.2.8" |
+ project DeviceId;
```
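The shape of these queries — take each device's newest baseline event, parse `EventDetails`, and keep the failures — can be sketched offline in Python against hypothetical sample rows (an illustration of the logic only, not Defender for IoT code):

```python
import json
from collections import defaultdict

# Hypothetical rows shaped like SecurityIoTRawEvent: DeviceId, TimeStamp,
# and an EventDetails JSON payload with the baseline check fields.
events = [
    {"DeviceId": "dev-1", "TimeStamp": 1, "EventDetails": json.dumps(
        {"BaselineCheckResult": "FAIL", "BaselineCheckId": "6.2.8",
         "BaselineCheckDescription": "old run"})},
    {"DeviceId": "dev-1", "TimeStamp": 2, "EventDetails": json.dumps(
        {"BaselineCheckResult": "FAIL", "BaselineCheckId": "6.2.9",
         "BaselineCheckDescription": "latest run"})},
    {"DeviceId": "dev-2", "TimeStamp": 1, "EventDetails": json.dumps(
        {"BaselineCheckResult": "PASS", "BaselineCheckId": "6.2.8",
         "BaselineCheckDescription": "ok"})},
]

def latest_failures(events):
    # summarize TimeStamp=max(TimeStamp) by DeviceId
    last = defaultdict(int)
    for e in events:
        last[e["DeviceId"]] = max(last[e["DeviceId"]], e["TimeStamp"])
    # join on (DeviceId, TimeStamp), parse_json(EventDetails), keep FAIL rows
    out = []
    for e in events:
        if e["TimeStamp"] != last[e["DeviceId"]]:
            continue
        details = json.loads(e["EventDetails"])
        if details["BaselineCheckResult"] == "FAIL":
            out.append((e["DeviceId"], details["BaselineCheckId"],
                        details["BaselineCheckDescription"]))
    return out
```

Only the newest event per device survives the filter, so stale failures from earlier baseline runs don't reappear in the results.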
-
+ ## Next steps

[Investigate security recommendations](quickstart-investigate-security-recommendations.md).
-
dms Tutorial Mysql Azure Mysql Offline Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-mysql-azure-mysql-offline-portal.md
You can use Azure Database Migration Service to perform a one-time full database
> [!IMPORTANT]
> For online migrations, you can use open-source tools such as [MyDumper/MyLoader](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md).

> [!NOTE]
> For a PowerShell-based scriptable version of this migration experience, see [scriptable offline migration to Azure Database for MySQL](./migrate-mysql-to-azure-mysql-powershell.md).
After the service is created, locate it within the Azure portal, open it, and th
![Create a new migration project](media/tutorial-mysql-to-azure-mysql-offline-portal/08-02-dms-portal-new-project.png)
-3. On the **New migration project** screen, specify a name for the project, in the **Source server type** selection box, select **MySQL**, in the **Target server type** selection box, select **Azure Database For MySQL** and in the **Migration activity type** selection box, select **Data migration \[preview\]**. Select **Create and run activity**.
+3. On the **New migration project** screen, specify a name for the project, in the **Source server type** selection box, select **MySQL**, in the **Target server type** selection box, select **Azure Database For MySQL** and in the **Migration activity type** selection box, select **Data migration**. Select **Create and run activity**.
![Create Database Migration Service Project](media/tutorial-mysql-to-azure-mysql-offline-portal/09-dms-portal-project-mysql-create.png)
event-grid Install K8s Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/install-k8s-extension.md
The operation that installs an Event Grid service instance on a Kubernetes clust
Before proceeding with the installation of Event Grid, make sure the following prerequisites are met.

1. A cluster running on one of the [supported Kubernetes distributions](#supported-kubernetes-distributions).
-1. [An Azure subscription](https://azure.microsoft.com/en-us/free/).
+1. [An Azure subscription](https://azure.microsoft.com/free/).
1. [PKI Certificates](#pki-certificate-requirements) to be used for establishing an HTTPS connection with the Event Grid broker.
1. [Connect your cluster to Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md).
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
Previously updated : 02/10/2021 Last updated : 08/12/2021 # ExpressRoute partners and peering locations
The following table shows connectivity locations and the service providers for e
| **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | 10G, 100G | CenturyLink Cloud Connect, Colt, eir, Equinix, GEANT, euNetworks, Interxion, Megaport |
| **Dublin2** | [Interxion DUB2](https://www.interxion.com/locations/europe/dublin) | 1 | North Europe | 10G, 100G | |
| **Frankfurt** | [Interxion FRA11](https://www.interxion.com/Locations/frankfurt/) | 1 | Germany West Central | 10G, 100G | AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GEANT, InterCloud, Interxion, Megaport, Orange, Telia Carrier, T-Systems |
-| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | 10G, 100G | Deutsche Telekom AG, Equinix |
+| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | 10G, 100G | Deutsche Telekom AG, Equinix, NTT Global DataCenters EMEA |
| **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | 10G, 100G | Colt, Equinix, Megaport, Swisscom |
| **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | 10G | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Colt, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon |
| **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | 10G | China Mobile International, China Telecom Global, iAdvantage, Megaport, PCCW Global Limited, SingTel |
The following table shows connectivity locations and the service providers for e
| **Kuala Lumpur** | [TIME dotCom Menara AIMS](https://www.time.com.my/enterprise/connectivity/direct-cloud) | 2 | n/a | n/a | TIME dotCom |
| **Las Vegas** | [Switch LV](https://www.switch.com/las-vegas) | 1 | n/a | 10G, 100G | CenturyLink Cloud Connect, Megaport, PacketFabric |
| **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | 10G, 100G | AT&T NetBond, British Telecom, CenturyLink, Colt, Equinix, euNetworks, InterCloud, Internet Solutions - Cloud Connect, Interxion, Jisc, Level 3 Communications, Megaport, MTN, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telehouse - KDDI, Telenor, Telia Carrier, Verizon, Vodafone, Zayo |
-| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | 10G, 100G | BICS, British Telecom, CenturyLink Cloud Connect, Colt, GTT, IX Reach, Equinix, JISC, Megaport, SES, Sohonet, Telehouse - KDDI |
+| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | 10G, 100G | BICS, British Telecom, CenturyLink Cloud Connect, Colt, GTT, IX Reach, Equinix, JISC, Megaport, NTT Global DataCenters EMEA, SES, Sohonet, Telehouse - KDDI |
| **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | n/a | 10G, 100G | CoreSite, Equinix*, Megaport, Neutrona Networks, NTT, Zayo</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Please create new circuits in Los Angeles2.* |
| **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | n/a | 10G, 100G | Equinix |
| **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | West Europe | 10G, 100G | Interxion, Megaport |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
Previously updated : 02/10/2021 Last updated : 08/12/2021
The following table shows locations by service provider. If you want to view ava
| **[NOS](https://www.nos.pt/empresas/corporate/cloud/cloud/Pages/nos-cloud-connect.aspx)** |Supported |Supported |Amsterdam2 |
| **[NTT Communications](https://www.ntt.com/en/services/network/virtual-private-network.html)** |Supported |Supported |Amsterdam, Hong Kong SAR, London, Los Angeles, Osaka, Singapore, Sydney, Tokyo, Washington DC |
| **[NTT EAST](https://business.ntt-east.co.jp/service/crossconnect/)** |Supported |Supported |Tokyo |
-| **[NTT Global DataCenters EMEA](https://hello.global.ntt/)** |Supported |Supported |Amsterdam2, Berlin |
+| **[NTT Global DataCenters EMEA](https://hello.global.ntt/)** |Supported |Supported |Amsterdam2, Berlin, Frankfurt, London2 |
| **[NTT SmartConnect](https://cloud.nttsmc.com/cxc/azure.html)** |Supported |Supported |Osaka |
| **[Ooredoo Cloud Connect](https://www.ooredoo.qa/portal/OoredooQatar/cloud-connect-expressroute)** |Supported |Supported |Marseille |
| **[Optus](https://www.optus.com.au/enterprise/)** |Supported |Supported |Melbourne, Sydney |
firewall Protect Windows Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/protect-windows-virtual-desktop.md
Title: Use Azure Firewall to protect Azure Virtual Desktop
-description: Learn how to use Azure Firewall to protect Windows Virtual Desktop deployments
+description: Learn how to use Azure Firewall to protect Azure Virtual Desktop deployments
Previously updated : 05/06/2020 Last updated : 08/09/2021 # Use Azure Firewall to protect Azure Virtual Desktop deployments
-Windows Virtual Desktop is a desktop and app virtualization service that runs on Azure. When an end user connects to a Windows Virtual Desktop environment, their session is run by a host pool. A host pool is a collection of Azure virtual machines that register to Windows Virtual Desktop as session hosts. These virtual machines run in your virtual network and are subject to the virtual network security controls. They need outbound Internet access to the Windows Virtual Desktop service to operate properly and might also need outbound Internet access for end users. Azure Firewall can help you lock down your environment and filter outbound traffic.
+Azure Virtual Desktop is a desktop and app virtualization service that runs on Azure. When an end user connects to an Azure Virtual Desktop environment, their session is run by a host pool. A host pool is a collection of Azure virtual machines that register to Azure Virtual Desktop as session hosts. These virtual machines run in your virtual network and are subject to the virtual network security controls. They need outbound Internet access to the Azure Virtual Desktop service to operate properly and might also need outbound Internet access for end users. Azure Firewall can help you lock down your environment and filter outbound traffic.
-[ ![Windows Virtual Desktop architecture](media/protect-windows-virtual-desktop/windows-virtual-desktop-architecture-diagram.png) ](media/protect-windows-virtual-desktop/windows-virtual-desktop-architecture-diagram.png#lightbox)
+[ ![Azure Virtual Desktop architecture](media/protect-windows-virtual-desktop/windows-virtual-desktop-architecture-diagram.png) ](media/protect-windows-virtual-desktop/windows-virtual-desktop-architecture-diagram.png#lightbox)
Follow the guidelines in this article to provide additional protection for your Azure Virtual Desktop host pool using Azure Firewall.
Follow the guidelines in this article to provide additional protection for your
- A deployed Azure Virtual Desktop environment and host pool.
+ - An Azure Firewall deployed with at least one Firewall Manager Policy
- For more information, see [Tutorial: Create a host pool by using the Azure Marketplace](../virtual-desktop/create-host-pools-azure-marketplace.md) and [Create a host pool with an Azure Resource Manager template](../virtual-desktop/virtual-desktop-fall-2019/create-host-pools-arm-template.md).
+ For more information, see [Tutorial: Create a host pool by using the Azure portal](../virtual-desktop/create-host-pools-azure-marketplace.md).
-To learn more about Windows Virtual Desktop environments see [Windows Virtual Desktop environment](../virtual-desktop/environment-setup.md).
+To learn more about Azure Virtual Desktop environments see [Azure Virtual Desktop environment](../virtual-desktop/environment-setup.md).
## Host pool outbound access to Azure Virtual Desktop
-The Azure virtual machines you create for Azure Virtual Desktop must have access to several Fully Qualified Domain Names (FQDNs) to function properly. Azure Firewall provides a Azure Virtual Desktop FQDN Tag to simplify this configuration. Use the following steps to allow outbound Azure Virtual Desktop platform traffic:
+The Azure virtual machines you create for Azure Virtual Desktop must have access to several Fully Qualified Domain Names (FQDNs) to function properly. Azure Firewall provides an Azure Virtual Desktop FQDN Tag to simplify this configuration. Use the following steps to allow outbound Azure Virtual Desktop platform traffic:
-- Deploy Azure Firewall and configure your Azure Virtual Desktop host pool subnet User Defined Route (UDR) to route default traffic (0.0.0.0/0) via the Azure Firewall. Your default route now points to the firewall.-- Create an application rule collection and add a rule to enable the *WindowsVirtualDesktop* FQDN tag. The source IP address range is the host pool virtual network, the protocol is **https**, and the destination is **WindowsVirtualDesktop**.
+You'll need to create an Azure Firewall policy with rule collections for network rules and application rules. Give each rule collection a priority and an allow or deny action.
-- The set of required storage and service bus accounts for your Azure Virtual Desktop host pool is deployment specific, so it isn't yet captured in the WindowsVirtualDesktop FQDN tag. You can address this in one of the following ways:
+### Create network rules
- - Allow https access from your host pool subnet to *xt.blob.core.windows.net, *eh.servicebus.windows.net. These wildcard FQDNs enable the required access, but are less restrictive.
- - Use the following log analytics query to list the exact required FQDNs after deployment of WVD hostpool, and then allow them explicitly in your firewall application rules:
- ```
- AzureDiagnostics
- | where Category == "AzureFirewallApplicationRule"
- | search "Deny"
- | search "gsm*eh.servicebus.windows.net" or "gsm*xt.blob.core.windows.net"
- | parse msg_s with Protocol " request from " SourceIP ":" SourcePort:int " to " FQDN ":" *
- | project TimeGenerated,Protocol,FQDN
- ```
+| Name | Source type | Source | Protocol | Destination ports | Destination type | Destination |
+| --- | --- | --- | --- | --- | --- | --- |
+| Rule Name | IP Address | VNet or Subnet IP Address | TCP | 80 | IP Address | 169.254.169.254, 168.63.129.16 |
+| Rule Name | IP Address | VNet or Subnet IP Address | TCP | 443 | Service Tag | AzureCloud, WindowsVirtualDesktop |
+| Rule Name | IP Address | VNet or Subnet IP Address | TCP, UDP | 53 | IP Address | * |
-- Create a network rule collection add the following rules:
- - Allow DNS – allow traffic from your ADDS private IP address to * for TCP and UDP ports 53.
- - Allow KMS – allow traffic from your Azure Virtual Desktop virtual machines to Windows Activation Service TCP port 1688. For more information about the destination IP addresses, see [Windows activation fails in forced tunneling scenario](/troubleshoot/azure/virtual-machines/custom-routes-enable-kms-activation#solution).
+### Create application rules
+
+| Name | Source type | Source | Protocol | TLS inspection (optional) | Destination type | Destination |
+| --- | --- | --- | --- | --- | --- | --- |
+| Rule Name | IP Address | VNet or Subnet IP Address | Https:443 | | FQDN Tag | WindowsVirtualDesktop, WindowsUpdate, WindowsDiagnostics, MicrosoftActiveProtectionService |
+| Rule Name | IP Address | VNet or Subnet IP Address | Https:1688 | | FQDN | kms.core.windows.net |
> [!NOTE]
-> Some deployments may not need DNS rules, for example Azure Active Directory Domain controllers forward DNS queries to Azure DNS at 168.63.129.16.
+> Some deployments might not need DNS rules. For example, Azure Active Directory Domain controllers forward DNS queries to Azure DNS at 168.63.129.16.
+
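If you automate the firewall configuration, the two rule tables above can first be captured as plain data (a sketch with placeholder rule names; the field names are illustrative, not an official Azure schema):

```python
# Sketch: the network and application rule collections as plain data
# structures, e.g. as input to a template or deployment script.
# Rule names and source addresses are placeholders.
network_rules = [
    {"name": "allow-metadata", "protocols": ["TCP"], "destination_ports": ["80"],
     "destinations": ["169.254.169.254", "168.63.129.16"]},
    {"name": "allow-azure", "protocols": ["TCP"], "destination_ports": ["443"],
     "destinations": ["AzureCloud", "WindowsVirtualDesktop"]},  # service tags
    {"name": "allow-dns", "protocols": ["TCP", "UDP"], "destination_ports": ["53"],
     "destinations": ["*"]},
]
application_rules = [
    {"name": "allow-avd-fqdn-tags", "protocols": ["https:443"],
     "fqdn_tags": ["WindowsVirtualDesktop", "WindowsUpdate",
                   "WindowsDiagnostics", "MicrosoftActiveProtectionService"]},
    {"name": "allow-kms", "protocols": ["https:1688"],
     "target_fqdns": ["kms.core.windows.net"]},
]
```

Keeping the rules as data makes it easy to review the allowed ports and destinations in one place before translating them into a policy.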
+## Host pool outbound access to the internet
-## Host pool outbound access to the Internet
+Depending on your organization's needs, you might want to enable secure outbound internet access for your end users. If the list of allowed destinations is well-defined (for example, for [Microsoft 365 access](/microsoft-365/enterprise/microsoft-365-ip-web-service)), you can use Azure Firewall application and network rules to configure the required access. This routes end-user traffic directly to the internet for best performance. If you need to allow network connectivity for Windows 365 or Intune, see [Network requirements for Windows 365](/windows-365/requirements-network#allow-network-connectivity) and [Network endpoints for Intune](/mem/intune/fundamentals/intune-endpoints).
-Depending on your organization needs, you may want to enable secure outbound Internet access for your end users. In cases where the list of allowed destinations is well-defined (for example, [Microsoft 365 access](/microsoft-365/enterprise/microsoft-365-ip-web-service)) you can use Azure Firewall application and network rules to configure the required access. This routes end-user traffic directly to the Internet for best performance.
+If you want to filter outbound user internet traffic by using an existing on-premises secure web gateway, you can configure web browsers or other applications running on the Azure Virtual Desktop host pool with an explicit proxy configuration. For example, see [How to use Microsoft Edge command-line options to configure proxy settings](/deployedge/edge-learnmore-cmdline-options-proxy-settings). These proxy settings only influence your end-user internet access, allowing the Azure Virtual Desktop platform outbound traffic directly via Azure Firewall.
-If you want to filter outbound user Internet traffic using an existing on-premises secure web gateway, you can configure web browsers or other applications running on the Azure Virtual Desktop host pool with an explicit proxy configuration. For example, see [How to use Microsoft Edge command-line options to configure proxy settings](/deployedge/edge-learnmore-cmdline-options-proxy-settings). These proxy settings only influence your end-user Internet access, allowing the Azure Virtual Desktop platform outbound traffic directly via Azure Firewall.
+## Control user access to the web
+
+Admins can allow or deny user access to different website categories. Add a rule to your application rule collection that allows or denies access from your source IP addresses to the web categories you choose. Review all the [web categories](web-categories.md).
## Additional considerations
-You may need to configure additional firewall rules, depending on your requirements:
+You might need to configure additional firewall rules, depending on your requirements:
- NTP server access
- By default, virtual machines running Windows connect to time.windows.com over UDP port 123 for time synchronization. Create a network rule to allow this access, or for a time server that you use in your environment.
-
+ By default, virtual machines running Windows connect to `time.windows.com` over UDP port 123 for time synchronization. Create a network rule to allow this access, or for a time server that you use in your environment.
## Next steps
frontdoor Front Door Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-overview.md
Key features included with Front Door:
## Pricing
-For pricing information, see [Front Door Pricing](https://azure.microsoft.com/pricing/details/frontdoor/). See [SLA for Azure Front Door](https://azure.microsoft.com/en-us/support/legal/sla/frontdoor/v1_0/).
+For pricing information, see [Front Door Pricing](https://azure.microsoft.com/pricing/details/frontdoor/). See [SLA for Azure Front Door](https://azure.microsoft.com/support/legal/sla/frontdoor/v1_0/).
## What's new?
governance Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/resource-graph-samples.md
+
+ Title: Azure Resource Graph sample queries for management groups
+description: Sample Azure Resource Graph queries for management groups showing use of resource types and tables to access management group details.
Last updated : 08/09/2021+++
+# Azure Resource Graph sample queries for management groups
+
+This page is a collection of [Azure Resource Graph](../resource-graph/overview.md) sample queries
+for management groups. For a complete list of Azure Resource Graph samples, see
+[Resource Graph samples by Category](../resource-graph/samples/samples-by-category.md) and
+[Resource Graph samples by Table](../resource-graph/samples/samples-by-table.md).
+
+## Sample queries
++
+## Next steps
+
+- Learn more about the [query language](../resource-graph/concepts/query-language.md).
+- Learn more about how to [explore resources](../resource-graph/concepts/explore-resources.md).
+- See samples of [Starter language queries](../resource-graph/samples/starter.md).
+- See samples of [Advanced language queries](../resource-graph/samples/advanced.md).
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/effects.md
The following operations are supported by Modify:
_Indexed_ unless the target resource is a resource group.

- Add or replace the value of managed identity type (`identity.type`) of virtual machines and virtual machine scale sets.
-- Add or replace the values of certain aliases (preview).
+- Add or replace the values of certain aliases.
- Use `Get-AzPolicyAlias | Select-Object -ExpandProperty 'Aliases' | Where-Object { $_.DefaultMetadata.Attributes -eq 'Modifiable' }` in Azure PowerShell **4.6.0** or higher to get a list of aliases that can be used with Modify.
governance Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Policy description: Sample Azure Resource Graph queries for Azure Policy showing use of resource types and tables to access Azure Policy related resources and properties. Previously updated : 08/04/2021 Last updated : 08/09/2021
governance Query Language https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/concepts/query-language.md
Title: Understand the query language description: Describes Resource Graph tables and the available Kusto data types, operators, and functions usable with Azure Resource Graph. Previously updated : 08/05/2021 Last updated : 08/11/2021 # Understanding the Azure Resource Graph query language
To support the "Open Query" portal experience, Azure Resource Graph Explorer has
## Query scope
-The scope of the subscriptions from which resources are returned by a query depend on the method of
-accessing Resource Graph. Azure CLI and Azure PowerShell populate the list of subscriptions to
-include in the request based on the context of the authorized user. The list of subscriptions can be
-manually defined for each with the **subscriptions** and **Subscription** parameters, respectively.
-In REST API and all other SDKs, the list of subscriptions to include resources from must be
-explicitly defined as part of the request.
-
-As a **preview**, REST API version `2020-04-01-preview` adds a property to scope the query to a
-[management group](../../management-groups/overview.md). This preview API also makes the
-subscription property optional. If a management group or a subscription list isn't defined, the
-query scope is all resources, which includes
-[Azure Lighthouse](../../../lighthouse/overview.md) delegated
-resources, that the authenticated user can access. The new `managementGroupId` property takes the
-management group ID, which is different from the name of the management group. When
-`managementGroupId` is specified, resources from the first 5,000 subscriptions in or under the
-specified management group hierarchy are included. `managementGroupId` can't be used at the same
-time as `subscriptions`.
+The scope of the subscriptions or [management groups](../../management-groups/overview.md) from
+which resources are returned by a query defaults to a list of subscriptions based on the context of
+the authorized user. If a management group or a subscription list isn't defined, the query scope is
+all resources, which includes [Azure Lighthouse](../../../lighthouse/overview.md) delegated
+resources.
+
+The list of subscriptions or management groups to query can be manually defined to change the scope
+of the results. For example, the REST API `managementGroups` property takes the management group ID,
+which is different from the name of the management group. When `managementGroups` is specified,
+resources from the first 5,000 subscriptions in or under the specified management group hierarchy
+are included. `managementGroups` can't be used at the same time as `subscriptions`.
Example: Query all resources within the hierarchy of the management group named 'My Management Group' with ID 'myMG'.
- REST API URI ```http
- POST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2020-04-01-preview
+ POST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01
``` - Request Body
```json { "query": "Resources | summarize count()",
- "managementGroupId": "myMG"
+ "managementGroups": ["myMG"]
} ```
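As a hedged sketch (not an SDK call) of assembling the request body shown above: the property names `query`, `managementGroups`, and `subscriptions` come from the doc, and the check mirrors the stated rule that `managementGroups` can't be used at the same time as `subscriptions`. The helper name is invented for illustration.

```python
import json

def build_query_body(query, management_groups=None, subscriptions=None):
    """Build a Resource Graph REST request body; scopes are mutually exclusive."""
    if management_groups and subscriptions:
        raise ValueError("managementGroups can't be used at the same time as subscriptions")
    body = {"query": query}
    if management_groups:
        # Takes management group IDs, not display names.
        body["managementGroups"] = management_groups
    if subscriptions:
        body["subscriptions"] = subscriptions
    return json.dumps(body)

print(build_query_body("Resources | summarize count()", management_groups=["myMG"]))
```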
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/reference/supported-tables-resources.md
Title: Supported Azure Resource Manager resource types description: Provide a list of the Azure Resource Manager resource types supported by Azure Resource Graph and Change History. Previously updated : 08/04/2021 Last updated : 08/09/2021
For sample queries for this table, see [Resource Graph sample queries for policy
For sample queries for this table, see [Resource Graph sample queries for resourcecontainers](../samples/samples-by-table.md#resourcecontainers). - microsoft.management/managementgroups
+ - Sample query: [Count of subscriptions per management group](../samples/samples-by-category.md#count-of-subscriptions-per-management-group)
+ - Sample query: [List all management group ancestors for a specified management group](../samples/samples-by-category.md#list-all-management-group-ancestors-for-a-specified-management-group)
- microsoft.resources/subscriptions (Subscriptions)
+ - Sample query: [Count of subscriptions per management group](../samples/samples-by-category.md#count-of-subscriptions-per-management-group)
- Sample query: [Key vaults with subscription name](../samples/samples-by-category.md#key-vaults-with-subscription-name)
+ - Sample query: [List all management group ancestors for a specified subscription](../samples/samples-by-category.md#list-all-management-group-ancestors-for-a-specified-subscription)
+ - Sample query: [List all subscriptions under a specified management group](../samples/samples-by-category.md#list-all-subscriptions-under-a-specified-management-group)
- Sample query: [Remove columns from results](../samples/samples-by-category.md#remove-columns-from-results) - Microsoft.Resources/subscriptions/resourceGroups (Resource groups) - Sample query: [Find storage accounts with a specific case-insensitive tag on the resource group](../samples/samples-by-category.md#find-storage-accounts-with-a-specific-case-insensitive-tag-on-the-resource-group)
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.compute/virtualmachines/runcommands - Microsoft.Compute/virtualMachineScaleSets (Virtual machine scale sets) - Sample query: [Get virtual machine scale set capacity and size](../samples/samples-by-category.md#get-virtual-machine-scale-set-capacity-and-size)-- microsoft.confidentialledger/ledgers
+- Microsoft.ConfidentialLedger/ledgers (Confidential Ledgers)
- Microsoft.Confluent/organizations (Confluent organizations) - Microsoft.ConnectedCache/cacheNodes (Connected Cache Resources) - Microsoft.ConnectedVehicle/platformAccounts (Connected Vehicle Platforms)
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.customproviders/resourceproviders - microsoft.d365customerinsights/instances - Microsoft.Dashboard/grafana (Grafana Workspaces)-- Microsoft.DataBox/jobs (Data Box)
+- Microsoft.DataBox/jobs (Azure Data Box)
- Microsoft.DataBoxEdge/dataBoxEdgeDevices (Azure Stack Edge / Data Box Gateway) - Microsoft.Databricks/workspaces (Azure Databricks Services) - Microsoft.DataCatalog/catalogs (Data Catalog)
For sample queries for this table, see [Resource Graph sample queries for resour
- Sample query: [List Azure Arc-enabled custom locations with VMware or SCVMM enabled](../samples/samples-by-category.md#list-azure-arc-enabled-custom-locations-with-vmware-or-scvmm-enabled) - microsoft.falcon/namespaces - Microsoft.Fidalgo/devcenters (Fidalgo DevCenters)
+- microsoft.fidalgo/machinedefinitions
- Microsoft.Fidalgo/projects (Fidalgo Projects) - Microsoft.Fidalgo/projects/environments (Fidalgo Environments)-- microsoft.fluidrelay/fluidrelayservers
+- Microsoft.FluidRelay/fluidRelayServers (FluidRelay Servers)
- microsoft.footprintmonitoring/profiles - microsoft.gaming/titles - Microsoft.Genomics/accounts (Genomics accounts)
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes - microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes/attacheddatanetworks - microsoft.mobilenetwork/sims-- Microsoft.MobileNetwork/sims/simProfiles (Sims)
+- microsoft.mobilenetwork/sims/simprofiles
- Microsoft.NetApp/netAppAccounts (NetApp accounts) - microsoft.netapp/netappaccounts/backuppolicies - Microsoft.NetApp/netAppAccounts/capacityPools (Capacity pools)
governance Samples By Category https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/samples/samples-by-category.md
Title: List of sample Azure Resource Graph queries by category description: List sample queries for Azure Resource Graph. Categories include Tags, Azure Advisor, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 08/04/2021 Last updated : 08/09/2021
Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature
[!INCLUDE [azure-resource-graph-samples-cat-general](../../../../includes/resource-graph/samples/bycat/general.md)]
+## Management groups
++ ## Networking [!INCLUDE [azure-resource-graph-samples-cat-networking](../../../../includes/resource-graph/samples/bycat/networking.md)]
governance Samples By Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/samples/samples-by-table.md
Title: List of sample Azure Resource Graph queries by table description: List sample queries for Azure Resource Graph. Tables include Resources, ResourceContainers, PolicyResources, and more. Previously updated : 08/04/2021 Last updated : 08/09/2021
hdinsight Create Cluster Error Dictionary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/create-cluster-error-dictionary.md
If you plan to use network security groups to control network traffic, take
### Error
-"The Managed Identity does not have permissions on the storage account. Please verify that 'Storage Blob Data Owner' role is assigned to the Managed Identity for the storage account. Storage: /subscriptions/ \<Subscription ID\> /resourceGroups/\< Resource Group Name\> /providers/Microsoft.Storage/storageAccounts/ \<Storage Account Name\>, Managed Identity: /subscriptions/ \<Subscription ID\> /resourceGroups/ /\< Resource Group Name\> /providers/Microsoft.ManagedIdentity/userAssignedIdentities/ \<User Managed Identity Name\>"
+"The Managed Identity does not have permissions on the storage account. Please verify that 'Storage Blob Data Owner' role is assigned to the Managed Identity for the storage account. Storage: /subscriptions/ \<Subscription ID\> /resourceGroups/\<Resource Group Name\> /providers/Microsoft.Storage/storageAccounts/ \<Storage Account Name\>, Managed Identity: /subscriptions/ \<Subscription ID\> /resourceGroups/ /\<Resource Group Name\> /providers/Microsoft.ManagedIdentity/userAssignedIdentities/ \<User Managed Identity Name\>"
### Cause
For more information, see [Set up permissions for the managed identity on the Da
### Error
-"The security rules in the Network Security Group /subscriptions/\<SubscriptionID\>/resourceGroups/<Resource Group name\> default/providers/Microsoft.Network/networkSecurityGroups/\<Network Security Group Name\> configured with subnet /subscriptions/\<SubscriptionID\>/resourceGroups/\<Resource Group name\> RG-westeurope-vnet-tomtom-default/providers/Microsoft.Network/virtualNetworks/\<Virtual Network Name\>/subnets/\<Subnet Name\> does not allow required inbound and/or outbound connectivity. For more information, please visit [Plan a virtual network for Azure HDInsight](./hdinsight-plan-virtual-network-deployment.md), or contact support."
+"The security rules in the Network Security Group /subscriptions/\<SubscriptionID\>/resourceGroups/\<Resource Group name\> default/providers/Microsoft.Network/networkSecurityGroups/\<Network Security Group Name\> configured with subnet /subscriptions/\<SubscriptionID\>/resourceGroups/\<Resource Group name\> RG-westeurope-vnet-tomtom-default/providers/Microsoft.Network/virtualNetworks/\<Virtual Network Name\>/subnets/\<Subnet Name\> does not allow required inbound and/or outbound connectivity. For more information, please visit [Plan a virtual network for Azure HDInsight](./hdinsight-plan-virtual-network-deployment.md), or contact support."
### Cause
If you are using custom VNet network security group (NSGs) and user-defined rout
-## Error Code: Deployments failed due to policy violation: 'Resource '<Resource URI>' was disallowed by policy. Policy identifiers: '[{"policyAssignment":{"name":"<Policy Name> ","id":"/providers/Microsoft.Management/managementGroups/<Management Group Name> providers/Microsoft.Authorization/policyAssignments/<Policy Name>"},"policyDefinition": <Policy Definition>
+## Error Code: Deployments failed due to policy violation: 'Resource '\<Resource URI\>' was disallowed by policy. Policy identifiers: '[{"policyAssignment":{"name":"\<Policy Name\> ","id":"/providers/Microsoft.Management/managementGroups/\<Management Group Name\> providers/Microsoft.Authorization/policyAssignments/\<Policy Name\>"},"policyDefinition": \<Policy Definition\>
### Cause
hdinsight Identity Broker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/domain-joined/identity-broker.md
In the HDInsight ID Broker setup, custom apps and clients that connect to the ga
* AppId: 7865c1d2-f040-46cc-875f-831a1ef6a28a * Permission: (name: Cluster.ReadWrite, id: 8f89faa0-ffef-4007-974d-4989b39ad77d)
-After you acquire the OAuth token, use it in the authorization header of the HTTP request to the cluster gateway (for example, https://<clustername>-int.azurehdinsight.net). A sample curl command to Apache Livy API might look like this example:
+After you acquire the OAuth token, use it in the authorization header of the HTTP request to the cluster gateway (for example, https://\<clustername\>-int.azurehdinsight.net). A sample curl command to Apache Livy API might look like this example:
```bash
curl -k -v -H "Authorization: Bearer Access_TOKEN" \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{ "file":"wasbs://mycontainer@mystorageaccount.blob.core.windows.net/data/SparkSimpleTest.jar", "className":"com.microsoft.spark.test.SimpleFile" }' \
  "https://<clustername>-int.azurehdinsight.net/livy/batches" \
  -H "X-Requested-By:<username@domain.com>"
```
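The same Livy batch submission can be sketched as plain Python data structures, mirroring the curl example above. The gateway URL, token, file path, and user are the same placeholders as in the original command; no request is actually sent here.

```python
import json

cluster = "<clustername>"  # placeholder, as in the curl example
headers = {
    "Authorization": "Bearer Access_TOKEN",   # OAuth token acquired earlier
    "Content-Type": "application/json",
    "X-Requested-By": "<username@domain.com>",
}
batch = {
    "file": "wasbs://mycontainer@mystorageaccount.blob.core.windows.net/data/SparkSimpleTest.jar",
    "className": "com.microsoft.spark.test.SimpleFile",
}
url = f"https://{cluster}-int.azurehdinsight.net/livy/batches"
print(url)
print(json.dumps(batch))
```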
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/convert-data.md
$convert-data takes a [Parameter](http://hl7.org/fhir/parameters.html) resource
| -- | -- | -- | | inputData | Data to be converted. | A valid JSON String| | inputDataType | Data type of input. | ```HL7v2```, ``Ccda`` |
-| templateCollectionReference | Reference to an [OCI image ](https://github.com/opencontainers/image-spec) template collection on [Azure Container Registry (ACR)](https://azure.microsoft.com/en-us/services/container-registry/). It is the image containing Liquid templates to use for conversion. It can be a reference either to the default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting those on ACR, and registering to the FHIR service. | For **HL7v2** default templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br><br>For **C-CDA** default templates: ``microsofthealth/ccdatemplates:default`` <br>\<RegistryServer\>/\<imageName\>@\<imageDigest\>, \<RegistryServer\>/\<imageName\>:\<imageTag\> |
+| templateCollectionReference | Reference to an [OCI image](https://github.com/opencontainers/image-spec) template collection on [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/). It is the image containing Liquid templates to use for conversion. It can be a reference either to the default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting those on ACR, and registering to the FHIR service. | For **HL7v2** default templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br><br>For **C-CDA** default templates: ``microsofthealth/ccdatemplates:default`` <br>\<RegistryServer\>/\<imageName\>@\<imageDigest\>, \<RegistryServer\>/\<imageName\>:\<imageTag\> |
| rootTemplate | The root template to use while transforming the data. | For **HL7v2**:<br>```ADT_A01```, ```OML_O21```, ```ORU_R01```, ```VXU_V04```<br><br> For **C-CDA**:<br>```CCD```, `ConsultationNote`, `DischargeSummary`, `HistoryandPhysical`, `OperativeNote`, `ProcedureNote`, `ProgressNote`, `ReferralNote`, `TransferSummary` | > [!WARNING]
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/data-transformation/convert-data.md
$convert-data takes a [Parameter](http://hl7.org/fhir/parameters.html) resource
| Parameter Name | Description | Accepted values | | -- | -- | -- |
-| inputData | Data to be converted. | A valid JSON String|
+| inputData | Data to be converted. | For `Hl7v2`: string <br> For `Ccda`: XML|
| inputDataType | Data type of input. | ```HL7v2```, ``Ccda`` |
-| templateCollectionReference | Reference to an [OCI image ](https://github.com/opencontainers/image-spec) template collection on [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/). It is the image containing Liquid templates to use for conversion. It can be a reference either to the default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting those on ACR, and registering to the FHIR service. | For **HL7v2** default templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br><br>For **C-CDA** default templates: ``microsofthealth/ccdatemplates:default`` <br>\<RegistryServer\>/\<imageName\>@\<imageDigest\>, \<RegistryServer\>/\<imageName\>:\<imageTag\> |
+| templateCollectionReference | Reference to an [OCI image](https://github.com/opencontainers/image-spec) template collection on [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/). It is the image containing Liquid templates to use for conversion. It can be a reference either to the default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting those on ACR, and registering to the FHIR service. | For default templates: <br> For **HL7v2** default templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br>For **C-CDA** default templates: ``microsofthealth/ccdatemplates:default`` <br><br>For customized templates: <br> \<RegistryServer\>/\<imageName\>@\<imageDigest\>, \<RegistryServer\>/\<imageName\>:\<imageTag\> |
| rootTemplate | The root template to use while transforming the data. | For **HL7v2**:<br>```ADT_A01```, ```OML_O21```, ```ORU_R01```, ```VXU_V04```<br><br> For **C-CDA**:<br>```CCD```, `ConsultationNote`, `DischargeSummary`, `HistoryandPhysical`, `OperativeNote`, `ProcedureNote`, `ProgressNote`, `ReferralNote`, `TransferSummary` | > [!WARNING]
In the table below, you'll find the IP address for the Azure region where the FH
Make a call to the $convert-data API specifying your template reference in the templateCollectionReference parameter.
-`<RegistryServer>/<imageName>@<imageDigest>`
+`<RegistryServer>/<imageName>@<imageDigest>`
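A hedged sketch of the `$convert-data` request body using the parameter names from the table above (`inputData`, `inputDataType`, `templateCollectionReference`, `rootTemplate`), shaped as a standard FHIR `Parameters` resource with string values. The HL7v2 message here is a stub and the helper name is invented for illustration.

```python
import json

def convert_data_parameters(input_data, input_data_type, template_ref, root_template):
    """Assemble the Parameters payload for a $convert-data call."""
    return {
        "resourceType": "Parameters",
        "parameter": [
            {"name": "inputData", "valueString": input_data},
            {"name": "inputDataType", "valueString": input_data_type},
            {"name": "templateCollectionReference", "valueString": template_ref},
            {"name": "rootTemplate", "valueString": root_template},
        ],
    }

body = convert_data_parameters(
    "MSH|^~\\&|...",                           # stub HL7v2 message
    "Hl7v2",
    "microsofthealth/hl7v2templates:default",  # default template image from the table
    "ADT_A01",
)
print(json.dumps(body)[:72])
```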
healthcare-apis Get Healthcare Apis Access Token Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/get-healthcare-apis-access-token-cli.md
Previously updated : 08/03/2019 Last updated : 08/12/2021
In this article, you'll learn how to obtain an access token for the FHIR service
The FHIR service uses a `resource` or `Audience` with URI equal to the URI of the FHIR server `https://<FHIR ACCOUNT NAME>.azurehealthcareapis.com`. You can obtain a token and store it in a variable (named `$token`) with the following command: ```azurecli-interactive
-$token=$(az account get-access-token --resource=https://<FHIR ACCOUNT NAME>.azurehealthcareapis.com --query accessToken --output tsv)
+token=$(az account get-access-token --resource=https://<FHIR ACCOUNT NAME>.azurehealthcareapis.com --query accessToken --output tsv)
``` ## Use with FHIR service
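The token captured by the CLI step above is then sent to the FHIR service as a bearer token in the `Authorization` header. A minimal sketch of composing that header — the account name and token are the same placeholders as in the command, and no request is sent here:

```python
# Placeholders, as in the az CLI step; substitute real values before calling.
token = "<token from az account get-access-token>"
fhir_url = "https://<FHIR ACCOUNT NAME>.azurehealthcareapis.com/Patient"
headers = {"Authorization": f"Bearer {token}"}
print(fhir_url)
```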
iot-accelerators Iot Accelerators Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/iot-accelerators-permissions.md
Title: Use the Azure IoT Solutions site - Azure | Microsoft Docs description: Describes how to use the AzureIoTSolutions.com website to deploy your solution accelerator. -+
iot-accelerators Overview Iot Industrial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/overview-iot-industrial.md
Last updated 11/26/2018
-+ # What is industrial IoT (IIoT)
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-architecture.md
Last updated 12/19/2020
-+ # Azure IoT Central architecture
iot-central Concepts Get Connected https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-get-connected.md
Last updated 1/15/2020
-+ # This article applies to operators and device developers.
iot-central Howto Create And Manage Applications Csp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-and-manage-applications-csp.md
Last updated 12/11/2020 -+ # Create and manage an Azure IoT Central application from the CSP portal
iot-central Howto Create Custom Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-custom-analytics.md
Title: Extend Azure IoT Central with custom analytics | Microsoft Docs description: As a solution developer, configure an IoT Central application to do custom analytics and visualizations. This solution uses Azure Databricks.--++ Last updated 03/15/2021 -+ # Solution developer
iot-central Howto Create Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-custom-rules.md
Title: Extend Azure IoT Central with custom rules and notifications | Microsoft Docs description: As a solution developer, configure an IoT Central application to send email notifications when a device stops sending telemetry. This solution uses Azure Stream Analytics, Azure Functions, and SendGrid.--++ Last updated 02/09/2021 -+ # Solution developer
iot-central Howto Customize Ui https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-customize-ui.md
Last updated 12/06/2019
-+ #Customer intent: As an administrator, I want to customize the themes and help links within Central so that my company's brand is represented within the app.
iot-central Howto Show Hide Chat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-show-hide-chat.md
Last updated 08/23/2019
-+ # Toggle live chat
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-admin.md
Title: Azure IoT Central administrator guide description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This article provides an overview of the administrator role in IoT Central. --++ Last updated 03/25/2021
iot-central Concept Continuous Patient Monitoring Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/healthcare/concept-continuous-patient-monitoring-architecture.md
Title: Continuous patient monitoring architecture in Azure IoT Central | Microsoft Docs description: Tutorial - Learn about a continuous patient monitoring solution architecture.--++ Last updated 12/11/2020
iot-central Overview Iot Central Healthcare https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/healthcare/overview-iot-central-healthcare.md
Title: What are the Azure IoT Central healthcare solutions | Microsoft Docs description: Learn to build healthcare solution using Azure IoT Central application templates.--++ Last updated 09/24/2019
iot-central Tutorial Continuous Patient Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/healthcare/tutorial-continuous-patient-monitoring.md
Title: Tutorial - Create a continuous patient monitoring app with Azure IoT Central | Microsoft Docs description: In this tutorial, you learn to build a continuous patient monitoring application using Azure IoT Central application templates.--++ Last updated 09/24/2019
iot-central Tutorial Health Data Triage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/healthcare/tutorial-health-data-triage.md
Title: Tutorial - Create a health data triage dashboard with Azure IoT Central | Microsoft Docs description: Tutorial - Learn to build a health data triage dashboard using Azure IoT Central application templates.--++ Last updated 12/11/2020
iot-develop Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/concepts-architecture.md
-+ # IoT Plug and Play architecture
iot-develop Quickstart Devkit Mxchip Az3166 Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-mxchip-az3166-iot-hub.md
You can use the **Termite** app to monitor communication and confirm that your d
1. Start **Termite**. > [!TIP]
- > If you are unable to connect Termite to your devkit, install the [ST-LINK driver](https://my.st.com/content/ccc/resource/technical/software/driver/files/stsw-link009.zip) and try again. See [Troubleshooting](troubleshoot-embedded-device-quickstarts.md) for additional steps.
+ > If you are unable to connect Termite to your devkit, install the [ST-LINK driver](https://www.st.com/en/development-tools/stsw-link009.html) and try again. See [Troubleshooting](troubleshoot-embedded-device-quickstarts.md) for additional steps.
1. Select **Settings**. 1. In the **Serial port settings** dialog, check the following settings and update if needed: * **Baud rate**: 115,200
As a next step, explore the following articles to learn more about using the IoT
> [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md) > [!IMPORTANT]
-> Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
+> Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-develop Quickstart Devkit Mxchip Az3166 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-mxchip-az3166.md
You can use the **Termite** app to monitor communication and confirm that your d
1. Start **Termite**. > [!TIP]
- > If you are unable to connect Termite to your devkit, install the [ST-LINK driver](https://my.st.com/content/ccc/resource/technical/software/driver/files/stsw-link009.zip) and try again. See [Troubleshooting](troubleshoot-embedded-device-quickstarts.md) for additional steps.
+ > If you are unable to connect Termite to your devkit, install the [ST-LINK driver](https://www.st.com/en/development-tools/stsw-link009.html) and try again. See [Troubleshooting](troubleshoot-embedded-device-quickstarts.md) for additional steps.
1. Select **Settings**. 1. In the **Serial port settings** dialog, check the following settings and update if needed: * **Baud rate**: 115,200
iot-develop Quickstart Devkit Stm B L475e https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-stm-b-l475e.md
You can use the **Termite** app to monitor communication and confirm that your d
1. Start **Termite**. > [!TIP]
- > If you are unable to connect Termite to your devkit, install the [ST-LINK driver](https://my.st.com/content/ccc/resource/technical/software/driver/files/stsw-link009.zip) and try again. See [Troubleshooting](troubleshoot-embedded-device-quickstarts.md) for additional steps.
+ > If you are unable to connect Termite to your devkit, install the [ST-LINK driver](https://www.st.com/en/development-tools/stsw-link009.html) and try again. See [Troubleshooting](troubleshoot-embedded-device-quickstarts.md) for additional steps.
1. Select **Settings**. 1. In the **Serial port settings** dialog, check the following settings and update if needed: * **Baud rate**: 115,200
iot-dps Concepts Symmetric Key Attestation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/concepts-symmetric-key-attestation.md
Last updated 04/23/2021
-+
iot-dps How To Control Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/how-to-control-access.md
Title: Security endpoints in IoT Device Provisioning Service | Microsoft Docs description: Concepts - how to control access to IoT Device Provisioning Service (DPS) for backend apps. Includes information about security tokens. -+
iot-dps Quick Create Simulated Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/quick-create-simulated-device-symm-key.md
Last updated 01/14/2020
-+ #Customer intent: As a new IoT developer, I want to connect a device to an IoT Hub using the C SDK so that I can learn how secure provisioning works with symmetric keys.
iot-dps Quick Create Simulated Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/quick-create-simulated-device.md
Last updated 11/08/2019
-+ #Customer intent: As a new IoT developer, I want to simulate a TPM device using the C SDK so that I can learn how secure provisioning works.
iot-dps Tutorial Set Up Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/tutorial-set-up-device.md
Last updated 11/12/2019
-+
iot-edge About Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/about-iot-edge.md
Title: What is Azure IoT Edge | Microsoft Docs description: Overview of the Azure IoT Edge service -+ # this is the PM responsible
iot-edge Deploy Modbus Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/deploy-modbus-gateway.md
Title: Translate modbus protocols with gateways - Azure IoT Edge | Microsoft Docs description: Allow devices that use Modbus TCP to communicate with Azure IoT Hub by creating an IoT Edge gateway device -+
iot-edge Development Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/development-environment.md
Title: Azure IoT Edge development environment | Microsoft Docs description: Learn about the supported systems and first-party development tools that will help you create IoT Edge modules -+ Last updated 01/04/2019
iot-edge How To Access Built In Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-access-built-in-metrics.md
Title: Access built-in metrics - Azure IoT Edge description: Remote access to built-in metrics from the IoT Edge runtime components -+ Last updated 06/25/2021
iot-edge How To Access Host Storage From Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-access-host-storage-from-module.md
Title: Use IoT Edge device local storage from a module - Azure IoT Edge | Microsoft Docs description: Use environment variables and create options to enable module access to IoT Edge device local storage. -+ Last updated 08/14/2020
iot-edge How To Add Custom Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-add-custom-metrics.md
Title: How to add custom metrics - Azure IoT Edge description: Augment built-in metrics with scenario-specific metrics from custom modules -+ Last updated 06/08/2021
iot-edge How To Authenticate Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-authenticate-downstream-device.md
Title: Authenticate downstream devices - Azure IoT Edge | Microsoft Docs description: How to authenticate downstream devices or leaf devices to IoT Hub, and route their connection through Azure IoT Edge gateway devices. -+ Last updated 10/15/2020
iot-edge How To Auto Provision Simulated Device Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-simulated-device-linux.md
Title: Provision device with a virtual TPM on Linux VM - Azure IoT Edge description: Use a simulated TPM on a Linux VM to test Azure Device Provisioning Service for Azure IoT Edge -+ Last updated 04/09/2021
iot-edge How To Auto Provision Simulated Device Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-simulated-device-windows.md
Title: Automatically provision Windows devices with DPS - Azure IoT Edge | Microsoft Docs description: Use a simulated device on your Windows machine to test automatic device provisioning for Azure IoT Edge with Device Provisioning Service -+ Last updated 4/3/2020
iot-edge How To Auto Provision Symmetric Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-symmetric-keys.md
Title: Provision device using symmetric key attestation - Azure IoT Edge description: Use symmetric key attestation to test automatic device provisioning for Azure IoT Edge with Device Provisioning Service -+ Last updated 07/21/2021
iot-edge How To Auto Provision X509 Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-x509-certs.md
Title: Automatically provision devices with DPS using X.509 certificates - Azure IoT Edge | Microsoft Docs description: Use X.509 certificates to test automatic device provisioning for Azure IoT Edge with Device Provisioning Service -+ Last updated 06/18/2021
iot-edge How To Collect And Transport Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-collect-and-transport-metrics.md
Title: Collect and transport metrics - Azure IoT Edge description: Use Azure Monitor to remotely monitor IoT Edge's built-in metrics -+ Last updated 06/09/2021
iot-edge How To Configure Api Proxy Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-configure-api-proxy-module.md
Title: Configure API proxy module - Azure IoT Edge | Microsoft Docs description: Learn how to customize the API proxy module for IoT Edge gateway hierarchies. -+ Last updated 11/10/2020
iot-edge How To Connect Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-connect-downstream-device.md
Title: Connect downstream devices - Azure IoT Edge | Microsoft Docs description: How to configure downstream or leaf devices to connect to Azure IoT Edge gateway devices. -+ Last updated 10/15/2020
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
Title: Connect downstream IoT Edge devices - Azure IoT Edge | Microsoft Docs description: How to configure an IoT Edge device to connect to Azure IoT Edge gateway devices. -+ Last updated 03/01/2021
iot-edge How To Continuous Integration Continuous Deployment Classic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-continuous-integration-continuous-deployment-classic.md
Title: Continuous integration and continuous deployment to Azure IoT Edge devices (classic editor) - Azure IoT Edge description: Set up continuous integration and continuous deployment using the classic editor - Azure IoT Edge with Azure DevOps, Azure Pipelines -+ Last updated 08/26/2020
iot-edge How To Continuous Integration Continuous Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-continuous-integration-continuous-deployment.md
Title: Continuous integration and continuous deployment to Azure IoT Edge devices - Azure IoT Edge description: Set up continuous integration and continuous deployment using YAML - Azure IoT Edge with Azure DevOps, Azure Pipelines -+ Last updated 08/20/2019
iot-edge How To Create Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-create-alerts.md
Title: Get notified about issues using alerts - Azure IoT Edge description: Use Azure Monitor alert rules to monitor at scale -+ Last updated 06/08/2021
iot-edge How To Create Test Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-create-test-certificates.md
Title: Create test certificates - Azure IoT Edge | Microsoft Docs description: Create test certificates and learn how to install them on an Azure IoT Edge device to prepare for production deployment. -+ Last updated 06/02/2020
iot-edge How To Create Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-create-transparent-gateway.md
Title: Create transparent gateway device - Azure IoT Edge | Microsoft Docs description: Use an Azure IoT Edge device as a transparent gateway that can process information from downstream devices -+ Last updated 03/01/2021
iot-edge How To Create Virtual Switch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-create-virtual-switch.md
Title: Create virtual switch for Azure IoT Edge for Linux on Windows | Microsoft Docs description: Instructions for creating a virtual switch for Azure IoT Edge for Linux on Windows -+
iot-edge How To Deploy At Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-deploy-at-scale.md
Title: Deploy modules at scale in Azure portal - Azure IoT Edge
description: Use the Azure portal to create automatic deployments for groups of IoT Edge devices keywords: -+ Last updated 10/13/2020
iot-edge How To Deploy Cli At Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-deploy-cli-at-scale.md
Title: Deploy modules at scale using Azure CLI - Azure IoT Edge
description: Use the IoT extension for Azure CLI to create automatic deployments for groups of IoT Edge devices keywords: -+ Last updated 10/13/2020
iot-edge How To Deploy Modules Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-deploy-modules-cli.md
Title: Deploy modules from the Azure CLI command line - Azure IoT Edge description: Use the Azure CLI with the Azure IoT Extension to push an IoT Edge module from your IoT Hub to your IoT Edge device, as configured by a deployment manifest. -+ Last updated 10/13/2020
iot-edge How To Deploy Modules Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-deploy-modules-portal.md
Title: Deploy modules from Azure portal - Azure IoT Edge description: Use your IoT Hub in the Azure portal to push an IoT Edge module from your IoT Hub to your IoT Edge device, as configured by a deployment manifest. -+ Last updated 10/13/2020
iot-edge How To Deploy Modules Vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-deploy-modules-vscode.md
Title: Deploy modules from Visual Studio Code - Azure IoT Edge description: Use Visual Studio Code with the Azure IoT Tools to push an IoT Edge module from your IoT Hub to your IoT Edge device, as configured by a deployment manifest. -+ Last updated 10/13/2020
iot-edge How To Deploy Vscode At Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-deploy-vscode-at-scale.md
Title: Deploy modules at scale using Visual Studio Code - Azure IoT Edge
description: Use the IoT extension for Visual Studio Code to create automatic deployments for groups of IoT Edge devices. keywords: -+ Last updated 1/8/2020
iot-edge How To Edgeagent Direct Method https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-edgeagent-direct-method.md
Title: Built-in edgeAgent direct methods - Azure IoT Edge description: Monitor and manage an IoT Edge deployment using built-in direct methods in the IoT Edge agent runtime module -+ Last updated 03/02/2020
iot-edge How To Explore Curated Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-explore-curated-visualizations.md
Title: Explore curated visualizations - Azure IoT Edge description: Use Azure workbooks to visualize and explore IoT Edge built-in metrics -+ Last updated 06/08/2021
iot-edge How To Install Iot Edge Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge-kubernetes.md
Title: How to install IoT Edge on Kubernetes | Microsoft Docs description: Learn how to install IoT Edge on Kubernetes using a local development cluster environment -+ Last updated 04/26/2019
iot-edge How To Install Iot Edge On Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge-on-windows.md
Title: Install Azure IoT Edge for Linux on Windows | Microsoft Docs description: Azure IoT Edge installation instructions on Windows devices -+
iot-edge How To Install Iot Edge Windows On Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge-windows-on-windows.md
Title: Install Azure IoT Edge for Windows | Microsoft Docs description: Install Azure IoT Edge for Windows containers on Windows devices -+
iot-edge How To Install Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge.md
Title: Install Azure IoT Edge | Microsoft Docs description: Azure IoT Edge installation instructions on Windows or Linux devices -+
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-manage-device-certificates.md
Title: Manage device certificates - Azure IoT Edge | Microsoft Docs description: Create test certificates, install, and manage them on an Azure IoT Edge device to prepare for production deployment. -+ Last updated 03/01/2021
iot-edge How To Monitor Iot Edge Deployments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-monitor-iot-edge-deployments.md
Title: Monitor IoT Edge deployments - Azure IoT Edge description: High-level monitoring including edgeHub and edgeAgent reported properties and automatic deployment metrics. -+ Last updated 04/21/2020
iot-edge How To Monitor Module Twins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-monitor-module-twins.md
Title: Monitor module twins - Azure IoT Edge description: How to interpret device twins and module twins to determine connectivity and health. -+ Last updated 05/29/2020
iot-edge How To Register Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-register-device.md
Title: Register a new device - Azure IoT Edge | Microsoft Docs description: Register a single IoT Edge device in IoT Hub for manual provisioning with either symmetric keys or X.509 certificates -+
iot-edge How To Retrieve Iot Edge Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-retrieve-iot-edge-logs.md
Title: Retrieve IoT Edge logs - Azure IoT Edge description: IoT Edge module log retrieval and upload to Azure Blob Storage. -+ Last updated 11/12/2020
iot-edge How To Troubleshoot Monitoring And Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-troubleshoot-monitoring-and-faq.md
Title: Monitoring troubleshooting and FAQ - Azure IoT Edge description: Troubleshooting Azure Monitor integration and FAQ -+ Last updated 06/09/2021
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-update-iot-edge.md
Title: Update IoT Edge version on devices - Azure IoT Edge | Microsoft Docs
description: How to update IoT Edge devices to run the latest versions of the security daemon and the IoT Edge runtime keywords: -+ Last updated 06/15/2021
iot-edge How To Use Create Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-use-create-options.md
Title: Write createOptions for modules - Azure IoT Edge | Microsoft Docs
description: How to use createOptions in the deployment manifest to configure modules at runtime keywords: -+ Last updated 04/01/2020
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-vs-code-develop-module.md
keywords:
Previously updated: 08/07/2019 Last updated: 08/11/2021
Unless you're developing your module in C, you also need the Python-based [Azure
```cmd
pip install --upgrade iotedgehubdev
```
-
+ > [!NOTE]
+ > Currently, iotedgehubdev uses a docker-py library that is not compatible with Python 3.8.
When you're ready to customize the template with your own code, use the [Azure I
If you're developing in C#, Node.js, or Java, your module requires use of a **ModuleClient** object in the default module code so that it can start, run, and route messages. You'll also use the default input channel **input1** to take action when the module receives messages.
-### Set up IoT Edge simulator for IoT Edge solution
+### Set up IoT Edge simulator
-On your development machine, you can start an IoT Edge simulator instead of installing the IoT Edge security daemon so that you can run your IoT Edge solution.
+IoT Edge modules need an IoT Edge environment to run and debug. You can use an IoT Edge simulator on your development machine instead of running the full IoT Edge security daemon and runtime. You can either simulate a device to debug solutions with multiple modules, or simulate a single module application.
+
+Option 1: simulate an IoT Edge solution:
1. In the **Explorer** tab on the left side, expand the **Azure IoT Hub** section. Right-click your IoT Edge device ID, and then select **Setup IoT Edge Simulator** to start the simulator with the device connection string.
1. Check the progress detail in the integrated terminal to confirm that the IoT Edge simulator was set up successfully.
-### Set up IoT Edge simulator for single module app
+Option 2: simulate a single IoT Edge module:
-To set up and start the simulator, run the command **Azure IoT Edge: Start IoT Edge Hub Simulator for Single Module** from the Visual Studio Code command palette. When prompted, use the value **input1** from the default module code (or the equivalent value from your code) as the input name for your application. The command triggers the **iotedgehubdev** CLI and then starts the IoT Edge simulator and a testing utility module container. You can see the outputs below in the integrated terminal if the simulator has been started in single module mode successfully. You can also see a `curl` command to help send message through. You will use it later.
+1. In the Visual Studio Code command palette, run the command **Azure IoT Edge: Start IoT Edge Hub Simulator for Single Module**.
+1. Provide the names of any inputs that you want to test with your module. If you're using the default sample code, use the value **input1**.
+1. The command triggers the **iotedgehubdev** CLI, which starts the IoT Edge simulator and a testing utility module container. If the simulator starts successfully in single module mode, its outputs appear in the integrated terminal, along with a `curl` command that you can use later to send messages through.
![Set up IoT Edge simulator for single module app](media/how-to-develop-csharp-module/start-simulator-for-single-module.png)
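The `curl` command printed by the simulator can be sketched as follows. This is a hypothetical example, not the exact command your terminal prints: the port (`53000`) and the `/api/v1/messages` endpoint are the assumed iotedgehubdev defaults, so always prefer the command shown in your own integrated terminal.

```shell
# Hypothetical values: the input name must match what you gave the simulator,
# and the endpoint/port are assumed iotedgehubdev defaults.
INPUT_NAME="input1"
PAYLOAD="{\"inputName\": \"$INPUT_NAME\", \"data\": \"hello world\"}"

# With the simulator running, post a test message to your module's input:
# curl --header "Content-Type: application/json" --request POST \
#      --data "$PAYLOAD" http://localhost:53000/api/v1/messages
echo "$PAYLOAD"
```

The message lands on the module's registered input (here **input1**), which is how you exercise the message handler before deploying the module to a real device.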
### Debug module in launch mode
+After the simulator has been started successfully, you can debug your module code.
+ 1. Prepare your environment for debugging according to the requirements of your development language, set a breakpoint in your module, and select the debug configuration to use:
+    - **C#** - In the Visual Studio Code integrated terminal, change the directory to the ***&lt;your module name&gt;*** folder, and then run the following command to build the .NET Core application.
- Open the file `Program.cs` and add a breakpoint.
- - Navigate to the Visual Studio Code Debug view by selecting **View > Debug**. Select the debug configuration ***&lt;your module name&gt;* Local Debug (.NET Core)** from the dropdown.
+ - Navigate to the Visual Studio Code Debug view by selecting the debug icon from the menu on the left or by pressing `Ctrl+Shift+D`. Select the debug configuration ***&lt;your module name&gt;* Local Debug (.NET Core)** from the dropdown.
> [!NOTE] > If your .NET Core `TargetFramework` is not consistent with your program path in `launch.json`, you'll need to manually update the program path in `launch.json` to match the `TargetFramework` in your .csproj file so that Visual Studio Code can successfully launch this program.
- Open the file `app.js` and add a breakpoint.
- - Navigate to the Visual Studio Code Debug view by selecting **View > Debug**. Select the debug configuration ***&lt;your module name&gt;* Local Debug (Node.js)** from the dropdown.
+ - Navigate to the Visual Studio Code Debug view by selecting the debug icon from the menu on the left or by pressing `Ctrl+Shift+D`. Select the debug configuration ***&lt;your module name&gt;* Local Debug (Node.js)** from the dropdown.
- **Java** - Open the file `App.java` and add a breakpoint.
- - Navigate to the Visual Studio Code Debug view by selecting **View > Debug**. Select the debug configuration ***&lt;your module name&gt;* Local Debug (Java)** from the dropdown.
+ - Navigate to the Visual Studio Code Debug view by selecting the debug icon from the menu on the left or by pressing `Ctrl+Shift+D`. Select the debug configuration ***&lt;your module name&gt;* Local Debug (Java)** from the dropdown.
1. Click **Start Debugging** or press **F5** to start the debug session.
iot-edge Iot Edge As Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/iot-edge-as-gateway.md
Title: Gateways for downstream devices - Azure IoT Edge | Microsoft Docs description: Use Azure IoT Edge to create a transparent, opaque, or proxy gateway device that sends data from multiple downstream devices to the cloud or processes it locally. -+ Last updated 03/23/2021
iot-edge Iot Edge Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/iot-edge-certs.md
Title: Certificates for device security - Azure IoT Edge | Microsoft Docs description: Azure IoT Edge uses certificate to validate devices, modules, and leaf devices and establish secure connections between them. -+ Last updated 08/12/2020
iot-edge Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/iot-edge-for-linux-on-windows.md
Title: What is Azure IoT Edge for Linux on Windows | Microsoft Docs description: Overview of how you can run Linux IoT Edge modules on Windows 10 devices -+
iot-edge Iot Edge Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/iot-edge-modules.md
Title: Learn how modules run logic on your devices - Azure IoT Edge | Microsoft Docs description: Azure IoT Edge modules are containerized units of logic that can be deployed and managed remotely so that you can run business logic on IoT Edge devices -+ Last updated 03/21/2019
iot-edge Iot Edge Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/iot-edge-runtime.md
Title: Learn how the runtime manages devices - Azure IoT Edge | Microsoft Docs description: Learn how the IoT Edge runtime manages modules, security, communication, and reporting on your devices -+ Last updated 11/10/2020
iot-edge Iot Edge Security Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/iot-edge-security-manager.md
description: Manages the IoT Edge device security stance and the integrity of se
keywords: security, secure element, enclave, TEE, IoT Edge -+ Last updated 08/30/2019
iot-edge Module Composition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/module-composition.md
Title: Deploy module & routes with deployment manifests - Azure IoT Edge description: Learn how a deployment manifest declares which modules to deploy, how to deploy them, and how to create message routes between them. -+ Last updated 10/08/2020
iot-edge Module Deployment Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/module-deployment-monitoring.md
Title: Automatic deployment for device groups - Azure IoT Edge | Microsoft Docs description: Use automatic deployments in Azure IoT Edge to manage groups of devices based on shared tags -+ Last updated 01/30/2020
iot-edge Module Development https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/module-development.md
Title: Develop modules for Azure IoT Edge | Microsoft Docs description: Develop custom modules for Azure IoT Edge that can communicate with the runtime and IoT Hub -+ Last updated 11/10/2020
iot-edge Module Edgeagent Edgehub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/module-edgeagent-edgehub.md
Title: Properties of the agent and hub module twins - Azure IoT Edge description: Review the specific properties and their values for the edgeAgent and edgeHub module twins -+ Last updated 04/16/2021
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/production-checklist.md
Title: Prepare to deploy your solution in production - Azure IoT Edge description: Learn how to take your Azure IoT Edge solution from development to production, including setting up your devices with the appropriate certificates and making a deployment plan for future code updates. -+ Last updated 03/01/2021
iot-edge Quickstart Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/quickstart-linux.md
Title: Quickstart create an Azure IoT Edge device on Linux | Microsoft Docs description: In this quickstart, learn how to create an IoT Edge device on Linux and then deploy prebuilt code remotely from the Azure portal. -+ Last updated 04/07/2021
iot-edge Reference Iot Edge For Linux On Windows Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/reference-iot-edge-for-linux-on-windows-functions.md
Title: PowerShell functions for Azure IoT Edge for Linux on Windows | Microsoft Docs description: Reference information for Azure IoT Edge for Linux on Windows PowerShell functions to deploy, provision, and get the status of IoT Edge for Linux on Windows virtual machines. -+ Last updated 06/18/2021
iot-edge Reference Windows Scripts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/reference-windows-scripts.md
Title: Scripts for Azure IoT Edge with Windows containers | Microsoft Docs description: Reference information for IoT Edge PowerShell scripts to install, uninstall, or update on Windows devices -+ Last updated 10/06/2020
iot-edge Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/security.md
Title: Security framework - Azure IoT Edge | Microsoft Docs description: Learn about the security, authentication, and authorization standards that were used to develop Azure IoT Edge and should be considered as you design your solution -+ Last updated 08/30/2019
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/support.md
Title: Supported operating systems, container engines - Azure IoT Edge description: Learn which operating systems can run the Azure IoT Edge daemon and runtime, and supported container engines for your production devices -+ Last updated 06/09/2021
iot-edge Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/troubleshoot-common-errors.md
Title: Common errors - Azure IoT Edge | Microsoft Docs description: Use this article to resolve common issues encountered when deploying an IoT Edge solution -+ Last updated 03/01/2021
iot-edge Troubleshoot In Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/troubleshoot-in-portal.md
Title: Troubleshoot from the Azure portal - Azure IoT Edge | Microsoft Docs description: Use the troubleshooting page in the Azure portal to monitor IoT Edge devices and modules -+ Last updated 05/26/2021
iot-edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/troubleshoot.md
Title: Troubleshoot - Azure IoT Edge | Microsoft Docs description: Use this article to learn standard diagnostic skills for Azure IoT Edge, like retrieving component status and logs -+ Last updated 05/04/2021
iot-edge Tutorial C Module Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-c-module-windows.md
Title: Tutorial - Develop C modules for Windows by using Azure IoT Edge
description: This tutorial shows you how to create IoT Edge modules with C code and deploy them to Windows devices that are running IoT Edge. -+ Last updated 05/28/2019
iot-edge Tutorial C Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-c-module.md
Title: Tutorial develop C module for Linux - Azure IoT Edge | Microsoft Docs
description: This tutorial shows you how to create an IoT Edge module with C code and deploy it to a Linux device running IoT Edge -+ Last updated 07/30/2020
iot-edge Tutorial Csharp Module Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-csharp-module-windows.md
Title: Tutorial - Develop C# modules for Windows by using Azure IoT Edge
description: This tutorial shows you how to create IoT Edge modules with C# code and deploy them to Windows devices that are running IoT Edge. -+ Last updated 08/03/2020
iot-edge Tutorial Csharp Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-csharp-module.md
Title: Tutorial - Develop C# module for Linux using Azure IoT Edge
description: This tutorial shows you how to create an IoT Edge module with C# code and deploy it to a Linux IoT Edge device. -+ Last updated 07/30/2020
iot-edge Tutorial Deploy Custom Vision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-deploy-custom-vision.md
Title: Tutorial - Deploy Custom Vision classifier to a device using Azure IoT Ed
description: In this tutorial, learn how to make a computer vision model run as a container using Custom Vision and IoT Edge. -+ Last updated 07/30/2020
iot-edge Tutorial Deploy Function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-deploy-function.md
Title: 'Tutorial: Deploy Azure Functions as modules - Azure IoT Edge' description: In this tutorial, you develop an Azure Function as an IoT Edge module, then deploy it to an edge device. -+ Last updated 07/29/2020
iot-edge Tutorial Deploy Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-deploy-machine-learning.md
Title: 'Tutorial - Deploy Azure Machine Learning to a device using Azure IoT Edge' description: 'In this tutorial, you create an Azure Machine Learning model, then deploy it as a module to an edge device' -+ Last updated 07/29/2020
iot-edge Tutorial Develop For Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-develop-for-linux.md
Title: 'Tutorial - Develop module for Linux devices using Azure IoT Edge' description: This tutorial walks through setting up your development machine and cloud resources to develop IoT Edge modules using Linux containers for Linux devices -+ Last updated 07/30/2020
iot-edge Tutorial Develop For Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-develop-for-windows.md
Title: 'Tutorial - Develop module for Windows devices using Azure IoT Edge' description: This tutorial walks through setting up your development machine and cloud resources to develop IoT Edge modules using Windows containers for Windows devices -+ Last updated 07/30/2020
iot-edge Tutorial Java Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-java-module.md
Title: Tutorial - Custom Java module tutorial using Azure IoT Edge
description: This tutorial shows you how to create an IoT Edge module with Java code and deploy it to an edge device. -+ Last updated 07/30/2020
iot-edge Tutorial Machine Learning Edge 01 Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-machine-learning-edge-01-intro.md
Title: 'Tutorial: Detailed walkthrough of Machine Learning on Azure IoT Edge' description: A high-level tutorial that walks through the various tasks necessary to create an end-to-end, machine learning at the edge scenario. -+ Last updated 11/11/2019
iot-edge Tutorial Machine Learning Edge 02 Prepare Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-machine-learning-edge-02-prepare-environment.md
Title: 'Tutorial: Set up environment - Machine Learning on Azure IoT Edge' description: 'Tutorial: Prepare your environment for development and deployment of modules for machine learning at the edge.' -+ Last updated 3/12/2020
iot-edge Tutorial Machine Learning Edge 03 Generate Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-machine-learning-edge-03-generate-data.md
Title: 'Tutorial: Generate simulated device data - Machine Learning on Azure IoT Edge' description: 'Tutorial - Create virtual devices that generate simulated telemetry that can later be used to train a machine learning model.' -+ Last updated 1/20/2020
iot-edge Tutorial Machine Learning Edge 04 Train Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-machine-learning-edge-04-train-model.md
Title: 'Tutorial: Train and deploy a model - Machine Learning on Azure IoT Edge' description: In this tutorial, you'll train a machine learning model by using Azure Machine Learning and then package the model as a container image that can be deployed as an Azure IoT Edge module. -+ Last updated 3/24/2020
iot-edge Tutorial Machine Learning Edge 05 Configure Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-machine-learning-edge-05-configure-edge-device.md
Title: 'Tutorial: Configure an Azure IoT Edge device - Machine learning on IoT Edge' description: In this tutorial, you'll configure an Azure virtual machine running Linux as an Azure IoT Edge device that acts as a transparent gateway. -+ Last updated 2/5/2020
iot-edge Tutorial Machine Learning Edge 06 Custom Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-machine-learning-edge-06-custom-modules.md
Title: 'Tutorial: Create and deploy custom modules - Machine Learning on Azure IoT Edge' description: 'This tutorial shows how to create and deploy IoT Edge modules that process data from leaf devices through a machine learning model and then send the insights to IoT Hub.' -+ Last updated 6/30/2020
iot-edge Tutorial Machine Learning Edge 07 Send Data To Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-machine-learning-edge-07-send-data-to-hub.md
Title: 'Tutorial: Send device data via transparent gateway - Machine Learning on Azure IoT Edge' description: 'This tutorial shows how you can use your development machine as a simulated IoT Edge device to send data to the IoT Hub by going through a device configured as a transparent gateway.' -+ Last updated 6/30/2020
iot-edge Tutorial Nested Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-nested-iot-edge.md
Title: Tutorial - Create a hierarchy of IoT Edge devices - Azure IoT Edge description: This tutorial shows you how to create a hierarchical structure of IoT Edge devices using gateways. -+ Last updated 2/26/2021
iot-edge Tutorial Node Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-node-module.md
Title: Tutorial develop Node.js module for Linux - Azure IoT Edge | Microsoft Do
description: This tutorial shows you how to create an IoT Edge module with Node.js code and deploy it to an edge device -+ Last updated 07/30/2020
iot-edge Tutorial Python Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-python-module.md
Title: Tutorial - Custom Python module tutorial using Azure IoT Edge
description: This tutorial shows you how to create an IoT Edge module with Python code and deploy it to an edge device. -+ Last updated 08/04/2020
iot-edge Tutorial Store Data Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-store-data-sql-server.md
Title: Tutorial - Store data with SQL module using Azure IoT Edge
description: This tutorial shows how to store data locally on your IoT Edge device with a SQL Server module -+ Last updated 08/04/2020
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/version-history.md
Title: IoT Edge version navigation and history - Azure IoT Edge description: Discover what's new in IoT Edge with information about new features and capabilities in the latest releases. -+ Last updated 04/07/2021
iot-fundamentals Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-fundamentals/security-recommendations.md
Title: Security recommendations for Azure IoT | Microsoft Docs description: This article summarizes additional steps to ensure security in your Azure IoT Hub solution. -+
iot-hub-device-update Device Update Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-networking.md
Title: Device Update for IoT Hub network requirements | Microsoft Docs description: Device Update for IoT Hub uses a variety of network ports for different purposes.--++ Last updated 1/11/2021
iot-hub-device-update Understand Device Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/understand-device-update.md
To realize the full benefits of IoT-enabled digital transformation, customers ne
## Support for a wide range of IoT devices
-Device Update for IoT Hub is designed to offer optimized update deployment and streamlined operations through integration with [Azure IoT Hub](https://azure.microsoft.com/en-us/services/iot-hub/). This integration makes it easy to adopt Device Update on any existing solution. It provides a cloud-hosted solution to connect virtually any device. Device Update supports a broad range of IoT operating systems, including Linux and [Azure RTOS](https://azure.microsoft.com/en-us/services/rtos/) (real-time operating system), and is extensible via open source. We are codeveloping Device Update for IoT Hub offerings with our semiconductor partners, including STMicroelectronics, NXP, Renesas, and Microchip. See the [samples](https://github.com/azure-rtos/samples/tree/PublicPreview/ADU) for key semiconductor evaluation boards, which include get-started guides on how to configure, build, and deploy over-the-air (OTA) updates to MCU-class devices.
+Device Update for IoT Hub is designed to offer optimized update deployment and streamlined operations through integration with [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/). This integration makes it easy to adopt Device Update on any existing solution. It provides a cloud-hosted solution to connect virtually any device. Device Update supports a broad range of IoT operating systems, including Linux and [Azure RTOS](https://azure.microsoft.com/services/rtos/) (real-time operating system), and is extensible via open source. We are codeveloping Device Update for IoT Hub offerings with our semiconductor partners, including STMicroelectronics, NXP, Renesas, and Microchip. See the [samples](https://github.com/azure-rtos/samples/tree/PublicPreview/ADU) for key semiconductor evaluation boards, which include get-started guides on how to configure, build, and deploy over-the-air (OTA) updates to MCU-class devices.
Both a Device Update Agent Simulator binary and Raspberry Pi reference Yocto images are provided. Device Update for IoT Hub also supports updating Azure IoT Edge devices. A Device Update Agent is provided for Ubuntu Server 18.04 amd64
iot-hub Iot Hub Bulk Identity Mgmt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-bulk-identity-mgmt.md
Title: Import/Export of Azure IoT Hub device identities | Microsoft Docs description: How to use the Azure IoT service SDK to run bulk operations against the identity registry to import and export device identities. Import operations enable you to create, update, and delete device identities in bulk. -+
iot-hub Iot Hub Compare Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-compare-event-hubs.md
Title: Compare Azure IoT Hub to Azure Event Hubs | Microsoft Docs description: A comparison of the IoT Hub and Event Hubs Azure services highlighting functional differences and use cases. The comparison includes supported protocols, device management, monitoring, and file uploads. -+
iot-hub Iot Hub Configure File Upload Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-configure-file-upload-cli.md
Title: Configure file upload to IoT Hub using Azure CLI | Microsoft Docs description: How to configure file uploads to Azure IoT Hub using the cross-platform Azure CLI. -+
iot-hub Iot Hub Configure File Upload Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-configure-file-upload-powershell.md
Title: Use the Azure PowerShell to configure file upload | Microsoft Docs description: How to use the Azure PowerShell cmdlets to configure your IoT hub to enable file uploads from connected devices. Includes information about configuring the destination Azure storage account. -+
iot-hub Iot Hub Configure File Upload https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-configure-file-upload.md
Title: Use the Azure portal to configure file upload | Microsoft Docs description: How to use the Azure portal to configure your IoT hub to enable file uploads from connected devices. Includes information about configuring the destination Azure storage account. -+
iot-hub Iot Hub Csharp Csharp C2d https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-csharp-csharp-c2d.md
Title: Cloud-to-device messages with Azure IoT Hub (.NET) | Microsoft Docs description: How to send cloud-to-device messages to a device from an Azure IoT hub using the Azure IoT SDKs for .NET. You modify a device app to receive cloud-to-device messages and modify a back-end app to send the cloud-to-device messages. -+ ms.devlang: csharp
iot-hub Iot Hub Csharp Csharp Device Management Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-csharp-csharp-device-management-get-started.md
Title: Get started with Azure IoT Hub device management (.NET/.NET) | Microsoft Docs description: How to use Azure IoT Hub device management to initiate a remote device reboot. You use the Azure IoT device SDK for .NET to implement a simulated device app that includes a direct method and the Azure IoT service SDK for .NET to implement a service app that invokes the direct method. -+ ms.devlang: csharp
iot-hub Iot Hub Csharp Csharp File Upload https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-csharp-csharp-file-upload.md
Title: Upload files from devices to Azure IoT Hub with .NET | Microsoft Docs description: How to upload files from a device to the cloud using Azure IoT device SDK for .NET. Uploaded files are stored in an Azure storage blob container. -+ ms.devlang: csharp
iot-hub Iot Hub Csharp Csharp Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-csharp-csharp-schedule-jobs.md
Title: Schedule jobs with Azure IoT Hub (.NET/.NET) | Microsoft Docs description: How to schedule an Azure IoT Hub job to invoke a direct method on multiple devices. You use the Azure IoT device SDK for .NET to implement the simulated device apps and a service app to run the job. -+
iot-hub Iot Hub Csharp Csharp Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-csharp-csharp-twin-getstarted.md
Title: Get started with Azure IoT Hub device twins (.NET/.NET) | Microsoft Docs description: How to use Azure IoT Hub device twins to add tags and then use an IoT Hub query. You use the Azure IoT device SDK for .NET to implement the simulated device app and the Azure IoT service SDK for .NET to implement a service app that adds the tags and runs the IoT Hub query. -+ ms.devlang: csharp
iot-hub Iot Hub Dev Guide Sas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-dev-guide-sas.md
Title: Control access to IoT Hub using SAS tokens | Microsoft Docs description: How to control access to IoT Hub for device apps and back-end apps using shared access signature tokens. -+
iot-hub Iot Hub Devguide C2d Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-c2d-guidance.md
Title: Azure IoT Hub cloud-to-device options | Microsoft Docs description: Developer guide - guidance on when to use direct methods, device twin's desired properties, or cloud-to-device messages for cloud-to-device communications. -+
iot-hub Iot Hub Devguide D2c Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-d2c-guidance.md
Title: Azure IoT Hub device-to-cloud options | Microsoft Docs description: Developer guide - guidance on when to use device-to-cloud messages, reported properties, or file upload for cloud-to-device communications. -+
iot-hub Iot Hub Devguide Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-device-twins.md
Title: Understand Azure IoT Hub device twins | Microsoft Docs description: Developer guide - use device twins to synchronize state and configuration data between IoT Hub and your devices -+
iot-hub Iot Hub Devguide Direct Methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-direct-methods.md
 Title: Understand Azure IoT Hub direct methods | Microsoft Docs description: Developer guide - use direct methods to invoke code on your devices from a service app.-+ Last updated 07/17/2018-+
iot-hub Iot Hub Devguide Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-endpoints.md
Title: Understand Azure IoT Hub endpoints | Microsoft Docs description: Developer guide - reference information about IoT Hub device-facing and service-facing endpoints. -+
iot-hub Iot Hub Devguide File Upload https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-file-upload.md
Title: Understand Azure IoT Hub file upload | Microsoft Docs description: Developer guide - use the file upload feature of IoT Hub to manage uploading files from a device to an Azure storage blob container. -+
iot-hub Iot Hub Devguide Jobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-jobs.md
Title: Understand Azure IoT Hub jobs | Microsoft Docs description: Developer guide - scheduling jobs to run on multiple devices connected to your IoT hub. Jobs can update tags and desired properties and invoke direct methods on multiple devices. -+
iot-hub Iot Hub Devguide Messages C2d https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-messages-c2d.md
Title: Understand Azure IoT Hub cloud-to-device messaging | Microsoft Docs description: This developer guide discusses how to use cloud-to-device messaging with your IoT hub. It includes information about the message life cycle and configuration options. -+
iot-hub Iot Hub Devguide Messages Read Builtin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md
Title: Understand the Azure IoT Hub built-in endpoint | Microsoft Docs description: Developer guide - describes how to use the built-in, Event Hub-compatible endpoint to read device-to-cloud messages. -+
iot-hub Iot Hub Devguide Messages Read Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-messages-read-custom.md
Title: Understand Azure IoT Hub custom endpoints | Microsoft Docs description: Developer guide - using routing queries to route device-to-cloud messages to custom endpoints. -+
iot-hub Iot Hub Devguide Messaging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-messaging.md
Title: Understand Azure IoT Hub messaging | Microsoft Docs description: Developer guide - device-to-cloud and cloud-to-device messaging with IoT Hub. Includes information about message formats and supported communications protocols. -+
iot-hub Iot Hub Devguide No Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-no-sdk.md
Title: Develop without an Azure IoT SDK | Microsoft Docs description: Developer guide - information about and links to topics that you can use to build device apps and back-end apps without using an Azure IoT SDK. -+
This topic provides helpful information and links for developers who want to develop device or back-end apps without using the Azure IoT SDKs.
-Microsoft strongly advises using an Azure IoT SDK. The Azure IoT device and service SDKs are published on many popular platforms. The SDKs provide a convenience layer that handles much of the complexity of the underlying communication protocol, including device connection and reconnection, and retry policy. The SDKs are regularly updated to provide the latest features exposed by IoT Hub as well as security updates. Using the SDKs can help you reduce development time and time devoted to code maintenance. To learn more about the Azure IoT SDKs, see [Azure IoT Device and Service SDKs](iot-hub-devguide-sdks.md). For more detail about the advantages of using an Azure IoT SDK, see the [Benefits of using the Azure IoT SDKs and pitfalls to avoid if you don't](https://azure.microsoft.com/en-us/blog/benefits-of-using-the-azure-iot-sdks-in-your-azure-iot-solution/) blog post.
+Microsoft strongly advises using an Azure IoT SDK. The Azure IoT device and service SDKs are published on many popular platforms. The SDKs provide a convenience layer that handles much of the complexity of the underlying communication protocol, including device connection and reconnection, and retry policy. The SDKs are regularly updated to provide the latest features exposed by IoT Hub as well as security updates. Using the SDKs can help you reduce development time and time devoted to code maintenance. To learn more about the Azure IoT SDKs, see [Azure IoT Device and Service SDKs](iot-hub-devguide-sdks.md). For more detail about the advantages of using an Azure IoT SDK, see the [Benefits of using the Azure IoT SDKs and pitfalls to avoid if you don't](https://azure.microsoft.com/blog/benefits-of-using-the-azure-iot-sdks-in-your-azure-iot-solution/) blog post.
Although IoT Hub supports AMQP, AMQP over WebSockets, HTTPS, MQTT, and MQTT over WebSockets for communication with devices, we recommend using MQTT if your device supports it.
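To make the no-SDK path concrete, the one piece every device must produce itself is the shared access signature it presents as the MQTT password. The following is a minimal sketch of the documented SAS token format using only the Python standard library; the hub name, device ID, and API version in the comments are illustrative values, not taken from this article.

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, key, policy_name=None, ttl_seconds=3600):
    """Build an IoT Hub shared access signature token without an SDK.

    The signature is HMAC-SHA256 over "<url-encoded-uri>\n<expiry>",
    keyed with the base64-decoded shared access key.
    """
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    to_sign = f"{encoded_uri}\n{expiry}".encode()
    signature = base64.b64encode(
        hmac.new(base64.b64decode(key), to_sign, hashlib.sha256).digest()
    )
    token = (
        f"SharedAccessSignature sr={encoded_uri}"
        f"&sig={urllib.parse.quote_plus(signature)}&se={expiry}"
    )
    if policy_name:
        token += f"&skn={policy_name}"
    return token

# A device connecting over raw MQTT would then use values shaped like
# (illustrative names, check the current API version for your hub):
#   username: "myhub.azure-devices.net/mydevice/?api-version=2021-04-12"
#   password: the SAS token returned above
#   device-to-cloud topic: "devices/mydevice/messages/events/"
```

Because the token embeds an expiry, a device working without an SDK must also schedule its own token regeneration before `se` passes, something the SDKs otherwise handle as part of their retry and reconnection logic.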
iot-hub Iot Hub Devguide Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-pricing.md
Title: Understand Azure IoT Hub pricing | Microsoft Docs description: Developer guide - information about how metering and pricing works with IoT Hub including worked examples. -+
iot-hub Iot Hub Devguide Protocols https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-protocols.md
Title: Azure IoT Hub communication protocols and ports | Microsoft Docs description: Developer guide - describes the supported communication protocols for device-to-cloud and cloud-to-device communications and the port numbers that must be open. -+
iot-hub Iot Hub Devguide Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-sdks.md
Title: Azure IoT Hub SDKs | Microsoft Docs description: Links to the Azure IoT Hub SDKs which you can use to build device apps and back-end apps. -+
iot-hub Iot Hub Devguide Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-security.md
Title: Access control and security for IoT Hub | Microsoft Docs description: Overview of how to control access to IoT Hub; includes links to in-depth articles on AAD integration and SAS options. -+
iot-hub Iot Hub Devguide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide.md
Title: Developer guide for Azure IoT Hub | Microsoft Docs description: The Azure IoT Hub developer guide includes discussions of endpoints, security, the identity registry, device management, direct methods, device twins, file uploads, jobs, the IoT Hub query language, and messaging. -+
iot-hub Iot Hub Event Grid Routing Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-event-grid-routing-comparison.md
Title: Compare Event Grid, routing for IoT Hub | Microsoft Docs description: IoT Hub offers its own message routing service, but also integrates with Event Grid for event publishing. Compare the two features. -+
iot-hub Iot Hub Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-event-grid.md
Title: Azure IoT Hub and Event Grid | Microsoft Docs description: Use Azure Event Grid to trigger processes based on actions that happen in IoT Hub. -+
iot-hub Iot Hub Ha Dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-ha-dr.md
Title: Azure IoT Hub high availability and disaster recovery | Microsoft Docs description: Describes the Azure and IoT Hub features that help you to build highly available Azure IoT solutions with disaster recovery capabilities.-+ Last updated 03/17/2020-+ # IoT Hub high availability and disaster recovery
iot-hub Iot Hub Java Java C2d https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-java-java-c2d.md
Title: Cloud-to-device messages with Azure IoT Hub (Java) | Microsoft Docs description: How to send cloud-to-device messages to a device from an Azure IoT hub using the Azure IoT SDKs for Java. You modify a simulated device app to receive cloud-to-device messages and modify a back-end app to send the cloud-to-device messages. -+
iot-hub Iot Hub Java Java Device Management Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-java-java-device-management-getstarted.md
Title: Get started with Azure IoT Hub device management (Java) | Microsoft Docs description: How to use Azure IoT Hub device management to initiate a remote device reboot. You use the Azure IoT device SDK for Java to implement a simulated device app that includes a direct method and the Azure IoT service SDK for Java to implement a service app that invokes the direct method. -+
iot-hub Iot Hub Java Java File Upload https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-java-java-file-upload.md
Title: Upload files from devices to Azure IoT Hub with Java | Microsoft Docs description: How to upload files from a device to the cloud using Azure IoT device SDK for Java. Uploaded files are stored in an Azure storage blob container. -+
iot-hub Iot Hub Java Java Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-java-java-schedule-jobs.md
Title: Schedule jobs with Azure IoT Hub (Java) | Microsoft Docs description: How to schedule an Azure IoT Hub job to invoke a direct method and set a desired property on multiple devices. You use the Azure IoT device SDK for Java to implement the simulated device apps and the Azure IoT service SDK for Java to implement a service app to run the job. -+
iot-hub Iot Hub Java Java Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-java-java-twin-getstarted.md
Title: Get started with Azure IoT Hub device twins (Java) | Microsoft Docs description: How to use Azure IoT Hub device twins to add tags and then use an IoT Hub query. You use the Azure IoT device SDK for Java to implement the device app and the Azure IoT service SDK for Java to implement a service app that adds the tags and runs the IoT Hub query. -+
iot-hub Iot Hub Message Enrichments Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-message-enrichments-overview.md
Title: Overview of Azure IoT Hub message enrichments description: This article shows message enrichments, which give the IoT Hub the ability to stamp messages with additional information before the messages are sent to the designated endpoint. -+
iot-hub Iot Hub Migrate To Diagnostics Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-migrate-to-diagnostics-settings.md
Title: Migrate Azure IoT Hub operations monitoring to IoT Hub resource logs in Azure Monitor | Microsoft Docs description: How to update Azure IoT Hub to use Azure Monitor instead of operations monitoring to monitor the status of operations on your IoT hub in real time. -+
iot-hub Iot Hub Node Node C2d https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-node-node-c2d.md
Title: Cloud-to-device messages with Azure IoT Hub (Node) | Microsoft Docs description: How to send cloud-to-device messages to a device from an Azure IoT hub using the Azure IoT SDKs for Node.js. You modify a simulated device app to receive cloud-to-device messages and modify a back-end app to send the cloud-to-device messages. -+
iot-hub Iot Hub Node Node Device Management Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-node-node-device-management-get-started.md
Title: Get started with Azure IoT Hub device management (Node) | Microsoft Docs description: How to use IoT Hub device management to initiate a remote device reboot. You use the Azure IoT SDK for Node.js to implement a simulated device app that includes a direct method and a service app that invokes the direct method. -+
iot-hub Iot Hub Node Node File Upload https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-node-node-file-upload.md
Title: Upload files from devices to Azure IoT Hub with Node | Microsoft Docs description: How to upload files from a device to the cloud using Azure IoT device SDK for Node.js. Uploaded files are stored in an Azure storage blob container. -+
iot-hub Iot Hub Node Node Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-node-node-module-twin-getstarted.md
Title: Start with Azure IoT Hub module identity & module twin (Node.js) description: Learn how to create module identity and update module twin using IoT SDKs for Node.js. -+
iot-hub Iot Hub Node Node Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-node-node-schedule-jobs.md
Title: Schedule jobs with Azure IoT Hub (Node) | Microsoft Docs description: How to schedule an Azure IoT Hub job to invoke a direct method on multiple devices. You use the Azure IoT SDKs for Node.js to implement the simulated device apps and a service app to run the job. -+
iot-hub Iot Hub Operations Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-operations-monitoring.md
Title: Azure IoT Hub operations monitoring (deprecated) | Microsoft Docs description: How to use Azure IoT Hub operations monitoring to monitor the status of operations on your IoT hub in real time. -+
iot-hub Iot Hub Portal Csharp Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-portal-csharp-module-twin-getstarted.md
Title: Azure IoT Hub module identity & module twin (portal and .NET) description: Learn how to create module identity and update module twin using the portal and .NET. -+
iot-hub Iot Hub Protocol Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-protocol-gateway.md
Title: Azure IoT protocol gateway | Microsoft Docs description: How to use an Azure IoT protocol gateway to extend IoT Hub capabilities and protocol support to enable devices to connect to your hub using protocols not supported by IoT Hub natively. -+
iot-hub Iot Hub Raspberry Pi Web Simulator Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-raspberry-pi-web-simulator-get-started.md
Title: Connect Raspberry Pi web simulator to Azure IoT Hub (Node.js) description: Connect Raspberry Pi web simulator to Azure IoT Hub for Raspberry Pi to send data to the Azure cloud. -+ keywords: raspberry pi simulator, azure iot raspberry pi, raspberry pi iot hub, raspberry pi send data to cloud, raspberry pi to cloud
iot-hub Iot Hub Rm Template Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-rm-template-powershell.md
Title: Create an Azure IoT Hub using a template (PowerShell) | Microsoft Docs description: How to use an Azure Resource Manager template to create an IoT Hub with Azure PowerShell. -+
iot-hub Iot Hub Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-rm-template.md
Title: Create an Azure IoT Hub using a template (.NET) | Microsoft Docs description: How to use an Azure Resource Manager template to create an IoT Hub with a C# program. -+
iot-hub Iot Hub Understand Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-understand-ip-address.md
Title: Understanding the IP address of your IoT hub | Microsoft Docs description: Understand how to query your IoT hub IP address and its properties. The IP address of your IoT hub can change during certain scenarios such as disaster recovery or regional failover.--++
iot-hub Iot Hub Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-upgrade.md
Title: Upgrade Azure IoT Hub | Microsoft Docs description: Change the pricing and scale tier for IoT Hub to get more messaging and device management capabilities. -+
iot-hub Iot Hub Weather Forecast Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-weather-forecast-machine-learning.md
Title: Weather forecast using Azure Machine Learning Studio (classic) with IoT Hub data description: Use Azure Machine Learning Studio (classic) to predict the chance of rain based on the temperature and humidity data your IoT hub collects from a sensor. -+ keywords: weather forecast machine learning
iot-hub Quickstart Control Device Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-control-device-android.md
Title: Control a device from Azure IoT Hub (Android) | Microsoft Docs description: In this quickstart, you run two sample Java applications. One application is a service application that can remotely control devices connected to your hub. The other application runs on a physical or simulated device connected to your hub that can be controlled remotely. -+ ms.devlang: java
iot-hub Quickstart Control Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-control-device.md
Title: Quickstart - Control a device from Azure IoT Hub | Microsoft Docs description: In this quickstart, you run two sample applications. One application is a service application that can remotely control devices connected to your hub. The other application simulates a device connected to your hub that can be controlled remotely.--++
iot-hub Tutorial Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-connectivity.md
Title: Tutorial - Check device connectivity to Azure IoT Hub
description: Tutorial - Use IoT Hub tools to troubleshoot, during development, device connectivity issues to your IoT hub. -+ Last updated 02/22/2019
iot-hub Tutorial Routing Config Message Routing Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-routing-config-message-routing-PowerShell.md
Title: Tutorial - Configure message routing for Azure IoT Hub with Azure PowerShell description: Tutorial - Configure message routing for Azure IoT Hub using Azure PowerShell. Depending on properties in the message, route to either a storage account or a Service Bus queue. -+
iot-hub Tutorial Routing Config Message Routing RM Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-routing-config-message-routing-RM-template.md
Title: Tutorial - Configure message routing for Azure IoT Hub using an Azure Resource Manager template description: Tutorial - Configure message routing for Azure IoT Hub using an Azure Resource Manager template -
iot-hub Tutorial Routing View Message Routing Results https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-routing-view-message-routing-results.md
Title: Tutorial - View Azure IoT Hub message routing results (.NET) | Microsoft Docs description: Tutorial - After setting up all of the resources using Part 1 of the tutorial, add the ability to route messages to Azure Stream Analytics and view the results in Power BI. -+
iot-hub Tutorial X509 Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-certificates.md
Title: Tutorial - Understand X.509 public key certificates for Azure IoT Hub | Microsoft Docs description: Tutorial - Understand X.509 public key certificates for Azure IoT Hub -+
iot-hub Tutorial X509 Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-introduction.md
Title: Tutorial - Understand Cryptography and X.509 certificates for Azure IoT Hub | Microsoft Docs description: Tutorial - Understand cryptography and X.509 PKI for Azure IoT Hub -+
iot-hub Tutorial X509 Openssl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-openssl.md
Title: Tutorial - Use OpenSSL to create X.509 test certificates for Azure IoT Hub | Microsoft Docs description: Tutorial - Use OpenSSL to create CA and device certificates for Azure IoT hub -+
iot-hub Tutorial X509 Prove Possession https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-prove-possession.md
Title: Tutorial - Prove Ownership of CA certificates in Azure IoT Hub | Microsoft Docs description: Tutorial - Prove that you own a CA certificate for Azure IoT Hub -+
iot-hub Tutorial X509 Scripts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-scripts.md
Title: Tutorial - Use Microsoft scripts to create x.509 test certificates for Azure IoT Hub | Microsoft Docs description: Tutorial - Use custom scripts to create CA and device certificates for Azure IoT Hub -+
iot-hub Tutorial X509 Self Sign https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-self-sign.md
Title: Tutorial - Use OpenSSL to create self signed certificates for Azure IoT Hub | Microsoft Docs description: Tutorial - Use OpenSSL to create self-signed X.509 certificates for Azure IoT Hub -+
iot-hub Tutorial X509 Test Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-test-certificate.md
Title: Tutorial - Test ability of X.509 certificates to authenticate devices to an Azure IoT Hub | Microsoft Docs description: Tutorial - Test your X.509 certificates to authenticate to Azure IoT Hub -+
key-vault Tutorial Rotate Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/tutorial-rotate-certificates.md
Create an Azure Key Vault using [Azure portal](../general/quick-create-portal.md)
## Create a certificate in Key Vault
-Create a certificate or import a certificate into the key vault (see [Steps to create a certificate in Key Vault](../secrets/quick-create-portal.md)). In this case, you'll work on a certificate called **ExampleCertificate**.
+Create a certificate or import a certificate into the key vault (see [Steps to create a certificate in Key Vault](../certificates/quick-create-portal.md)). In this case, you'll work on a certificate called **ExampleCertificate**.
## Update certificate lifecycle attributes
To delete the resource group by using the portal:
In this tutorial, you updated a certificate's lifecycle attributes. To learn more about Key Vault and how to integrate it with your applications, continue on to the following articles: - Read more about [Managing certificate creation in Azure Key Vault](./create-certificate-scenarios.md).-- Review the [Key Vault Overview](../general/overview.md).
+- Review the [Key Vault Overview](../general/overview.md).
key-vault Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/azure-policy.md
This policy allows you to manage the lifetime action specified for certificates
### Certificates should be issued by the specified integrated certificate authority (preview)
-If you use a Key Vault integrated certificate authority (Digicert or GlobalSign) and you want users to use one or either of these providers, you can use this policy to audit or enforce your selection. This policy can also be used to audit or deny the creation of self-signed certificates in key vault.
+If you use a Key Vault integrated certificate authority (Digicert or GlobalSign) and you want users to use either or both of these providers, you can use this policy to audit or enforce your selection. This policy evaluates both the CA selected in the certificate's issuance policy and the CA provider defined in the key vault. This policy can also be used to audit or deny the creation of self-signed certificates in the key vault.
### Certificates should be issued by the specified non-integrated certificate authority (preview)
If the compliance results show up as "Not Started", it may be due to the following
- Learn more about the [Azure Policy service](../../governance/policy/overview.md) - See Key Vault samples: [Key Vault built-in policy definitions](../../governance/policy/samples/built-in-policies.md#key-vault)-- Learn about [Azure Security Benchmark guidance on Key vault](/security/benchmark/azure/baselines/key-vault-security-baseline?source=docs#network-security)
+- Learn about [Azure Security Benchmark guidance on Key vault](/security/benchmark/azure/baselines/key-vault-security-baseline?source=docs#network-security)
key-vault Overview Security Worlds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/overview-security-worlds.md
Last updated 07/03/2017
# Azure Key Vault security worlds and geographic boundaries
-Azure products are available in a number of [Azure geographies](https://azure.microsoft.com/en-us/global-infrastructure/geographies/), with each Azure geography containing one or more regions. For example, the Europe geography contains two regions -- North Europe and West Europe -- while the sole region in the Brazil geography is Brazil South.
+Azure products are available in a number of [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/), with each Azure geography containing one or more regions. For example, the Europe geography contains two regions -- North Europe and West Europe -- while the sole region in the Brazil geography is Brazil South.
Azure Key Vault is a multi-tenant service that uses a pool of Hardware Security Modules (HSMs). All HSMs in a geography share the same cryptographic boundary, referred to as a "security world". Every geography corresponds to a single security world, and vice versa.
key-vault Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Key Vault description: Sample Azure Resource Graph queries for Azure Key Vault showing use of resource types and tables to access Azure Key Vault related resources and properties. Previously updated : 08/04/2021 Last updated : 08/09/2021
lab-services Administrator Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/administrator-guide.md
In this example, the cost is:
* 1 custom image (32 GB) &times; 2 versions &times; 8 US regions &times; $1.54 = $24.64 per month > [!NOTE]
-> The preceding calculation is for example purposes only. It covers storage costs associated with using Shared Image Gallery and does *not* include egress costs. For actual pricing for storage, see [Managed Disks pricing](https://azure.microsoft.com/en-us/pricing/details/managed-disks/).
+> The preceding calculation is for example purposes only. It covers storage costs associated with using Shared Image Gallery and does *not* include egress costs. For actual pricing for storage, see [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/).
#### Cost management
lighthouse Monitor At Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/monitor-at-scale.md
Title: Monitor delegated resources at scale description: Azure Lighthouse helps you use Azure Monitor Logs in a scalable way across customer tenants. Previously updated : 06/30/2021 Last updated : 08/12/2021
You can create a Log Analytics workspace by using the [Azure portal](../../azure
> New-AzADServicePrincipal -ApplicationId 1215fb39-1d15-4c05-b2e3-d519ac3feab4 > New-AzADServicePrincipal -ApplicationId 6da94f3c-0d67-4092-a408-bb5d1cb08d2d > ```
->
## Deploy policies that log data
workspace("WS-customer-tenant-2").AzureDiagnostics
For more examples of queries across multiple Log Analytics workspaces, see [Query across resources with Azure Monitor](../../azure-monitor/logs/cross-workspace-query.md).
+> [!IMPORTANT]
+> If you use an automation account to query data from a Log Analytics workspace, that automation account must be created in the same tenant as the workspace.
+ ## View alerts across customers You can view [alerts](../../azure-monitor/alerts/alerts-overview.md) for the delegated subscriptions in customer tenants that you manage.
lighthouse View Manage Customers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/view-manage-customers.md
Title: View and manage customers and delegated resources in the Azure portal description: As a service provider or enterprise using Azure Lighthouse, you can view all of your delegated resources and subscriptions by going to My customers in the Azure portal. Previously updated : 08/10/2021 Last updated : 08/12/2021
You can view the following information from this page:
- To see more details about an offer and its delegations, select the offer name. - To view more details about role assignments for delegated subscriptions or resource groups, select the entry in the **Delegations** column.
+> [!NOTE]
+> If a customer renames a subscription after it's been delegated, you'll see the updated subscription name. If they rename the tenant, you may still see the older tenant name in some places in the Azure portal.
+ ## View and manage delegations Delegations show the subscription or resource group that has been delegated, along with the users and permissions that have access to it. To view this info, select **Delegations** on the left side of the **My customers** page.
logic-apps Connect Virtual Network Vnet Isolated Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/connect-virtual-network-vnet-isolated-environment.md
Title: Connect to Azure virtual networks using an ISE
description: Create an integration service environment (ISE) to access Azure virtual networks (VNETs) from Azure Logic Apps ms.suite: integration-+ Previously updated : 08/04/2021 Last updated : 08/11/2021 # Connect to Azure virtual networks from Azure Logic Apps using an integration service environment (ISE)
When you create an ISE, Azure *injects* that ISE into your Azure virtual network
> For logic apps and integration accounts to work together in an ISE, both must use the *same ISE* as their location. An ISE has increased limits on:+ * Run duration * Storage retention * Throughput
You can also create an ISE by using the [sample Azure Resource Manager quickstar
You can create the subnets in advance or when you create your ISE so that you can create the subnets at the same time. However, before you create your subnets, make sure that you review the [subnet requirements](#create-subnet). * Make sure that your virtual network [enables access for your ISE](#enable-access) so that your ISE can work correctly and stay accessible.
-
+ * If you use a [network virtual appliance (NVA)](../virtual-network/virtual-networks-udr-overview.md#user-defined), make sure that you don't enable TLS/SSL termination or change the outbound TLS/SSL traffic. Also, make sure that you don't enable inspection for traffic that originates from your ISE's subnet. For more information, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md). * If you want to use custom Domain Name System (DNS) servers for your Azure virtual network, [set up those servers by following these steps](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md) before you deploy your ISE to your virtual network. For more information about managing DNS server settings, see [Create, change, or delete a virtual network](../virtual-network/manage-virtual-network.md#change-dns-servers).
This table describes the ports that your ISE requires to be accessible and the p
| Purpose | Source service tag or IP addresses | Source ports | Destination service tag or IP addresses | Destination ports | Notes | |||--|--|-|-| | Intersubnet communication within virtual network | Address space for the virtual network with ISE subnets | * | Address space for the virtual network with ISE subnets | * | Required for traffic to flow *between* the subnets in your virtual network. <p><p>**Important**: For traffic to flow between the *components* in each subnet, make sure that you open all the ports within each subnet. |
-| Communication from your logic app | **VirtualNetwork** | * | Varies based on destination | Varies based on destination | Destination ports vary based on the endpoints for the external services with which your logic app needs to communicate. <p><p>For example, the destination port is 443 for a web service, port 25 for an SMTP service, port 22 for an SFTP service, and so on. |
+| Communication from your logic app | **VirtualNetwork** | * | Internet | 443, 80 | This rule is required for Secure Sockets Layer (SSL) certificate verification. This check runs against various internal and external sites, which is why the Internet is required as the destination. |
+| Communication from your logic app | **VirtualNetwork** | * | Varies based on destination | Varies based on destination | Destination ports vary based on the endpoints for the external services with which your logic app needs to communicate. <p><p>For example, the destination port is port 25 for an SMTP service, port 22 for an SFTP service, and so on. |
| Azure Active Directory | **VirtualNetwork** | * | **AzureActiveDirectory** | 80, 443 || | Azure Storage dependency | **VirtualNetwork** | * | **Storage** | 80, 443, 445 || | Connection management | **VirtualNetwork** | * | **AppService** | 443 ||
logic-apps Logic Apps Enterprise Integration As2 Mdn Acknowledgment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-enterprise-integration-as2-mdn-acknowledgment.md
+
+ Title: AS2 MDN acknowledgments
+description: Learn about Message Disposition Notification (MDN) acknowledgments for AS2 messages in Azure Logic Apps.
+
+ms.suite: integration
++++ Last updated : 08/12/2021++
+# MDN acknowledgments for AS2 messages in Azure Logic Apps
+
+In Azure Logic Apps, you can create workflows that handle AS2 messages for Electronic Data Interchange (EDI) communication when you use **AS2** operations. In EDI messaging, acknowledgments provide the status from processing an EDI interchange. When receiving an interchange, the [**AS2 Decode** action](logic-apps-enterprise-integration-as2.md#decode) can return a Message Disposition Notification (MDN) acknowledgment to the sender. An MDN verifies the following items:
+
+* The receiving partner successfully received the original message.
+
+ The sending partner compares the `MessageID` for the originally sent message with the `original-message-id` field that the receiver includes in the MDN.
+
+* The receiving partner verified the integrity of the exchanged data.
+
+ A Message Integrity Check (MIC) or MIC digest is calculated from the payload in the originally sent message. The sending partner compares this MIC with the MIC that the receiver calculated from the payload in the received message and included in the `Received-Content-MIC` field of the MDN, if the MDN is signed.
+
+ > [!NOTE]
+ > An MDN can be signed, but not encrypted or compressed.
+
+* Non-repudiation of receipt
+
+ The sending partner uses the receiving partner's public key to verify the signature on the signed MDN, and verifies that the returned MIC value in the MDN matches the MIC for the original message payload stored in the non-repudiation database.
+
+> [!NOTE]
+> If you enable sending an MDN in response, your logic app attempts to return an MDN to report the status of AS2 message processing,
+> even if an error occurs during processing. The AS2 transmission isn't complete until the sender receives and verifies the MDN.
+> A synchronous MDN serves as an HTTP response, for example, a `200 OK` status.
+
+This topic provides a brief overview of the AS2 MDN acknowledgment, including the properties used to generate the acknowledgment, the MDN headers to use, and the MIC. For other related information, review the following documentation:
+
+* [Exchange AS2 messages for B2B enterprise integration in Azure Logic Apps](logic-apps-enterprise-integration-as2.md)
+* [AS2 message settings](logic-apps-enterprise-integration-as2-message-settings.md)
+* [What is Azure Logic Apps](logic-apps-overview.md)
+
+## MDN generation
+
+The AS2 Decode action generates an MDN based on a trading partner's AS2 agreement properties when the agreement's **Receive Settings** has the **Send MDN** option selected. In this instance, the **AS2-From** property in the message header is used for generating the MDN, but other properties and their values are taken from the partner's AS2 agreement settings.
+
+By default, the incoming AS2 message headers are used for validation and generating the MDN. To use the agreement's validation and MDN settings instead, in the agreement's **Receive Settings**, select **Override message properties**. Otherwise, if this option remains unselected or an agreement is unavailable, the AS2 Decode action uses the incoming message headers instead.
+
+## MDN headers
+
+To correlate an MDN to the AS2 message as the response, the `AS2-From` header, `AS2-To` header, and `MessageID` context property are used. In the MDN, the `Original-Message-ID` header comes from the `Message-ID` header in the AS2 message for which the MDN is the response. An MDN contains the following headers:
+
+| Headers | Description |
+||-|
+| HTTP and AS2 | For more information, review [AS2 message settings](logic-apps-enterprise-integration-as2-message-settings.md). |
+| Transfer layer | This layer includes the `Content-Type` header, which specifies the signed multipart message, the algorithm for the MIC, the signature formatting protocol, and the outermost multipart boundary subheaders. |
+| First part | The first part of the signed multipart message is the embedded MDN. This part is human readable. |
+| Second part | The second part of the signed multipart message contains the digital signature, a reference to the original message, the disposition type and status, and the MIC value. This part is machine readable. |
+|||
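As an illustration, the machine-readable second part of a signed MDN carries the disposition status, a reference to the original message, and the MIC value. The following sketch uses hypothetical field values, not output from an actual exchange:

```text
Content-Type: message/disposition-notification

Reporting-UA: AS2 Server
Original-Recipient: rfc822; AS2Receiver
Final-Recipient: rfc822; AS2Receiver
Original-Message-ID: <hypothetical-message-id@sender.example.com>
Disposition: automatic-action/MDN-sent-automatically; processed
Received-Content-MIC: TcitoXKA2qRB3cd0hl3bWZcB7mE=, sha1
```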
+
+## MIC digest
+
+The MIC digest or MIC verifies that an MDN correlates to the payload in the originally sent message. This MIC is included in the second part of the signed multipart MDN message in the `Received-Content-MIC` extension field.
+
+The MIC is base64-encoded and is determined from the **MIC Algorithm** property, which is enabled when the **Send MDN** and **Send signed MDN** properties are selected on the AS2 agreement's **Receive Settings** page. For MIC generation, you can choose from the following supported hash algorithms:
+
+* SHA1
+* MD5
+* SHA2-256
+* SHA2-384
+* SHA2-512
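Conceptually, producing a MIC digest amounts to hashing the message payload with the chosen algorithm and base64-encoding the result. The following Python sketch illustrates that idea with SHA2-256 and a made-up payload; it is not the connector's implementation:

```python
import base64
import hashlib

# Hypothetical payload of the originally sent AS2 message.
payload = b"UNB+UNOC:3+SENDER+RECEIVER+210812:1200+1'"

# Hash the payload (SHA2-256 here), then base64-encode the digest.
mic = base64.b64encode(hashlib.sha256(payload).digest()).decode("ascii")

# The sender compares a value like this with the `Received-Content-MIC`
# field returned in the MDN; a match verifies payload integrity.
print(mic)
```

Swapping `hashlib.sha256` for `hashlib.sha1`, `hashlib.md5`, `hashlib.sha384`, or `hashlib.sha512` corresponds to the other supported algorithms.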
+
+For example, the following screenshot shows the MDN properties in the AS2 agreement's **Receive Settings** page:
+
+![Screenshot that shows Azure portal with AS2 agreement's "Receive Settings" with MDN acknowledgement settings.](./medin-ack-settings.png)
+
+## Next steps
+
+* [AS2 message settings](logic-apps-enterprise-integration-as2-message-settings.md)
logic-apps Logic Apps Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-pricing.md
To help you estimate more accurate consumption costs, review these tips:
In single-tenant Azure Logic Apps, a logic app and its workflows follow the [**Standard** plan](https://azure.microsoft.com/pricing/details/logic-apps/) for pricing and billing. You create such logic apps in various ways, for example, when you choose the **Logic App (Standard)** resource type or use the **Azure Logic Apps (Standard)** extension in Visual Studio Code. This pricing model requires that logic apps use a hosting plan and a pricing tier, which differs from the Consumption plan in that you're billed for reserved capacity and dedicated resources whether or not you use them. > [!IMPORTANT]
-> When you create or deploy new logic apps based on the **Logic App (Standard)** resource type, you must use the
-> **Workflow Standard** hosting plan. Although preview versions let you use the App Service plan, Functions Premium plan,
-> and App Service Environment, these options aren't available for the **Logic App (Standard)** resource type.
+> When you create or deploy new logic apps based on the **Logic App (Standard)** resource type, you can use the Workflow Standard hosting plan in all Azure regions, or you can use the App Service hosting plan, but only when you select the **App Service Environment v3** region on the **Basics** tab.
+>
+> Although the preview **Logic App (Standard)** resource type lets you use the App Service plan, Functions Premium plan, App Service Environment v1, and App Service Environment v2, these options are no longer available or supported for the public release of this Azure Logic Apps resource type.
The following table summarizes how the Standard model handles metering and billing for the following components when used with a logic app and a workflow in single-tenant Azure Logic Apps:
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-using-sap-connector.md
Previously updated : 08/05/2021 Last updated : 08/12/2021 tags: connectors
This article explains how you can access your SAP resources from Logic Apps usin
* If you want to use the **When a message is received from SAP** trigger, you must also do the following:
- * Set up your SAP gateway security permissions with this setting:
- `"TP=Microsoft.PowerBI.EnterpriseGateway HOST=<gateway-server-IP-address> ACCESS=*"`
+ * Set up your SAP gateway security permissions or Access Control List (ACL) in the **secinfo** and **reginfo** files, which you can open from the Gateway Monitor dialog box (T-Code SMGW) by selecting **Goto > Expert Functions > External Security > Maintenance of ACL Files**. The following permission setting is required:
- * Set up your SAP gateway security logging to help find Access Control List (ACL). For more information, see the [SAP help topic for setting up gateway logging](https://help.sap.com/erp_hcm_ias2_2015_02/helpdata/en/48/b2a710ca1c3079e10000000a42189b/frameset.htm). Otherwise, you might receive this error:
- `"Registration of tp Microsoft.PowerBI.EnterpriseGateway from host <host-name> not allowed"`
+ `P TP=LOGICAPP HOST=<on-premises-gateway-server-IP-address> ACCESS=*`
+
+ This line has the following format:
+
+ `P TP=<Trading partner identifier (program name) or * for all partners> HOST=<comma separated list of external host IP or network name that can register the program> ACCESS=<* for all permissions or comma separated list of permissions>`
+
+ If you do not configure the SAP gateway security permissions, you might receive this error:
+
+ `Registration of tp Microsoft.PowerBI.EnterpriseGateway from host <host-name> not allowed`
+
+ For more information, see [SAP Note 1850230 - GW: "Registration of tp &lt;program ID&gt; not allowed"](https://userapps.support.sap.com/sap/support/knowledge/en/1850230).
+
+ * Set up your SAP gateway security logging to help find Access Control List (ACL) issues. For more information, see the [SAP help topic for setting up gateway logging](https://help.sap.com/erp_hcm_ias2_2015_02/helpdata/en/48/b2a710ca1c3079e10000000a42189b/frameset.htm).
> [!NOTE] > This trigger uses the same URI location to both renew and unsubscribe from a webhook subscription. The renewal operation uses the HTTP `PATCH` method, while the unsubscribe operation uses the HTTP `DELETE` method. This behavior might make a renewal operation appear as an unsubscribe operation in your trigger's history, but the operation is still a renewal because the trigger uses `PATCH` as the HTTP method, not `DELETE`.
The managed SAP connector integrates with SAP systems through your [on-premises
> which might include updates to resolve your problem. * [Download and install the latest SAP client library](#sap-client-library-prerequisites) on the same local computer as your on-premises data gateway.
+
+* Configure network host name and service name resolution for the host machine where you installed the on-premises data gateway. If you intend to use host names or service names for connections from Azure Logic Apps, you must set up each SAP application, message, and gateway server and their services for name resolution. Network host name resolution is configured in the `%windir%\System32\drivers\etc\hosts` file or in the DNS server that's available to your on-premises data gateway host machine. Service name resolution is configured in `%windir%\System32\drivers\etc\services`. If you don't intend to use network host names or service names for the connection, you can use host IP addresses and service port numbers instead.
+
+ If you do not have a DNS entry for your SAP system, the following example shows a sample entry for the hosts file:
+
+ ```text
+ 10.0.1.9 sapserver # SAP single-instance system host IP by simple computer name
+ 10.0.1.9 sapserver.contoso.com # SAP single-instance system host IP by fully qualified DNS name
+ ```
+
+ A sample set of entries for the services files is:
+
+ ```text
+ sapdp00 3200/tcp # SAP system instance 00 dialog (application) service port
+ sapgw00 3300/tcp # SAP system instance 00 gateway service port
+ sapmsDV6 3601/tcp # SAP system ID DV6 message service port
+ ```
### ISE prerequisites
-These prerequisites apply if you're running your logic app in a Premium-level ISE. However, they don't apply to logic apps running in a Developer-level ISE. An ISE provides access to resources that are protected by an Azure virtual network and offers other ISE-native connectors that let logic apps directly access on-premises resources without using on-premises data gateway.
+An ISE provides access to resources that are protected by an Azure virtual network and offers other ISE-native connectors that let logic app workflows directly access on-premises resources without using the on-premises data gateway.
1. If you don't already have an Azure Storage account with a blob container, create a container using either the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md) or [Azure Storage Explorer](../storage/blobs/storage-quickstart-blobs-storage-explorer.md).
These prerequisites apply if you're running your logic app in a Premium-level ISE
1. If your SAP instance and ISE are in different virtual networks, you also need to [peer those networks](../virtual-network/tutorial-connect-virtual-networks-portal.md) so they are connected. Also see the [SNC prerequisites for the ISE connector](#snc-prerequisites-ise).
+1. Get the IP addresses for the SAP application, message, and gateway servers that you plan to use for connecting from your logic app workflow. Network host name resolution is not available for SAP connections in an ISE.
+
+1. Get the port numbers for the SAP application, message, and gateway services that you plan to use for connecting from your logic app workflow. Service name resolution is not available for the SAP connector in an ISE.
+ ### SAP client library prerequisites These are the prerequisites for the SAP client library that you're using with the connector.
Next, create an action to send your IDoc message to SAP when your [HTTP request
* For **Group**, these properties, which usually appear optional, are required:
- ![Create SAP message server connection](media/logic-apps-using-sap-connector/create-SAP-message-server-connection.png)
+ ![Create SAP message server connection](media/logic-apps-using-sap-connector/create-SAP-message-server-connection.png)
+
+ In SAP, the Logon Group is maintained by opening the **CCMS: Maintain Logon Groups** (T-code SMLG) dialog box. For more information, see [SAP Note 26317 - Set up for LOGON group for automatic load balancing](https://service.sap.com/sap/support/notes/26317).
By default, strong typing is used to check for invalid values by performing XML validation against the schema. This behavior can help you detect issues earlier. The **Safe Typing** option is available for backward compatibility and only checks the string length. Learn more about the [Safe Typing option](#safe-typing).
To send IDocs from SAP to your logic app, you need the following minimum configu
1. Save your changes.
-1. Register your new **Program ID** with Azure Logic Apps.
+1. Register your new **Program ID** with Azure Logic Apps by creating a logic app workflow that starts with the SAP trigger named **When a message is received from SAP**. That way, when you save your workflow, Azure Logic Apps registers the **Program ID** on the SAP Gateway.
+
+1. Check the registration status in your workflow's trigger history, the on-premises data gateway's SAP adapter logs, and the SAP Gateway trace logs. In the SAP Gateway monitor dialog box (T-Code SMGW), under **Logged-On Clients**, the new registration should appear as **Registered Server**.
+ 1. To test your connection, in the SAP interface, under your new **RFC Destination**, select **Connection Test**.
machine-learning Concept Azure Machine Learning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-azure-machine-learning-architecture.md
For more information about these components, see [Deploy models with Azure Machi
[Workspace](#workspace) > **Endpoints**
-An endpoint is an instantiation of your model into either a web service that can be hosted in the cloud or an IoT module for integrated device deployments.
+An endpoint is an instantiation of your model into a web service that can be hosted in the cloud.
#### Web service endpoint
Pipeline endpoints let you call your [ML Pipelines](#ml-pipelines) programatical
A pipeline endpoint is a collection of published pipelines. This logical organization lets you manage and call multiple pipelines using the same endpoint. Each published pipeline in a pipeline endpoint is versioned. You can select a default pipeline for the endpoint, or specify a version in the REST call.
-#### IoT module endpoints
-A deployed IoT module endpoint is a Docker container that includes your model and associated script or application and any additional dependencies. You deploy these modules by using Azure IoT Edge on edge devices.
-
-If you've enabled monitoring, Azure collects telemetry data from the model inside the Azure IoT Edge module. The telemetry data is accessible only to you, and it's stored in your storage account instance.
-
-Azure IoT Edge ensures that your module is running, and it monitors the device that's hosting it.
## Automation ### Azure Machine Learning CLI
Azure Machine Learning provides the following monitoring and logging capabilitie
* [Track experiments with MLflow](how-to-use-mlflow.md) * [Visualize runs with TensorBoard](how-to-monitor-tensorboard.md) * For __Administrators__, you can monitor information about the workspace, related Azure resources, and events such as resource creation and deletion by using Azure Monitor. For more information, see [How to monitor Azure Machine Learning](monitor-azure-machine-learning.md).
-* For __DevOps__ or __MLOps__, you can monitor information generated by models deployed as web services or IoT Edge modules to identify problems with the deployments and gather data submitted to the service. For more information, see [Collect model data](how-to-enable-data-collection.md) and [Monitor with Application Insights](how-to-enable-app-insights.md).
+* For __DevOps__ or __MLOps__, you can monitor information generated by models deployed as web services to identify problems with the deployments and gather data submitted to the service. For more information, see [Collect model data](how-to-enable-data-collection.md) and [Monitor with Application Insights](how-to-enable-app-insights.md).
## Interacting with your workspace
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-target.md
In a typical model development lifecycle, you might:
1. Start by developing and experimenting on a small amount of data. At this stage, use your local environment, such as a local computer or cloud-based virtual machine (VM), as your compute target. 1. Scale up to larger data, or do distributed training by using one of these [training compute targets](#train).
-1. After your model is ready, deploy it to a web hosting environment or IoT device with one of these [deployment compute targets](#deploy).
+1. After your model is ready, deploy it to a web hosting environment with one of these [deployment compute targets](#deploy).
The compute resources you use for your compute targets are attached to a [workspace](concept-workspace.md). Compute resources other than the local machine are shared by users of the workspace.
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-model-management-and-deployment.md
For more information on ONNX with Azure Machine Learning, see the [Create and ac
### Use models
-Trained machine learning models are deployed as web services in the cloud or locally. You can also deploy models to Azure IoT Edge devices. Deployments use CPU, GPU, or field-programmable gate arrays (FPGA) for inferencing. You can also use models from Power BI.
+Trained machine learning models are deployed as web services in the cloud or locally. Deployments use CPU, GPU, or field-programmable gate arrays (FPGA) for inferencing. You can also use models from Power BI.
-When using a model as a web service or IoT Edge device, you provide the following items:
+When using a model as a web service, you provide the following items:
* The model(s) that are used to score data submitted to the service/device. * An entry script. This script accepts requests, uses the model(s) to score the data, and returns a response.
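To make the entry-script idea concrete, the following is a minimal sketch of the `init`/`run` pattern that Azure Machine Learning entry scripts follow. The stand-in model is hypothetical so the sketch stays self-contained; a real script would load a registered, serialized model instead:

```python
import json

model = None

def init():
    # Runs once when the web service starts. A real entry script would load
    # a serialized model here; a trivial stand-in that doubles each input
    # value keeps this sketch self-contained.
    global model
    model = lambda values: [v * 2 for v in values]

def run(raw_data):
    # Runs for each scoring request: parse the input JSON, score it with the
    # model, and return a JSON response.
    data = json.loads(raw_data)["data"]
    result = model(data)
    return json.dumps({"result": result})
```

A request body such as `{"data": [1, 2, 3]}` would be parsed by `run`, scored by the stand-in model, and returned as a JSON response.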
When using a model as a web service or IoT Edge device, you provide the followin
You also provide the configuration of the target deployment platform. For example, the VM family type, available memory, and number of cores when deploying to Azure Kubernetes Service.
-When the image is created, components required by Azure Machine Learning are also added. For example, assets needed to run the web service and interact with IoT Edge.
+When the image is created, components required by Azure Machine Learning are also added. For example, assets needed to run the web service.
#### Batch scoring

Batch scoring is supported through ML pipelines. For more information, see [Batch predictions on big data](./tutorial-pipeline-batch-scoring-classification.md).
When deploying to Azure Kubernetes Service, you can use controlled rollout to en
For more information, see [Controlled rollout of ML models](how-to-deploy-azure-kubernetes-service.md#deploy-models-to-aks-using-controlled-rollout-preview).
-#### IoT Edge devices
-
-You can use models with IoT devices through **Azure IoT Edge modules**. IoT Edge modules are deployed to a hardware device, which enables inference, or model scoring, on the device.
-
-For more information, see [Deploy models](how-to-deploy-and-where.md).
- ### Analytics Microsoft Power BI supports using machine learning models for data analytics. For more information, see [Azure Machine Learning integration in Power BI (preview)](/power-bi/service-machine-learning-integration).
machine-learning Concept Plan Manage Cost https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-plan-manage-cost.md
When you create resources for an Azure Machine Learning workspace, resources for
* [Azure Container Registry](https://azure.microsoft.com/pricing/details/container-registry?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) Basic account
* [Azure Block Blob Storage](https://azure.microsoft.com/pricing/details/storage/blobs?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) (general purpose v1)
* [Key Vault](https://azure.microsoft.com/pricing/details/key-vault?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
-* [Application Insights](https://azure.microsoft.com/en-us/pricing/details/monitor?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
+* [Application Insights](https://azure.microsoft.com/pricing/details/monitor?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
When you create a [compute instance](concept-compute-instance.md), the VM stays on so it is available for your work. [Set up a schedule](how-to-create-manage-compute-instance.md#schedule) to automatically start and stop the compute instance (preview) to save cost when you aren't planning to use it.
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-workspace.md
Last updated 07/27/2021
The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. The workspace keeps a history of all training runs, including logs, metrics, output, and a snapshot of your scripts. You use this information to determine which training run produces the best model.
-Once you have a model you like, you register it with the workspace. You then use the registered model and scoring scripts to deploy to Azure Container Instances, Azure Kubernetes Service, or to a field-programmable gate array (FPGA) as a REST-based HTTP endpoint. You can also deploy the model to an Azure IoT Edge device as a module.
+Once you have a model you like, you register it with the workspace. You then use the registered model and scoring scripts to deploy to Azure Container Instances, Azure Kubernetes Service, or to a field-programmable gate array (FPGA) as a REST-based HTTP endpoint.
## Taxonomy
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-azureml-behind-firewall.md
Previously updated : 07/29/2021 Last updated : 08/12/2021
These rule collections are described in more detail in [What are some Azure Fire
### Inbound configuration
-When using Azure Machine Learning __compute instance__ or __compute cluster__, allow inbound traffic from Azure Batch management and Azure Machine Learning services. When creating the user-defined routes for this traffic, you can use either **IP Addresses** or **service tags** to route the traffic.
-
-> [!IMPORTANT]
-> Using service tags with user-defined routes is currently in preview and may not be fully supported. For more information, see [Virtual Network routing](../virtual-network/virtual-networks-udr-overview.md#service-tags-for-user-defined-routes-preview).
-
-# [IP Address routes](#tab/ipaddress)
-
-For the Azure Machine Learning service, you must add the IP address of both the __primary__ and __secondary__ regions. To find the secondary region, see the [Ensure business continuity & disaster recovery using Azure Paired Regions](../best-practices-availability-paired-regions.md#azure-regional-pairs). For example, if your Azure Machine Learning service is in East US 2, the secondary region is Central US.
-
-To get a list of IP addresses of the Batch service and Azure Machine Learning service, use one of the following methods:
-
-* Download the [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519) and search the file for `BatchNodeManagement.<region>` and `AzureMachineLearning.<region>`, where `<region>` is your Azure region.
-
-* Use the [Azure CLI](/cli/azure/install-azure-cli) to download the information. The following example downloads the IP address information and filters out the information for the East US 2 region (primary) and Central US region (secondary):
-
- ```azurecli-interactive
- az network list-service-tags -l "East US 2" --query "values[?starts_with(id, 'Batch')] | [?properties.region=='eastus2']"
- # Get primary region IPs
- az network list-service-tags -l "East US 2" --query "values[?starts_with(id, 'AzureMachineLearning')] | [?properties.region=='eastus2']"
- # Get secondary region IPs
- az network list-service-tags -l "Central US" --query "values[?starts_with(id, 'AzureMachineLearning')] | [?properties.region=='centralus']"
- ```
-
- > [!TIP]
- > If you are using the US-Virginia, US-Arizona regions, or China-East-2 regions, these commands return no IP addresses. Instead, use one of the following links to download a list of IP addresses:
- >
- > * [Azure IP ranges and service tags for Azure Government](https://www.microsoft.com/download/details.aspx?id=57063)
- > * [Azure IP ranges and service tags for Azure China](https://www.microsoft.com//download/details.aspx?id=57062)
-
-> [!IMPORTANT]
-> The IP addresses may change over time.
-
-When creating the UDR, set the __Next hop type__ to __Internet__. The following image shows an example IP address based UDR in the Azure portal:
--
-# [Service tag routes](#tab/servicetag)
-
-Create user-defined routes for the following service tags:
-
-* `AzureMachineLearning`
-* `BatchNodeManagement.<region>`, where `<region>` is your Azure region.
-
-The following commands demonstrate adding routes for these service tags:
-
-```azurecli
-az network route-table route create -g MyResourceGroup --route-table-name MyRouteTable -n AzureMLRoute --address-prefix AzureMachineLearning --next-hop-type Internet
-az network route-table route create -g MyResourceGroup --route-table-name MyRouteTable -n BatchRoute --address-prefix BatchNodeManagement.westus2 --next-hop-type Internet
-```
---
-For information on configuring UDR, see [Route network traffic with a routing table](../virtual-network/tutorial-create-route-table-portal.md).
### Outbound configuration
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-data.md
Azure Machine Learning provides several ways to use your models for scoring. Som
| -- | :--: | -- |
| [Batch prediction](./tutorial-pipeline-batch-scoring-classification.md) | ✔ | Make predictions on large quantities of data asynchronously. |
| [Web service](how-to-deploy-and-where.md) | &nbsp; | Deploy models as a web service. |
-| [Azure IoT Edge module](how-to-deploy-and-where.md) | &nbsp; | Deploy models to IoT Edge devices. |
For situations where the SDK doesn't provide access to datastores, you might be able to create custom code by using the relevant Azure SDK to access the data. For example, the [Azure Storage SDK for Python](https://github.com/Azure/azure-storage-python) is a client library that you can use to access data stored in blobs or files.
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-register-datasets.md
To create and work with datasets, you need:
* Work on your own Jupyter notebook and [install the SDK yourself](/python/api/overview/azure/ml/install).

> [!NOTE]
-> Some dataset classes have dependencies on the [azureml-dataprep](https://pypi.org/project/azureml-dataprep/) package, which is only compatible with 64-bit Python. For If you are developing on Linux, these classes are supported only on the following distributions: Red Hat Enterprise Linux (7, 8), Ubuntu (18.04), Debian (9), and CentOS (7). If you are using unsupported distros, please follow [this guide](/dotnet/core/install/linux) to install .NET Core 2.1 to proceed.
+> Some dataset classes have dependencies on the [azureml-dataprep](https://pypi.org/project/azureml-dataprep/) package, which is only compatible with 64-bit Python. If you are developing on __Linux__, these classes rely on .NET Core 2.1, and are only supported on specific distributions. For more information on the supported distros, see the .NET Core 2.1 column in the [Install .NET on Linux](/dotnet/core/install/linux) article.
> [!IMPORTANT]
-> While the package may work on older versions of these Linux distros, we do not recommend using a distro that is out of mainstream support. Distros that are out of mainstream support may have security vulnerabilities, as they do not receive the latest updates. We recommend using the latest supported version of your distro, or the latest one supported with the azureml-dataprep package.
+> While the package may work on older versions of Linux distros, we do not recommend using a distro that is out of mainstream support. Distros that are out of mainstream support may have security vulnerabilities, as they do not receive the latest updates. We recommend using the latest supported version of your distro that is compatible with .NET Core 2.1.
## Compute size guidance
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-and-where.md
Title: How to deploy machine learning models
-description: 'Learn how and where to deploy machine learning models. Deploy to Azure Container Instances, Azure Kubernetes Service, Azure IoT Edge, and FPGA.'
+description: 'Learn how and where to deploy machine learning models. Deploy to Azure Container Instances, Azure Kubernetes Service, and FPGA.'
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-training-vnet.md
When the creation process finishes, you train your model by using the cluster in
[!INCLUDE [low-pri-note](../../includes/machine-learning-low-pri-vm.md)]
+### Inbound traffic
++
+For more information on input and output traffic requirements for Azure Machine Learning, see [Use a workspace behind a firewall](how-to-access-azureml-behind-firewall.md).
+ ## Azure Databricks For specific information on using Azure Databricks with a virtual network, see [Deploy Azure Databricks in your Azure Virtual Network](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject).
machine-learning How To Track Monitor Analyze Runs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-track-monitor-analyze-runs.md
This article shows how to do the following tasks:
> [!TIP]
> If you're looking for information on monitoring the Azure Machine Learning service and associated Azure services, see [How to monitor Azure Machine Learning](monitor-azure-machine-learning.md).
-> If you're looking for information on monitoring models deployed as web services or IoT Edge modules, see [Collect model data](how-to-enable-data-collection.md) and [Monitor with Application Insights](how-to-enable-app-insights.md).
+> If you're looking for information on monitoring models deployed as web services, see [Collect model data](how-to-enable-data-collection.md) and [Monitor with Application Insights](how-to-enable-app-insights.md).
## Prerequisites
machine-learning Monitor Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/monitor-azure-machine-learning.md
When you have critical applications and business processes relying on Azure reso
> * [Track experiments with MLflow](how-to-use-mlflow.md)
> * [Visualize runs with TensorBoard](how-to-monitor-tensorboard.md)
>
-> If you want to monitor information generated by models deployed as web services or IoT Edge modules, see [Collect model data](how-to-enable-data-collection.md) and [Monitor with Application Insights](how-to-enable-app-insights.md).
+> If you want to monitor information generated by models deployed as web services, see [Collect model data](how-to-enable-data-collection.md) and [Monitor with Application Insights](how-to-enable-app-insights.md).
## What is Azure Monitor?
machine-learning Overview What Happened To Workbench https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/overview-what-happened-to-workbench.md
The images that you created in your old image registry cannot be directly migrat
Now that support for the old CLI has ended, you can no longer redeploy models or manage the web services you originally deployed with your Model Management account. However, those web services will continue to work for as long as Azure Container Service (ACS) is still supported.
-In the latest version, models are deployed as web services to Azure Container Instances (ACI) or Azure Kubernetes Service (AKS) clusters. You can also deploy to FPGAs and to Azure IoT Edge.
+In the latest version, models are deployed as web services to Azure Container Instances (ACI) or Azure Kubernetes Service (AKS) clusters. You can also deploy to FPGAs.
Learn more in these articles:

+ [Where and how to deploy models](how-to-deploy-and-where.md)
marketplace Marketplace Metering Service Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-metering-service-apis.md
Description of status code referenced in `BatchUsageEvent` API response:
| `Error` | Error code. |
| `ResourceNotFound` | The usage resource provided is invalid. |
| `ResourceNotAuthorized` | You are not authorized to provide usage for this resource. |
+| `ResourceNotActive` | The resource is suspended or was never activated. |
| `InvalidDimension` | The dimension for which the usage is passed is invalid for this offer/plan. |
| `InvalidQuantity` | The quantity passed is lower than or equal to 0. |
| `BadArgument` | The input is missing or malformed. |
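A caller processing a `BatchUsageEvent` response usually buckets the per-record status codes above before deciding what to correct and resubmit. A minimal sketch; only the status codes come from the table, while the response field names (`result`, `status`, `resourceId`) are assumptions for illustration:

```python
import json
from collections import defaultdict

# Hypothetical BatchUsageEvent response body; field names are assumptions.
response_body = """{
  "result": [
    {"resourceId": "res-1", "status": "Accepted"},
    {"resourceId": "res-2", "status": "ResourceNotActive"},
    {"resourceId": "res-3", "status": "InvalidQuantity"}
  ]
}"""

def bucket_by_status(body):
    # Group usage records by their per-record status code so rejected
    # records can be inspected, corrected, and resubmitted.
    buckets = defaultdict(list)
    for record in json.loads(body)["result"]:
        buckets[record["status"]].append(record["resourceId"])
    return dict(buckets)
```

Grouping this way makes it easy to log `ResourceNotActive` records separately from input errors such as `InvalidQuantity`.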
marketplace Private Offers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/private-offers.md
Private offers will also appear in search results and can be deployed via comman
[![Private offers appearing in search results.](media/marketplace-publishers-guide/private-offer.png)](media/marketplace-publishers-guide/private-offer.png#lightbox)
-Private offers will also appear in search results. Just look for the **Private** badge.
>[!Note]
>Private offers are not supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program.
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/whats-new.md
## Update (August 2021) -- At-scale discovery and assessment of ASP.NET web apps running on IIS servers in your VMware environment, is now in preview. [Learn More](concepts-azure-sql-assessment-calculation.md) Refer to the [Discovery](tutorial-discover-vmware.md) and [assessment](tutorial-assess-sql.md) tutorials to get started.
+- At-scale discovery and assessment of ASP.NET web apps running on IIS servers in your VMware environment is now in preview. [Learn more](concepts-azure-webapps-assessment-calculation.md). Refer to the [Discovery](tutorial-discover-vmware.md) and [assessment](tutorial-assess-webapps.md) tutorials to get started.
- Support for Azure [ultra disks](https://docs.microsoft.com/azure/virtual-machines/disks-types#ultra-disk) in Azure VM assessment recommendation.
+- General availability of at-scale software inventory and agentless dependency analysis for VMware virtual machines.
+- Azure Migrate appliance updates:
+ - 'Diagnose and solve' on the appliance to help users identify and self-assess any issues with the appliance.
+ - Unified installer script: a common script where users select the scenario, cloud, and connectivity options to deploy an appliance with the desired configuration.
+ - Support to add a user account with 'sudo' access on the appliance configuration manager to perform discovery of Linux servers (as an alternative to providing a root account or enabling setcap permissions).
+ - Support to edit the SQL Server connection properties on the appliance configuration manager.
## Update (July 2021)

- Azure Migrate: App Containerization tool now lets you package applications running on servers into a container image and deploy the containerized application to Azure App Service containers, in addition to Azure Kubernetes Service. You can also automatically integrate application monitoring for Java apps with Azure Application Insights and use Azure Key Vault to manage application secrets such as certificates and parameterized configurations. For more information, see the [ASP.NET app containerization and migration to Azure App Service](tutorial-app-containerization-aspnet-app-service.md) and [Java web app containerization and migration to Azure App Service](tutorial-app-containerization-java-app-service.md) tutorials.

## Update (June 2021)
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/concepts-high-availability.md
Previously updated : 01/29/2021 Last updated : 08/10/2021 # High availability concepts in Azure Database for MySQL Flexible Server (Preview)
Last updated 01/29/2021
> [!IMPORTANT]
> Azure Database for MySQL - Flexible Server is currently in public preview.
-Azure Database for MySQL Flexible Server (Preview), allows configuring high availability with automatic failover using zone redundant high availability option. When deployed in a zone redundant configuration, flexible server automatically provisions and manages a standby replica in a different availability zone.
+Azure Database for MySQL Flexible Server (Preview) allows configuring high availability with automatic failover. When high availability is configured, flexible server automatically provisions and manages a standby replica using one of two options:
+
+* **Zone Redundant High Availability**: this option is preferred for complete isolation and redundancy of infrastructure across multiple availability zones. It provides the highest level of availability, but it requires you to configure application redundancy across zones. Zone redundant HA is preferred when you want to protect against any infrastructure failure within an availability zone and where latency across availability zones is acceptable. Zone redundant HA is available in a [subset of Azure regions](https://docs.microsoft.com/azure/mysql/flexible-server/overview#azure-regions) that support multiple availability zones.
+
+* **Same-Zone High Availability**: this option is preferred for infrastructure redundancy with lower network latency, as both the primary and standby server will be in the same availability zone. It provides high availability without requiring you to configure application redundancy across zones. Same-Zone HA is preferred when you want to achieve the highest level of availability within a single availability zone with the lowest network latency. Same-Zone HA is available in all [Azure regions in which Flexible Server is available](https://docs.microsoft.com/azure/mysql/flexible-server/overview#azure-regions).
+
+## Zone Redundant High Availability
When the flexible server is created with zone redundant high availability enabled, the data and log files are hosted in [zone-redundant storage (ZRS)](https://docs.microsoft.com/azure/storage/common/storage-redundancy#redundancy-in-the-primary-region). Using the storage-level replication available with ZRS, the data and log files are synchronously replicated to the standby server to ensure zero data loss. The failover is fully transparent to the client application and doesn't require any user action. The recovery of the standby server during failover depends on binary log application on the standby, so it is advised to use primary keys on all tables to reduce failover time. The standby server is not available for any read or write operations; it is a passive standby that enables fast failover. Failover times typically range from 60-120 seconds.
-Zone redundant high availability configuration enables automatic failover during planned events such as user-initiated scale compute operations, and unplanned events such as underlying hardware and software faults, network failures, and even availability zone failures.
+> [!Note]
+> Zone redundant HA might cause a 5-10% increase in latency if the application connects to the database server across availability zones, where network latency is relatively higher (on the order of 2-4 ms).
++
+### Zone Redundancy Architecture
+
+The primary server is deployed in the region and a specific availability zone. When high availability is chosen, a standby replica server with the same configuration as the primary server, including compute tier, compute size, storage size, and network configuration, is automatically deployed in the specified availability zone. The log data is synchronously replicated to the standby replica to ensure zero data loss in any failure situation. Automatic backups, both snapshots and log backups, are performed on zone redundant storage from the primary database server.
+
+### Standby Zone Selection
+In the zone redundant high availability scenario, you may choose the zone location for the standby server. Co-locating the standby database server and standby applications in the same zone reduces latency and allows users to better prepare for disaster recovery situations and "zone down" scenarios.
+## Same-Zone High Availability
-## Zone redundancy Architecture
+When the flexible server is created with same-zone high availability enabled, the data and log files are hosted in [locally redundant storage (LRS)](https://docs.microsoft.com/azure/storage/common/storage-redundancy#locally-redundant-storage). Using the storage-level replication available with LRS, the data and log files are synchronously replicated to the standby server to ensure zero data loss. The standby server offers infrastructure redundancy with a separate virtual machine (compute), which reduces failover time and, due to colocation, network latency between the user application and the database server. The failover is fully transparent to the client application and doesn't require any user action. The recovery of the standby server during failover depends on binary log application on the standby, so it is advised to use primary keys on all tables to reduce failover time. The standby server is not available for any read or write operations; it is a passive standby that enables fast failover. Failover times typically range from 60-120 seconds.
-The primary server is deployed in the region and a specific availability zone. When the high availability is chosen, a standby replica server with the same configuration as that of the primary server is automatically deployed, including compute tier, compute size, storage size, and network configuration. The log data is synchronously replicated to the standby replica to ensure zero data loss in any failure situation. Automatic backups, both snapshots and log backups, are performed from the primary database server.
+Same-Zone high availability enables users to place a standby server in the same zone as the primary server, which reduces the replication lag between primary and standby. This also provides lower latency between the application server and database server if they are placed within the same Azure availability zone.
-The health of the HA is continuously monitored and reported on the overview page.
-The various replication statuses are listed below:
+## High Availability Monitoring
+The health of the HA is continuously monitored and reported on the overview page. The various replication statuses are listed below:
| **Status** | **Description** |
| :-- | :-- |
Here are some advantages for using zone redundancy HA feature:
- Standby replica deploys in an exact VM configuration as that of primary such as vCores, storage, network settings (VNET, Firewall), etc. - Ability to remove standby replica by disabling high availability.-- Automatic backups are snapshot-based, performed from the primary database server and stored in a zone redundant storage.
+- Automatic backups are snapshot-based, performed from the primary database server and stored in a zone redundant storage or locally redundant storage depending on the high availability option.
- In the event of failover, Azure Database for MySQL flexible server automatically fails over to the standby replica if high availability is enabled. The high availability setup monitors the primary server and brings it back online.
- Clients always connect to the primary database server.
- If there is a database crash or node failure, the flexible server VM is restarted on the same node. At the same time, an automatic failover is triggered. If the flexible server VM restart is successful before the failover finishes, the failover operation will be canceled.
- Ability to restart the server to pick up any static server parameter changes.
-## Steady-state operations
+## Steady-state Operations
Applications are connected to the primary server using the database server name. The standby replica information is not exposed for direct access. Commits and writes are acknowledged after the log files are flushed to the primary server's zone-redundant storage (ZRS). Due to the synchronous replication technology used in ZRS storage, applications can expect minor latency for writes and commits.
-## Failover process
+## Failover Process
For business continuity, you need to have a failover process for planned and unplanned events.

>[!NOTE]
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/overview.md
Title: Overview - Azure Database for MySQL - Flexible Server description: Learn about the Azure Database for MySQL Flexible server, a relational database service in the Microsoft cloud based on the MySQL Community Edition. - + Previously updated : 6/19/2021 Last updated : 08/10/2021 # Azure Database for MySQL - Flexible Server (Preview)
See [Compute and Storage concepts](concepts-compute-storage.md) to learn more.
MySQL is one of the popular database engines for running internet-scale web and mobile applications. Many of our customers use it for their online education services, video streaming services, digital payment solutions, e-commerce platforms, gaming services, news portals, government, and healthcare websites. These services are required to serve and scale as the traffic on the web or mobile application increases.
-On the applications side, the application is typically developed in Java or php and migrated to run on [Azure virtual machine scale sets](../../virtual-machine-scale-sets/overview.md) or [Azure App Services](../../app-service/overview.md) or are containerized to run on [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md). With virtual machine scale set, App Service or AKS as underlying infrastructure, application scaling is simplified by instantaneously provisioning new VMs and replicating the stateless components of applications to cater to the requests but often, database ends up being a bottleneck as centralized stateful component.
+On the applications side, the application is typically developed in Java or PHP and migrated to run on [Azure virtual machine scale sets](../../virtual-machine-scale-sets/overview.md) or [Azure App Services](../../app-service/overview.md), or is containerized to run on [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md). With virtual machine scale sets, App Service, or AKS as the underlying infrastructure, application scaling is simplified by instantaneously provisioning new VMs and replicating the stateless components of applications to cater to the requests, but the database often ends up being a bottleneck as the centralized stateful component.
The read replica feature allows you to replicate data from an Azure Database for MySQL flexible server to a read-only server. You can replicate from the source server to **up to 10 replicas**. Replicas are updated asynchronously using the MySQL engine's native [binary log (binlog) file position-based replication technology](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html). You can use a load balancer proxy solution like [ProxySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/load-balance-read-replicas-using-proxysql-in-azure-database-for/ba-p/880042) to seamlessly scale out your application workload to read replicas without any application refactoring cost.
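A proxy like ProxySQL performs this read/write routing transparently; the same scale-out idea can also be sketched at the application layer. A minimal illustration, where the hostnames and the round-robin policy are hypothetical:

```python
import itertools

# Hypothetical server names; replicas serve read-only traffic.
PRIMARY = "myserver.mysql.database.azure.com"
REPLICAS = [
    "myserver-replica-1.mysql.database.azure.com",
    "myserver-replica-2.mysql.database.azure.com",
]

_replica_cycle = itertools.cycle(REPLICAS)

def pick_host(sql):
    # Route writes to the primary and round-robin reads across replicas.
    # Replication is asynchronous, so replica reads may be slightly stale.
    if sql.lstrip().upper().startswith("SELECT"):
        return next(_replica_cycle)
    return PRIMARY
```

In practice a proxy solution avoids embedding this policy in every application, which is why the ProxySQL approach linked above requires no application refactoring.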
One advantage of running your workload in Azure is its global reach. The flexibl
| Central US | :heavy_check_mark: | :x: |
| East US | :heavy_check_mark: | :heavy_check_mark: |
| East US 2 | :heavy_check_mark: | :heavy_check_mark: |
-| France Central | :heavy_check_mark: | :x:|
+| France Central | :heavy_check_mark: | :heavy_check_mark:|
| Germany West Central | :heavy_check_mark: | :x: |
| Japan East | :heavy_check_mark: | :heavy_check_mark: |
| Korea Central | :heavy_check_mark: | :x: |
Now that you've read an introduction to Azure Database for MySQL - Single Server
- Build your first app using your preferred language:
  - [Python](connect-python.md)
- - [Php](connect-php.md)
+ - [PHP](connect-php.md)
mysql Quickstart Create Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/quickstart-create-server-portal.md
Complete these steps to create a flexible server:
Region|The region closest to your users| The location that's closest to your users.|
Workload type| Development | For production workloads, you can choose Small/Medium-size or Large-size depending on [max_connections](concepts-server-parameters.md#max_connections) requirements|
Availability zone| No preference | If your application in Azure VMs, virtual machine scale sets, or an AKS instance is provisioned in a specific availability zone, you can place your flexible server in the same availability zone to collocate the application and database, improving performance by cutting down network latency across zones.|
- High Availability| Default | For production servers, enabling zone redundant high availability (HA) is highly recommended for business continuity and protection against zone failures|
+ High Availability| Unchecked | For production servers, choose between [zone redundant high availability](https://docs.microsoft.com/azure/mysql/flexible-server/concepts-high-availability#zone-redundant-high-availability) and [same-zone high availability](https://docs.microsoft.com/azure/mysql/flexible-server/concepts-high-availability#same-zone-high-availability). This is highly recommended for business continuity and protection against VM failures|
+ |Standby availability zone| No preference| Choose the standby server zone location and colocate it with the application standby server in case of zone failure |
MySQL version|**5.7**| A MySQL major version.| Admin username |**mydemouser**| Your own sign-in account to use when you connect to the server. The admin user name can't be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**.| Password |Your password| A new password for the server admin account. It must contain between 8 and 128 characters. It must also contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, and so on).|
- Compute + storage | **Burstable**, **Standard_B1ms**, **10 GiB**, **100 iops**, **7 days** | The compute, storage, IOPS, and backup configurations for your new server. Select **Configure server**. **Burstable**, **Standard_B1ms**, **10 GiB**, **100 iops**, and **7 days** are the default values for **Compute tier**, **Compute size**, **Storage size**, **iops**, and backup **Retention period**. You can leave those values as is or adjust them. For faster data loads during migration, it is recommended to increase the IOPS to the maximum size supported by compute size and later scale it back to save cost. To save the compute and storage selection, select **Save** to continue with the configuration. The following screenshot shows the compute and storage options.|
-
+ Compute + storage | **Burstable**, **Standard_B1ms**, **10 GiB**, **100 IOPS**, **7 days** | The compute, storage, IOPS, and backup configurations for your new server. Select **Configure server**. **Burstable**, **Standard_B1ms**, **10 GiB**, **100 IOPS**, and **7 days** are the default values for **Compute tier**, **Compute size**, **Storage size**, **IOPS**, and backup **Retention period**. You can leave those values as is or adjust them. For faster data loads during migration, it is recommended to increase the IOPS to the maximum size supported by compute size and later scale it back to save cost. To save the compute and storage selection, select **Save** to continue with the configuration. The following screenshot shows the compute and storage options.|
+
+
+ > :::image type="content" source="./media/quickstart-create-server-portal/high-availability.png" alt-text="Screenshot that shows high availability options.":::
+ > :::image type="content" source="./media/quickstart-create-server-portal/compute-storage.png" alt-text="Screenshot that shows compute and storage options."::: 5. Configure networking options.
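The admin password policy described above (8 to 128 characters, with characters from at least three of the four categories) can be sketched as a quick pre-flight check. This is a hypothetical helper for illustration only, not part of any Azure tooling, and the portal remains the authoritative validator:

```python
import re

# Hypothetical pre-flight check mirroring the stated admin password policy:
# 8 to 128 characters, drawn from at least three of these four categories.
def is_valid_admin_password(password: str) -> bool:
    if not (8 <= len(password) <= 128):
        return False
    categories = (
        r"[A-Z]",          # English uppercase letters
        r"[a-z]",          # English lowercase letters
        r"[0-9]",          # numbers (0 through 9)
        r"[^A-Za-z0-9]",   # non-alphanumeric characters (!, $, #, %, and so on)
    )
    return sum(bool(re.search(p, password)) for p in categories) >= 3
```

A check like this only catches obvious mistakes early; the server-side policy is what actually gates creation.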
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/whats-new.md
Previously updated : 08/06/2021 Last updated : 08/12/2021 # What's new in Azure Database for MySQL - Flexible Server (Preview)?
This release of Azure Database for MySQL - Flexible Server includes the followin
- East Asia (Hong Kong) - Central India
+- **Known issue**
+
+ - Right after Zone-Redundant high availability server failover, clients fail to connect to the server if using SSL with ssl_mode VERIFY_IDENTITY. This issue can be mitigated by using ssl_mode as VERIFY_CA.
+ - Unable to create Same-Zone High availability server in the following regions: Central India, East Asia, Korea Central, South Africa North, Switzerland North.
+ - In a rare scenario after HA failover, the primary server can be left in read-only mode. Resolve the issue by updating the "read_only" value from the server parameters blade to OFF.
+ - After successfully scaling compute in the Compute+Storage blade, IOPS is reset to the SKU default. Customers can work around the issue by rescaling IOPS in the Compute+Storage blade to the desired value (previously set) after the compute deployment and consequent IOPS reset.
+ ## July 2021 This release of Azure Database for MySQL - Flexible Server includes the following updates.
networking Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/fundamentals/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure networking description: Sample Azure Resource Graph queries for Azure networking showing use of resource types and tables to access Azure networking related resources and properties. Previously updated : 08/04/2021 Last updated : 08/09/2021
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/networking-partners-msp.md
Use the links in this section for more information about managed cloud networkin
|[Zertia](https://zertia.es/)||[Express Route – Intercloud connectivity](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-inter-conect-of103?tab=Overview)|[Enterprise Connectivity Suite - Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-vw-suite-of101?tab=Overview); [Manage Virtual WAN – SD-WAN Fortinet](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-fortinet-of101?tab=Overview); [Manage Virtual WAN – SD-WAN Cisco Meraki](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-cisco-of101?tab=Overview); [Manage Virtual WAN – SD-WAN Citrix](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-citrix-of101?tab=Overview);||| Azure Marketplace offers for Managed ExpressRoute, Virtual WAN, Security Services and Private Edge Zone Services from the following Azure Networking MSP Partners are on our roadmap:
-[Amdocs](https://www.amdocs.com/); [Cirrus Core Networks](https://cirruscorenetworks.com/); [Cognizant](https://www.cognizant.com/cognizant-digital-systems-technology/cloud-enablement-services); [Deutsche Telekom](https://www.telekom.com/en/media/media-information/archive/deutsche-telekom-offers-managed-network-services-for-microsoft-azure-598406); [InterCloud](https://intercloud.com/partners/microsoft-azure/); [KINX](https://www.kinx.net/service/cloud/?lang=en); [OmniClouds](https://omniclouds.com/); [Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms); [SES](https://www.ses.com/networks/cloud/ses-and-azure-expressroute); [Telia](https://business.teliacompany.com/global-solutions/Business-Defined-Networking/Hybrid-Networking);
+[Amdocs](https://www.amdocs.com/); [Cirrus Core Networks](https://cirruscorenetworks.com/); [Cognizant](https://www.cognizant.com/cognizant-digital-systems-technology/cloud-enablement-services); [Deutsche Telekom](https://www.telekom.com/en/media/media-information/archive/deutsche-telekom-offers-managed-network-services-for-microsoft-azure-598406); [InterCloud](https://intercloud.com/partners/microsoft-azure/); [KINX](https://www.kinx.net/service/cloud/?lang=en); [OmniClouds](https://omniclouds.com/); [Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms); [Servent](https://www.servent.co.uk/); [SES](https://www.ses.com/networks/cloud/ses-and-azure-expressroute); [Telia](https://business.teliacompany.com/global-solutions/Business-Defined-Networking/Hybrid-Networking);
## <a name="expressroute"></a>ExpressRoute partners
open-datasets How To Create Azure Machine Learning Dataset From Open Dataset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset.md
By creating an [Azure Machine Learning dataset](../machine-learning/how-to-creat
To understand where datasets fit in Azure Machine Learning's overall data access workflow, see the [Securely access data](../machine-learning/concept-data.md#data-workflow) article.
-Azure Open Datasets are curated public datasets that you can use to add scenario-specific features to enrich your predictive solutions and improve their accuracy. See the [Open Datasets catalog](https://azure.microsoft.com/en-in/services/open-datasets/catalog/) for public-domain data that can help you train machine learning models, like:
+Azure Open Datasets are curated public datasets that you can use to add scenario-specific features to enrich your predictive solutions and improve their accuracy. See the [Open Datasets catalog](https://azure.microsoft.com/services/open-datasets/catalog/) for public-domain data that can help you train machine learning models, like:
* [weather](https://azure.microsoft.com/services/open-datasets/catalog/noaa-integrated-surface-data/) * [census](https://azure.microsoft.com/services/open-datasets/catalog/us-decennial-census-zip/)
openshift Intro Openshift https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/intro-openshift.md
Azure Red Hat OpenShift nodes run on Azure virtual machines. You can connect sto
## Service Level Agreement
-Azure Red Hat OpenShift offers a Service Level Agreement to guarantee that the service will be available 99.95% of the time. For more details on the SLA, see [Azure Red Hat OpenShift SLA](https://azure.microsoft.com/en-au/support/legal/sla/openshift/v1_0/).
+Azure Red Hat OpenShift offers a Service Level Agreement to guarantee that the service will be available 99.95% of the time. For more details on the SLA, see [Azure Red Hat OpenShift SLA](https://azure.microsoft.com/support/legal/sla/openshift/v1_0/).
## Next steps
openshift Tutorial Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/tutorial-create-cluster.md
Next, you will create a virtual network containing two empty subnets. If you hav
An Azure resource group is a logical group in which Azure resources are deployed and managed. When you create a resource group, you are asked to specify a location. This location is where resource group metadata is stored, and it is also where your resources run in Azure if you don't specify another region during resource creation. Create a resource group using the [az group create](/cli/azure/group#az_group_create) command. > [!NOTE]
- > Azure Red Hat OpenShift is not available in all regions where an Azure resource group can be created. See [Available regions](https://azure.microsoft.com/en-gb/global-infrastructure/services/?products=openshift) for information on where Azure Red Hat OpenShift is supported.
+ > Azure Red Hat OpenShift is not available in all regions where an Azure resource group can be created. See [Available regions](https://azure.microsoft.com/global-infrastructure/services/?products=openshift) for information on where Azure Red Hat OpenShift is supported.
```azurecli-interactive az group create \
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/partner-solutions/elastic/overview.md
Here are the key capabilities provided by the Elastic integration with Azure:
## Elastic links
-For more help with using the Elastic service, see the [Elastic documentation](https://azure-native-02.docs-preview.app.elstc.co/guide/en/cloud/master/ec-azure-marketplace-native.html) for Azure integration.
+For more help with using the Elastic service, see the [Elastic documentation](https://www.elastic.co/guide/en/cloud/current/ec-azure-marketplace-native.html) for Azure integration.
## Next steps
-To create an instance of Elastic, see [QuickStart: Get started with Elastic](create.md).
+To create an instance of Elastic, see [QuickStart: Get started with Elastic](create.md).
postgresql Concepts Intelligent Tuning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-intelligent-tuning.md
+
+ Title: Intelligent Tuning - Azure Database for PostgreSQL - Flexible Server
+description: This article describes Intelligent Tuning features in Azure Database for PostgreSQL - Flexible Server.
++++ Last updated : 08/04/2021++
+# Perform Intelligent Tuning on your PostgreSQL flexible server
+
+> [!IMPORTANT]
+> Azure Database for PostgreSQL - Flexible Server is in preview
+
+> [!IMPORTANT]
+> Intelligent Tuning is in preview
+
+**Applies to:** Azure Database for PostgreSQL - Flexible Server versions 11 and above
+
+The Intelligent Tuning feature in Azure Database for PostgreSQL Flexible Server automatically improves your database's performance. Intelligent Tuning adjusts the `checkpoint_completion_target`, `min_wal_size`, and `max_wal_size` server parameters based on usage patterns and values. The service queries statistics about your database every 30 minutes and makes continuous adjustments to optimize your performance without any interaction from the user.
+
+## Enabling Intelligent Tuning
+
+Intelligent Tuning is an opt-in feature, so it isn't active by default on a server. The tuning is available for singular databases and isn't global, so enabling it on one database doesn't enable it on all databases connected.
+
+### Enable Intelligent Tuning using the Azure portal
+
+1. Sign in to the Azure portal and select your Azure Database for PostgreSQL server.
+2. Select Server Parameters in the Settings section of the menu.
+3. Search for the Intelligent Tuning parameter.
+4. Set the value to True and Save.
+
+Allow up to 35 minutes for the first batch of data to persist in the azure_sys database.
+
+## Information about Intelligent Tuning
+
+Intelligent Tuning adjusts three main server parameters:
+
+* `checkpoint_completion_target`
+* `min_wal_size`
+* `max_wal_size`
+
+These three parameters mostly affect the following:
+
+* The duration of checkpoints
+* The frequency of checkpoints
+* The duration of synchronizations
+
+Intelligent Tuning operates in both directions: it attempts to lower durations during high workloads and to increase them during idle segments. This way, users get personalized results during demanding time periods without manual updates.
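As rough intuition for why `max_wal_size` matters here: PostgreSQL triggers a checkpoint when the WAL written since the last checkpoint approaches `max_wal_size`, so the WAL headroom divided by the WAL write rate approximates the interval between size-triggered checkpoints. A back-of-the-envelope sketch (illustration only, not part of the service):

```python
# Illustration only: estimate the interval between WAL-size-triggered
# checkpoints. More WAL headroom at the same write rate spaces checkpoints
# further apart, which is the kind of trade-off Intelligent Tuning
# adjusts automatically.
def checkpoint_interval_seconds(max_wal_size_mb: float,
                                wal_write_rate_mb_per_s: float) -> float:
    if wal_write_rate_mb_per_s <= 0:
        raise ValueError("WAL write rate must be positive")
    return max_wal_size_mb / wal_write_rate_mb_per_s
```

For example, 1024 MB of WAL headroom at a sustained 4 MB/s of WAL writes gives a size-triggered checkpoint roughly every 256 seconds; doubling the headroom roughly doubles the spacing.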
+
+## Limitations and known issues
+
+* Optimizations are issued only within specific ranges, so it's possible that no changes will be made.
+* Intelligent Tuning can be delayed by deleted databases in the query, which can cause slight delays in the execution of improvements.
+
+Optimizations are currently made only in the storage section; expansion into other categories is still to be determined.
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Web Apps (Microsoft.Web/sites) / sites | privatelink.azurewebsites.net | azurewebsites.net | | Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) / amlworkspace | privatelink.api.azureml.ms<br/>privatelink.notebooks.azure.net | api.azureml.ms<br/>notebooks.azure.net<br/>instances.azureml.ms<br/>aznbcontent.net | | SignalR (Microsoft.SignalRService/SignalR) / signalR | privatelink.service.signalr.net | service.signalr.net |
-| Azure Monitor (Microsoft.Insights/privateLinkScopes) / azuremonitor | privatelink.monitor.azure.com<br/> privatelink.oms.opinsights.azure.com <br/> privatelink.ods.opinsights.azure.com <br/>privatelink.agentsvc.azure-automation.net <br/> privatelink.blob.core.windows.net | monitor.azure.com<br/> oms.opinsights.azure.com<br/> ods.opinsights.azure.com<br/> agentsvc.azure-automation.net<br/> blob.core.windows.net |
+| Azure Monitor (Microsoft.Insights/privateLinkScopes) / azuremonitor | privatelink.monitor.azure.com<br/> privatelink.oms.opinsights.azure.com <br/> privatelink.ods.opinsights.azure.com <br/> privatelink.agentsvc.azure-automation.net <br/> privatelink.blob.core.windows.net | monitor.azure.com<br/> oms.opinsights.azure.com<br/> ods.opinsights.azure.com<br/> agentsvc.azure-automation.net <br/> blob.core.windows.net |
| Cognitive Services (Microsoft.CognitiveServices/accounts) / account | privatelink.cognitiveservices.azure.com | cognitiveservices.azure.com | | Azure File Sync (Microsoft.StorageSync/storageSyncServices) / afs | privatelink.afs.azure.net | afs.azure.net | | Azure Data Factory (Microsoft.DataFactory/factories) / dataFactory | privatelink.datafactory.azure.net | datafactory.azure.net |
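Most rows in the table above follow a single convention: the recommended private DNS zone is the public DNS zone name with a `privatelink.` prefix. A tiny helper (purely illustrative; always check the table for services such as Azure Machine Learning or Azure Monitor that require additional zones) makes the pattern explicit:

```python
# Illustrative only: most recommended private DNS zone names are the public
# DNS zone prefixed with "privatelink.". Consult the table above for services
# that need extra zones beyond this pattern.
def privatelink_zone(public_zone: str) -> str:
    return "privatelink." + public_zone
```

For example, `privatelink_zone("azurewebsites.net")` yields the recommended zone for Azure Web Apps.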
private-link Troubleshoot Private Endpoint Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/troubleshoot-private-endpoint-connectivity.md
Review these steps to make sure all the usual configurations are as expected to
1. Review Private Endpoint configuration by browsing the resource.
- a. Go to **Private Link Center**.
+ a. Go to [Private Link Center](https://ms.portal.azure.com/#blade/Microsoft_Azure_Network/PrivateLinkCenterBlade/overview).
![Private Link Center](./media/private-endpoint-tsg/private-link-center.png)
private-link Troubleshoot Private Link Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/troubleshoot-private-link-connectivity.md
If you experience connectivity problems with your private link setup, review the
1. Review Private Link configuration by browsing the resource.
- a. Go to **Private Link Center**.
+ a. Go to [Private Link Center](https://ms.portal.azure.com/#blade/Microsoft_Azure_Network/PrivateLinkCenterBlade/overview).
![Private Link Center](./media/private-link-tsg/private-link-center.png)
If you experience connectivity problems with your private link setup, review the
## Next steps * [Create a private link service (CLI)](./create-private-link-service-cli.md)
- * [Azure Private Endpoint troubleshooting guide](troubleshoot-private-endpoint-connectivity.md)
+ * [Azure Private Endpoint troubleshooting guide](troubleshoot-private-endpoint-connectivity.md)
purview How To Lineage Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-cassandra.md
+
+ Title: Metadata and Lineage from Cassandra
+description: This article describes the data lineage extraction from Cassandra source.
+++++ Last updated : 08/12/2021+
+# How to get lineage from Cassandra into Azure Purview
+
+This article elaborates on the data lineage aspects of Cassandra source in Azure Purview. The prerequisite to see data lineage in Purview for Cassandra is to [scan your Cassandra server](../purview/register-scan-cassandra-source.md).
+
+## Lineage of Cassandra artifacts in Azure Purview
+
+Users can search for Cassandra artifacts by name, description, or other details to see relevant results. Under the asset overview & properties tab, the basic details such as description, properties and other information are shown. Under the lineage tab, asset relationships between Cassandra Tables and materialized Views are shown. Therefore, today Cassandra materialized Views will have lineage information from tables.
+
+The lineage derived is available at the columns level as well.
+++
+## Next steps
+
+- [Learn about Data lineage in Azure Purview](catalog-lineage-user-guide.md)
+- [Link Azure Data Factory to push automated lineage](how-to-link-azure-data-factory.md)
purview How To Lineage Erwin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-erwin.md
+
+ Title: Metadata and Lineage from Erwin
+description: This article describes the data lineage extraction from Erwin source.
+++++ Last updated : 08/11/2021+
+# How to get lineage from Erwin into Azure Purview
+
+This article elaborates on the data lineage aspects of Erwin source in Azure Purview. The prerequisite to see data lineage in Purview for Erwin is to [scan your Erwin](../purview/register-scan-erwin-source.md).
+
+## Lineage of Erwin artifacts in Azure Purview
+
+Users can search for Erwin artifacts by name, description, or other details to see relevant results. Under the asset overview & properties tab, the basic details such as description, properties and other information are shown. Under the lineage tab, asset relationships between Erwin Tables and Views are shown. Therefore, Erwin Views will have lineage information from tables.
+++
+## Next steps
+
+- [Learn about Data lineage in Azure Purview](catalog-lineage-user-guide.md)
+- [Link Azure Data Factory to push automated lineage](how-to-link-azure-data-factory.md)
purview How To Lineage Google Bigquery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-google-bigquery.md
+
+ Title: Metadata and Lineage from BigQuery
+description: This article describes the data lineage extraction from BigQuery source.
+++++ Last updated : 08/12/2021+
+# How to get lineage from BigQuery into Azure Purview
+
+This article elaborates on the data lineage aspects of BigQuery source in Azure Purview. The prerequisite to see data lineage in Purview for BigQuery is to [scan your BigQuery project](../purview/register-scan-google-bigquery-source.md).
+
+## Lineage of BigQuery artifacts in Azure Purview
+
+Users can search for BigQuery artifacts by name, description, or other details to see relevant results. Under the asset overview & properties tab, the basic details such as description, properties and other information are shown. Under the lineage tab, asset relationships between BigQuery Tables and Views are shown. Therefore, BigQuery Views will have lineage information from tables.
+
+The lineage derived is available at the columns level as well.
+++
+## Next steps
+
+- [Learn about Data lineage in Azure Purview](catalog-lineage-user-guide.md)
+- [Link Azure Data Factory to push automated lineage](how-to-link-azure-data-factory.md)
purview How To Lineage Looker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-looker.md
+
+ Title: Metadata and Lineage from Looker
+description: This article describes the data lineage extraction from Looker source.
+++++ Last updated : 08/12/2021+
+# How to get lineage from Looker into Azure Purview
+
+This article elaborates on the data lineage aspects of Looker source in Azure Purview. The prerequisite to see data lineage in Purview for Looker is to [scan your Looker](../purview/register-scan-looker-source.md).
+
+## Lineage of Looker artifacts in Azure Purview
+
+Users can search for Looker artifacts by name, description, or other details to see relevant results. Under the asset overview & properties tab, the basic details such as description, properties and other information are shown. Under the lineage tab, asset relationships between Looker Layouts and Views are shown. Therefore, today Looker Views will have lineage information from tables.
+++
+## Next steps
+
+- [Learn about Data lineage in Azure Purview](catalog-lineage-user-guide.md)
+- [Link Azure Data Factory to push automated lineage](how-to-link-azure-data-factory.md)
purview How To Lineage Oracle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-oracle.md
+
+ Title: Metadata and Lineage from Oracle
+description: This article describes the data lineage extraction from Oracle source.
+++++ Last updated : 08/11/2021+
+# How to get lineage from Oracle into Azure Purview
+
+This article elaborates on the data lineage aspects of Oracle source in Azure Purview. The prerequisite to see data lineage in Purview for Oracle is to [scan your Oracle](../purview/register-scan-oracle-source.md).
+
+## Lineage of Oracle artifacts in Azure Purview
+
+Users can search for Oracle artifacts by name, description, or other details to see relevant results. Under the asset overview & properties tab, the basic details such as description, classification and other information are shown. Under the lineage tab, asset relationships between Oracle Tables and Stored Procedures, Views and Functions are shown.
+
+Therefore, today Oracle Views will have lineage information from tables, while Functions and Stored Procedures will produce lineage between the Table/View and the parameter_dataset and stored_procedure_result_set. The lineage derived is available at the columns level as well.
+++
+## Next steps
+
+- [Learn about Data lineage in Azure Purview](catalog-lineage-user-guide.md)
+- [Link Azure Data Factory to push automated lineage](how-to-link-azure-data-factory.md)
purview How To Lineage Sapecc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-sapecc.md
+
+ Title: Metadata and Lineage from SAP ECC
+description: This article describes the data lineage extraction from SAP ECC source.
+++++ Last updated : 08/12/2021+
+# How to get lineage from SAP ECC into Azure Purview
+
+This article elaborates on the data lineage aspects of SAP ECC source in Azure Purview. The prerequisite to see data lineage in Purview for SAP ECC is to [scan your SAP ECC](../purview/register-scan-sapecc-source.md).
+
+## Lineage of SAP ECC artifacts in Azure Purview
+
+Users can search for SAP ECC artifacts by name, description, or other details to see relevant results. Under the asset overview & properties tab, the basic details such as description, properties and other information are shown. Under the lineage tab, asset relationships between SAP ECC Tables and Views are shown. Therefore, SAP ECC Views will have lineage information from tables.
+
+The lineage derived is available at the columns level as well.
+++
+## Next steps
+
+- [Learn about Data lineage in Azure Purview](catalog-lineage-user-guide.md)
+- If you are moving data to Azure from SAP ECC using ADF, we can track lineage as part of the data movement run time. [Link Azure Data Factory to push automated lineage](how-to-link-azure-data-factory.md)
purview How To Lineage Saps4hana https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-saps4hana.md
+
+ Title: Metadata and Lineage from SAP S/4HANA
+description: This article describes the data lineage extraction from SAP S/4HANA source.
+++++ Last updated : 08/12/2021+
+# How to get lineage from SAP S/4HANA into Azure Purview
+
+This article elaborates on the data lineage aspects of SAP S/4HANA source in Azure Purview. The prerequisite to see data lineage in Purview for SAP S/4HANA is to [scan your SAP S/4HANA](../purview/register-scan-saps4hana-source.md).
+
+## Lineage of SAP S/4HANA artifacts in Azure Purview
+
+Users can search for SAP S/4HANA artifacts by name, description, or other details to see relevant results. Under the asset overview & properties tab, the basic details such as description, properties and other information are shown. Under the lineage tab, asset relationships between SAP S/4HANA Tables and Views are shown. Therefore, SAP S/4HANA Views will have lineage information from tables.
+
+The lineage derived is available at the columns level as well.
+
+## Next steps
+
+- [Learn about Data lineage in Azure Purview](catalog-lineage-user-guide.md)
+- If you are moving data to Azure from SAP S/4HANA using ADF, we can track lineage as part of the data movement run time. [Link Azure Data Factory to push automated lineage](how-to-link-azure-data-factory.md)
purview How To Lineage Teradata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-teradata.md
+
+ Title: Metadata and Lineage from Teradata
+description: This article describes the data lineage extraction from Teradata source.
+++++ Last updated : 08/12/2021+
+# How to get lineage from Teradata into Azure Purview
+
+This article elaborates on the data lineage aspects of Teradata source in Azure Purview. The prerequisite to see data lineage in Purview for Teradata is to [scan your Teradata](../purview/register-scan-teradata-source.md).
+
+## Lineage of Teradata artifacts in Azure Purview
+
+Users can search for Teradata artifacts by name, description, or other details to see relevant results. Under the asset overview & properties tab, the basic details such as description, properties and other information are shown. Under the lineage tab, asset relationships between Teradata Tables and Stored Procedures, and between Teradata Tables and Views are shown.
+
+Therefore, we support lineage for data transformations happening in Stored Procedures and transformations in Views from Teradata tables. The lineage derived is available at the columns level as well.
++
+In the above screenshot, **customerView** is a Teradata View created from a Teradata table **test_customer**. Also, **Return_DataSet** is a stored procedure that uses a Teradata table **test_customer**.
+
+## Next steps
+
+- [Learn about Data lineage in Azure Purview](catalog-lineage-user-guide.md)
+- If you are moving data to Azure from Teradata using ADF, we can track lineage as part of the data movement run time. [Link Azure Data Factory to push automated lineage](how-to-link-azure-data-factory.md)
pytorch-enterprise Pte Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/pytorch-enterprise/pte-overview.md
To get started with PyTorch Enterprise, join the Microsoft Premier or Unified su
If you would like to try out the PyTorch LTS version, you can do so at PyTorch.org. ## Next steps
-* [PyTorch on Azure](https://azure.microsoft.com/en-us/develop/pytorch/)
+* [PyTorch on Azure](https://azure.microsoft.com/develop/pytorch/)
* [PyTorch Enterprise Support Program](https://aka.ms/PTELandingPage)
-* [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning/)
+* [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/)
role-based-access-control Scope Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/scope-overview.md
Previously updated : 10/08/2020 Last updated : 08/09/2021
It's fairly simple to determine the scope for a management group, subscription,
} ```
+## Scope and ARM templates
+
+A role assignment is a special type in Azure Resource Manager called an *extension resource*. An extension resource is a resource that adds to another resource's capabilities; it always exists as an extension (like a child) of another resource. For example, a role assignment at subscription scope is an extension resource of the subscription. The name of a role assignment is always the name of the resource you are extending plus `/Microsoft.Authorization/roleAssignments/{roleAssignmentId}`. When assigning roles using an Azure Resource Manager template (ARM template), you typically don't need to provide the scope, because the scope field always ends up being the ID of the resource you are extending. The scope can be determined from the ID of the role assignment itself. The following table shows examples of a role assignment ID and the corresponding scope:
+
+> [!div class="mx-tableFixed"]
+> | Role assignment ID | Scope |
+> | | |
+> | `/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}` | `/subscriptions/{subscriptionId}` |
+> | `/subscriptions/{subscriptionId}/resourceGroups/Example-Storage-rg/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}` | `/subscriptions/{subscriptionId}/resourceGroups/Example-Storage-rg` |
+
+For more information about scope and ARM templates, see [Assign Azure roles using Azure Resource Manager templates](role-assignments-template.md). For a full list of extension resource types, see [Resource types that extend capabilities of other resources](../azure-resource-manager/management/extension-resource-types.md).
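The relationship shown in the table can be made concrete: the scope is everything before the `/providers/Microsoft.Authorization/roleAssignments/` segment of the role assignment ID. A minimal sketch (hypothetical helper for illustration, not an Azure SDK function):

```python
# Hypothetical helper: derive the scope from a role assignment resource ID by
# stripping the trailing extension-resource segment, matching the table above.
def scope_from_role_assignment_id(assignment_id: str) -> str:
    marker = "/providers/Microsoft.Authorization/roleAssignments/"
    index = assignment_id.rfind(marker)
    if index == -1:
        raise ValueError("not a role assignment resource ID")
    return assignment_id[:index]
```

An ID nested under a resource group yields that resource group as the scope, exactly as in the second table row.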
+ ## Next steps - [Steps to assign an Azure role](role-assignments-steps.md)
search Cognitive Search Attach Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-attach-cognitive-services.md
Title: Attach Cognitive Services to a skillset
-description: Learn how to attach a Cognitive Services all-in-one subscription to an AI enrichment pipeline in Azure Cognitive Search.
+description: Learn how to attach a multi-service Cognitive Services resource to an AI enrichment pipeline in Azure Cognitive Search.
Previously updated : 02/16/2021 Last updated : 08/12/2021+ # Attach a Cognitive Services resource to a skillset in Azure Cognitive Search
-When configuring a [AI enrichment pipeline](cognitive-search-concept-intro.md) in Azure Cognitive Search, you can enrich a limited number of documents free of charge. For larger and more frequent workloads, you should attach a billable "all-in-one" Cognitive Services resource. An "all-in-one" subscription references "Cognitive Services" as the offering, rather than individual services, with access granted through a single API key.
+When configuring an [AI enrichment pipeline](cognitive-search-concept-intro.md) in Azure Cognitive Search, you can enrich a limited number of documents free of charge. For larger and more frequent workloads, you should attach a billable [multi-service Cognitive Services resource](../cognitive-services/cognitive-services-apis-create-account.md). A multi-service resource references "Cognitive Services" as the offering, rather than individual services, with access granted through a single API key.
-An "all-in-one" Cognitive Services resource drives the [built-in skills](cognitive-search-predefined-skills.md) that you can include in a skillset:
+A multi-service resource key is specified in a skillset definition and allows Microsoft to charge you for using these APIs:
+ [Computer Vision](https://azure.microsoft.com/services/cognitive-services/computer-vision/) for image analysis and optical character recognition (OCR) + [Text Analytics](https://azure.microsoft.com/services/cognitive-services/text-analytics/) for language detection, entity recognition, sentiment analysis, and key phrase extraction + [Text Translation](https://azure.microsoft.com/services/cognitive-services/translator-text-api/)
-An "all-in-one" Cognitive Services key is optional in a skillset definition. When the daily transactions number less than 20 per day, the cost is absorbed. However, when transactions exceed that number, a valid resource key is required in order for processing to continue.
+The key is used for billing, but not for connections. Internally, a search service connects to a Cognitive Services resource that's co-located in the same physical region. [Product availability](https://azure.microsoft.com/global-infrastructure/services/?products=search) shows regional availability side by side.
-Any "all-in-one" resource key is valid. Internally, a search service will use the resource that's co-located in the same physical region, even if the "all-in-one" key is for a resource in a different region. The [product availability](https://azure.microsoft.com/global-infrastructure/services/?products=search) page shows regional availability side by side.
+## Key requirements
-> [!NOTE]
-> If you omit built-in skills in a skillset, then Cognitive Services is not accessed, and you won't be charged, even if the skillset specifies a key.
+A key is required for billable [built-in skills](cognitive-search-predefined-skills.md) that are used more than 20 times a day on the same indexer: Entity Linking, Entity Recognition, Image Analysis, Key Phrase Extraction, Language Detection, OCR, PII Detection, Sentiment, or Text Translation.
+
+[Custom Entity Lookup](cognitive-search-skill-custom-entity-lookup.md) requires a key to unlock transactions beyond 20 per indexer, per day. Note that the key removes the transaction limit, but it is not used for billing.
+
+You can omit the key and the Cognitive Services section for skillsets that consist solely of custom skills or utility skills (Conditional, Document Extraction, Shaper, Text Merge, Text Split). You can also omit the section if your usage of billable skills is under 20 transactions per indexer per day.
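As an illustration, a skillset attaches the key in its `cognitiveServices` section. The sketch below uses a placeholder key and skillset name, and omits the skills array for brevity; the `@odata.type` value identifies a key-based attachment.

```json
{
  "name": "my-skillset",
  "description": "Skillset with an attached multi-service resource",
  "skills": [ ],
  "cognitiveServices": {
    "@odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
    "description": "My multi-service Cognitive Services resource",
    "key": "<your-cognitive-services-key>"
  }
}
```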
## How billing works + Azure Cognitive Search uses the Cognitive Services resource key you provide on a skillset to bill for image and text enrichment. Execution of billable skills is at the [Cognitive Services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
-+ Image extraction is an Azure Cognitive Search operation that occurs when documents are cracked prior to enrichment. Image extraction is billable. For image extraction pricing, see the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/).
++ Image extraction is an Azure Cognitive Search operation that occurs when documents are cracked prior to enrichment. Image extraction is billable on all tiers, with the exception of 20 free daily extractions on the free tier. Image extraction costs apply to image files inside blobs, embedded images in other files (PDF and other app files), and for images extracted using [Document Extraction](cognitive-search-skill-document-extraction.md). For image extraction pricing, see the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/). + Text extraction also occurs during the [document cracking](search-indexer-overview.md#document-cracking) phase. It is not billable.
-+ Skills that do not call Cognitive Services, including Conditional, Shaper, Text Merge, and Text Split skills, are not billable.
++ Skills that do not call Cognitive Services, including Conditional, Shaper, Text Merge, and Text Split skills, are not billable. +
+ As noted, [Custom Entity Lookup](cognitive-search-skill-custom-entity-lookup.md) is a special case in that it requires a key, but is [metered by Cognitive Search](https://azure.microsoft.com/pricing/details/search/#pricing).
+
+> [!TIP]
+> To lower the cost of skillset processing, enable [incremental enrichment (preview)](cognitive-search-incremental-indexing-conceptual.md) to cache and reuse any enrichments that are unaffected by changes made to a skillset. Caching requires Azure Storage (see [pricing](/pricing/details/storage/blobs/)), but the cumulative cost of skillset execution is lower if existing enrichments can be reused, especially for skillsets that use image extraction and analysis.
## Same-region requirement
If you are using the **Import data** wizard for AI enrichment, you'll find the "
## Use billable resources
-For workloads that create more than 20 enrichments per day, make sure to attach a billable Cognitive Services resource. We recommend that you always attach a billable Cognitive Services resource, even if you never intend to call Cognitive Services APIs. Attaching a resource overrides the daily limit.
-
-You're charged only for skills that call the Cognitive Services APIs. You're not billed for [custom skills](cognitive-search-create-custom-skill-example.md), or skills like [text merger](cognitive-search-skill-textmerger.md), [text splitter](cognitive-search-skill-textsplit.md), and [shaper](cognitive-search-skill-shaper.md), which aren't API-based.
+For workloads that create more than 20 billable enrichments per day, make sure to attach a Cognitive Services resource.
If you're using the **Import data** wizard, you can configure a billable resource from the **Add AI enrichment (Optional)** page.
search Cognitive Search Predefined Skills https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-predefined-skills.md
Title: Built-in text and image processing during indexing
-description: Data extraction, natural language, image processing cognitive skills add semantics and structure to raw content in an Azure Cognitive Search pipeline.
+description: Data extraction, natural language, and image processing skills add semantics and structure to raw content in an Azure Cognitive Search enrichment pipeline.
Previously updated : 11/04/2019 Last updated : 08/12/2021
-# Built-in cognitive skills for text and image processing during indexing (Azure Cognitive Search)
-
-In this article, you learn about the cognitive skills provided with Azure Cognitive Search that you can include in a skillset to extract content and structure. A *cognitive skill* is a module or operation that transforms content in some way. Often, it is a component that extracts data or infers structure, and therefore augments our understanding of the input data. Almost always, the output is text-based. A *skillset* is collection of skills that define the enrichment pipeline.
-
-> [!NOTE]
-> As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents.
->
-> Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/). Image extraction pricing is described on the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/).
->
-> The [incremental enrichment (preview)](cognitive-search-incremental-indexing-conceptual.md) feature allows you to provide a cache that enables the indexer to be more efficient at running only the cognitive skills that are necessary if you modify your skillset in the future, saving you time and money.
+# Built-in skills for text and image processing during indexing (Azure Cognitive Search)
+This article describes the skills provided with Azure Cognitive Search that you can include in a [skillset](cognitive-search-working-with-skillsets.md) to extract content and structure from raw unstructured text and image files. A *skill* is an atomic operation that transforms content in some way. Often, it is an operation that recognizes or extracts text, but it can also be a utility skill that reshapes the enrichments that are already created. Typically, the output is text-based so that it can be used in full text queries.
## Built-in skills
-Several skills are flexible in what they consume or produce. In general, most skills are based on pre-trained models, which means you cannot train the model using your own training data. The following table enumerates and describes the skills provided by Microsoft.
-
-| Skill | Description |
-|-|-|
-|[Microsoft.Skills.Text.CustomEntityLookupSkill](cognitive-search-skill-custom-entity-lookup.md)| Looks for text from a custom, user-defined list of words and phrases.|
-| [Microsoft.Skills.Text.KeyPhraseExtractionSkill](cognitive-search-skill-keyphrases.md) | This skill uses a pretrained model to detect important phrases based on term placement, linguistic rules, proximity to other terms, and how unusual the term is within the source data. |
-| [Microsoft.Skills.Text.LanguageDetectionSkill](cognitive-search-skill-language-detection.md) | This skill uses a pretrained model to detect which language is used (one language ID per document). When multiple languages are used within the same text segments, the output is the LCID of the predominantly used language.|
-| [Microsoft.Skills.Text.MergeSkill](cognitive-search-skill-textmerger.md) | Consolidates text from a collection of fields into a single field. |
-| [Microsoft.Skills.Text.V3.EntityLinkingSkill](cognitive-search-skill-entity-linking-v3.md) | This skill uses a pretrained model to determine linked entity matches from a given text. |
-| [Microsoft.Skills.Text.V3.EntityRecognitionSkill](cognitive-search-skill-entity-recognition-v3.md) | This skill uses a pretrained model to establish entities for a fixed set of categories: `"Person"`, `"Location"`, `"Organization"`, `"Quantity"`, `"DateTime"`, `"URL"`, `"Email"`, `"PersonType"`, `"Event"`, `"Product"`, `"Skill"`, `"Address"`, `"Phone Number"` and `"IP Address"` fields. |
-| [Microsoft.Skills.Text.PIIDetectionSkill](cognitive-search-skill-pii-detection.md) | This skill uses a pretrained model to extract personal information from a given text. The skill also gives various options for masking the detected personal information entities in the text. |
-| [Microsoft.Skills.Text.V3.SentimentSkill](cognitive-search-skill-sentiment-v3.md) | This skill uses a pretrained model to provides sentiment labels (such as "negative", "neutral" and "positive") based on the highest confidence score found by the service at a sentence and document-level on a record by record basis. |
-| [Microsoft.Skills.Text.SplitSkill](cognitive-search-skill-textsplit.md) | Splits text into pages so that you can enrich or augment content incrementally. |
-| [Microsoft.Skills.Text.TranslationSkill](cognitive-search-skill-text-translation.md) | This skill uses a pretrained model to translate the input text into a variety of languages for normalization or localization use cases. |
-| [Microsoft.Skills.Vision.ImageAnalysisSkill](cognitive-search-skill-image-analysis.md) | This skill uses an image detection algorithm to identify the content of an image and generate a text description. |
-| [Microsoft.Skills.Vision.OcrSkill](cognitive-search-skill-ocr.md) | Optical character recognition. |
-| [Microsoft.Skills.Util.ConditionalSkill](cognitive-search-skill-conditional.md) | Allows filtering, assigning a default value, and merging data based on a condition.|
-| [Microsoft.Skills.Util.DocumentExtractionSkill](cognitive-search-skill-document-extraction.md) | Extracts content from a file within the enrichment pipeline. |
-| [Microsoft.Skills.Util.ShaperSkill](cognitive-search-skill-shaper.md) | Maps output to a complex type (a multi-part data type, which might be used for a full name, a multi-line address, or a combination of last name and a personal identifier.) |
-| [Microsoft.Skills.Custom.WebApiSkill](cognitive-search-custom-skill-web-api.md) | Allows extensibility of an AI enrichment pipeline by making an HTTP call into a custom Web API |
-| [Microsoft.Skills.Custom.AmlSkill](cognitive-search-aml-skill.md) | Allows extensibility of an AI enrichment pipeline with an Azure Machine Learning model |
--
-For guidance on creating a [custom skill](cognitive-search-custom-skill-web-api.md), see [How to define a custom interface](cognitive-search-custom-skill-interface.md) and [Example: Creating a custom skill for AI enrichment](cognitive-search-create-custom-skill-example.md).
+Built-in skills are based on pre-trained models from Microsoft, which means you cannot train the model using your own training data. Skills that call the Cognitive Services APIs have a dependency on those services and are billed at the Cognitive Services pay-as-you-go price when you [attach a resource](cognitive-search-attach-cognitive-services.md). Other skills are metered by Azure Cognitive Search, or are utility skills that are available at no charge.
+
+The following table enumerates and describes the built-in skills.
+
+| Type | Description | Metered by |
+|-|-|-|
+|[Microsoft.Skills.Text.CustomEntityLookupSkill](cognitive-search-skill-custom-entity-lookup.md) | Looks for text from a custom, user-defined list of words and phrases.| Azure Cognitive Search ([pricing](https://azure.microsoft.com/pricing/details/search/)) |
+| [Microsoft.Skills.Text.KeyPhraseExtractionSkill](cognitive-search-skill-keyphrases.md) | This skill uses a pretrained model to detect important phrases based on term placement, linguistic rules, proximity to other terms, and how unusual the term is within the source data. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
+| [Microsoft.Skills.Text.LanguageDetectionSkill](cognitive-search-skill-language-detection.md) | This skill uses a pretrained model to detect which language is used (one language ID per document). When multiple languages are used within the same text segments, the output is the LCID of the predominantly used language. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
+| [Microsoft.Skills.Text.MergeSkill](cognitive-search-skill-textmerger.md) | Consolidates text from a collection of fields into a single field. | Not applicable |
+| [Microsoft.Skills.Text.V3.EntityLinkingSkill](cognitive-search-skill-entity-linking-v3.md) | This skill uses a pretrained model to determine linked entity matches from a given text. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
+| [Microsoft.Skills.Text.V3.EntityRecognitionSkill](cognitive-search-skill-entity-recognition-v3.md) | This skill uses a pretrained model to establish entities for a fixed set of categories: `"Person"`, `"Location"`, `"Organization"`, `"Quantity"`, `"DateTime"`, `"URL"`, `"Email"`, `"PersonType"`, `"Event"`, `"Product"`, `"Skill"`, `"Address"`, `"Phone Number"` and `"IP Address"` fields. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
+| [Microsoft.Skills.Text.PIIDetectionSkill](cognitive-search-skill-pii-detection.md) | This skill uses a pretrained model to extract personal information from a given text. The skill also gives various options for masking the detected personal information entities in the text. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
+| [Microsoft.Skills.Text.V3.SentimentSkill](cognitive-search-skill-sentiment-v3.md) | This skill uses a pretrained model to assign sentiment labels (such as "negative", "neutral" and "positive") based on the highest confidence score found by the service at a sentence and document-level on a record by record basis. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
+| [Microsoft.Skills.Text.SplitSkill](cognitive-search-skill-textsplit.md) | Splits text into pages so that you can enrich or augment content incrementally. | Not applicable |
+| [Microsoft.Skills.Text.TranslationSkill](cognitive-search-skill-text-translation.md) | This skill uses a pretrained model to translate the input text into a variety of languages for normalization or localization use cases. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
+| [Microsoft.Skills.Vision.ImageAnalysisSkill](cognitive-search-skill-image-analysis.md) | This skill uses an image detection algorithm to identify the content of an image and generate a text description. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
+| [Microsoft.Skills.Vision.OcrSkill](cognitive-search-skill-ocr.md) | Optical character recognition. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
+| [Microsoft.Skills.Util.ConditionalSkill](cognitive-search-skill-conditional.md) | Allows filtering, assigning a default value, and merging data based on a condition. | Not applicable |
+| [Microsoft.Skills.Util.DocumentExtractionSkill](cognitive-search-skill-document-extraction.md) | Extracts content from a file within the enrichment pipeline. | Azure Cognitive Search ([pricing](https://azure.microsoft.com/pricing/details/search/)) |
+| [Microsoft.Skills.Util.ShaperSkill](cognitive-search-skill-shaper.md) | Maps output to a complex type (a multi-part data type, which might be used for a full name, a multi-line address, or a combination of last name and a personal identifier.) | Not applicable |
+
+## Custom skills
+
+[Custom skills](cognitive-search-custom-skill-web-api.md) are modules that you design, develop, and deploy to the web. You can then call the module from within a skillset as a custom skill.
+
+| Type | Description | Metered by |
+|-|-|-|
+| [Microsoft.Skills.Custom.WebApiSkill](cognitive-search-custom-skill-web-api.md) | Allows extensibility of an AI enrichment pipeline by making an HTTP call into a custom Web API | None unless your solution uses a metered Azure service |
+| [Microsoft.Skills.Custom.AmlSkill](cognitive-search-aml-skill.md) | Allows extensibility of an AI enrichment pipeline with an Azure Machine Learning model | None unless your solution uses a metered Azure service |
+
+For guidance on creating a custom skill, see [Define a custom interface](cognitive-search-custom-skill-interface.md) and [Example: Creating a custom skill for AI enrichment](cognitive-search-create-custom-skill-example.md).
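For example, a custom skill entry in a skillset might look like the following sketch. The endpoint URI, input source, and output names are hypothetical placeholders for a Web API you host yourself.

```json
{
  "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
  "description": "Calls a custom enrichment endpoint",
  "uri": "https://my-function-app.azurewebsites.net/api/enrich",
  "httpMethod": "POST",
  "timeout": "PT30S",
  "context": "/document",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "customEntities", "targetName": "customEntities" }
  ]
}
```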
## See also
search Cognitive Search Skill Conditional https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-conditional.md
Previously updated : 11/04/2019 Last updated : 08/12/2021 # Conditional cognitive skill
else
``` > [!NOTE]
-> This skill isn't bound to an Azure Cognitive Services API, and you aren't charged for using it. However, you should still [attach a Cognitive Services resource](cognitive-search-attach-cognitive-services.md) to override the "Free" resource option that limits you to a small number of enrichments per day.
+> This skill isn't bound to Cognitive Services. It is non-billable and has no Cognitive Services key requirement.
## @odata.type Microsoft.Skills.Util.ConditionalSkill
search Cognitive Search Skill Custom Entity Lookup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-custom-entity-lookup.md
Previously updated : 06/17/2020 Last updated : 08/12/2021 # Custom Entity Lookup cognitive skill The **Custom Entity Lookup** skill looks for text from a custom, user-defined list of words and phrases. Using this list, it labels all documents with any matching entities. The skill also supports a degree of fuzzy matching that can be applied to find matches that are similar but not quite exact.
-This skill is not bound to a Cognitive Services API. You should still [attach a Cognitive Services resource](./cognitive-search-attach-cognitive-services.md), however, to override the daily enrichment limit. The daily limit applies to free access to Cognitive Services when accessed through Azure Cognitive Search.
-
-## Pricing details
-
-Text Records correspond to the number of 1,000-character units within a document that is provided as input to the skill.
-
-| Pricing tier | Price |
-|--|-|
-| 0-500,000 text records | $1 per 1,000 text records |
-| 0.5M-2.5M text records | $0.75 per 1,000 text records |
-| 2.5M-10.0M text records | $0.30 per 1,000 text records |
-| 10M+ text records | $0.25 per 1,000 text records |
+> [!NOTE]
+> This skill is not bound to a Cognitive Services API but requires a Cognitive Services key to allow more than 20 transactions. This skill is [metered by Cognitive Search](https://azure.microsoft.com/pricing/details/search/#pricing).
## @odata.type Microsoft.Skills.Text.CustomEntityLookupSkill
The tables below describe in more details the different configuration parameters
| `accentSensitive` | (Optional) Acts the same as root entity "accentSensitive" parameter above, but applies to only this one alias. | | `fuzzyEditDistance` | (Optional) Acts the same as root entity "fuzzyEditDistance" parameter above, but applies to only this one alias. | - ### Inline format In some cases, it may be more convenient to provide the list of custom entities to match inline directly into the skill definition. In that case you can use a similar JSON format to the one described above, but it is inlined in the skill definition. Only configurations that are less than 10 KB in size (serialized size) can be defined inline.
-## Sample definition
+## Sample definition
A sample skill definition using an inline format is shown below:
A sample skill definition using an inline format is shown below:
] } ```+ Alternatively, if you decide to provide a pointer to the entities definition file, a sample skill definition using the `entitiesDefinitionUri` format is shown below: ```json
Alternatively, if you decide to provide a pointer to the entities definition fil
```
-## Sample input
+## Sample input
```json {
Alternatively, if you decide to provide a pointer to the entities definition fil
} ```
-## Sample output
+## Sample output
```json {
search Cognitive Search Skill Document Extraction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-document-extraction.md
Previously updated : 06/17/2020 Last updated : 08/12/2021 # Document Extraction cognitive skill
The **Document Extraction** skill extracts content from a file within the enrichment pipeline. This allows you to take advantage of the document extraction step that normally happens before the skillset execution with files that may be generated by other skills. > [!NOTE]
-> As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in indexing. There are no charges for text extraction from documents.
+> This skill isn't bound to Cognitive Services and has no Cognitive Services key requirement.
+> This skill extracts text and images. Text extraction is free. Image extraction is [metered by Azure Cognitive Search](https://azure.microsoft.com/pricing/details/search/). On a free search service, the cost of 20 transactions per indexer per day is absorbed so that you can complete quickstarts, tutorials, and small projects at no charge. For Basic, Standard, and above, image extraction is billable.
>
-> Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/). Image extraction pricing is described on the [pricing page](https://azure.microsoft.com/pricing/details/search/).
## @odata.type Microsoft.Skills.Util.DocumentExtractionSkill
search Cognitive Search Skill Entity Linking V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-entity-linking-v3.md
Previously updated : 05/19/2021 Last updated : 08/12/2021 # Entity Linking cognitive skill
Last updated 05/19/2021
The **Entity Linking** skill extracts linked entities from text. This skill uses the machine learning models provided by [Text Analytics](../cognitive-services/text-analytics/overview.md) in Cognitive Services. > [!NOTE]
-> As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents.
+> This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
>
-> Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/). Image extraction pricing is described on the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/).
## @odata.type Microsoft.Skills.Text.V3.EntityLinkingSkill
search Cognitive Search Skill Entity Recognition V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-entity-recognition-v3.md
Previously updated : 05/19/2021 Last updated : 08/12/2021 # Entity Recognition cognitive skill (V3)
Last updated 05/19/2021
The **Entity Recognition** skill extracts entities of different types from text. These entities fall under 14 distinct categories, ranging from people and organizations to URLs and phone numbers. This skill uses the machine learning models provided by [Text Analytics](../cognitive-services/text-analytics/overview.md) in Cognitive Services. > [!NOTE]
-> As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents.
+> This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
>
-> Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/). Image extraction pricing is described on the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/).
## @odata.type Microsoft.Skills.Text.V3.EntityRecognitionSkill
search Cognitive Search Skill Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-image-analysis.md
Previously updated : 06/17/2020 Last updated : 08/12/2021 # Image Analysis cognitive skill
The **Image Analysis** skill extracts a rich set of visual features based on the
+ The dimensions of the image must be greater than 50 x 50 pixels > [!NOTE]
-> Small volumes (under 20 transactions) can be executed for free in Azure Cognitive Search, but larger workloads require [attaching a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents.
+> This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
+>
+> In addition, image extraction is [billable by Azure Cognitive Search](https://azure.microsoft.com/pricing/details/search/).
>
-> Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/). Image extraction pricing is described on the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/).
## @odata.type
search Cognitive Search Skill Keyphrases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-keyphrases.md
Previously updated : 11/04/2019 Last updated : 08/12/2021 # Key Phrase Extraction cognitive skill
The **Key Phrase Extraction** skill evaluates unstructured text, and for each re
This capability is useful if you need to quickly identify the main talking points in the record. For example, given input text "The food was delicious and there were wonderful staff", the service returns "food" and "wonderful staff". > [!NOTE]
-> As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents.
+> This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
>
-> Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/). Image extraction pricing is described on the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/).
- ## @odata.type Microsoft.Skills.Text.KeyPhraseExtractionSkill
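For context on the article being updated: a skillset entry for this skill is a JSON fragment keyed by the `@odata.type` above. The following is a minimal illustrative sketch, not text from the commit; the input path `/document/content` and the language code are assumptions about a typical indexer pipeline.

```json
{
  "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
  "context": "/document",
  "defaultLanguageCode": "en",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "keyPhrases", "targetName": "keyPhrases" }
  ]
}
```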
search Cognitive Search Skill Language Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-language-detection.md
Previously updated : 06/17/2020 Last updated : 08/12/2021 + # Language detection cognitive skill The **Language Detection** skill detects the language of input text and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the strength of the analysis. This skill uses the machine learning models provided by [Text Analytics](../cognitive-services/text-analytics/overview.md) in Cognitive Services.
This capability is especially useful when you need to provide the language of th
Language detection leverages Bing's natural language processing libraries, which support more languages than those in the [supported languages and regions](../cognitive-services/text-analytics/language-support.md) list for Text Analytics. The exact list of languages is not published, but includes all widely spoken languages, plus variants, dialects, and some regional and cultural languages. If you have content expressed in a less frequently used language, you can [try the Language Detection API](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/Languages) to see if it returns a code. The response for languages that cannot be detected is `(Unknown)`. > [!NOTE]
-> As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents.
+> This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
>
-> Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/). Image extraction pricing is described on the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/).
- ## @odata.type Microsoft.Skills.Text.LanguageDetectionSkill
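For context, a minimal skillset entry for the language detection skill might look like the sketch below. The input path is an illustrative assumption; the skill also emits `languageName` and `score` outputs that this sketch omits.

```json
{
  "@odata.type": "#Microsoft.Skills.Text.LanguageDetectionSkill",
  "context": "/document",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "languageCode", "targetName": "languageCode" }
  ]
}
```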
search Cognitive Search Skill Named Entity Recognition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-named-entity-recognition.md
Previously updated : 11/04/2019 Last updated : 08/12/2021
-# Named Entity Recognition cognitive skill
+# Named Entity Recognition cognitive skill
The **Named Entity Recognition** skill extracts named entities from text. Available entities include the types `person`, `location`, and `organization`. > [!IMPORTANT] > The named entity recognition skill is now discontinued and replaced by [Microsoft.Skills.Text.V3.EntityRecognitionSkill](cognitive-search-skill-entity-recognition-v3.md). Follow the recommendations in [Deprecated cognitive search skills](cognitive-search-skill-deprecated.md) to migrate to a supported skill.
-> [!NOTE]
-> As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents.
+ > [!NOTE]
+> As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
+>
+> Image extraction is an extra charge metered by Azure Cognitive Search, as described on the [pricing page](https://azure.microsoft.com/pricing/details/search/). Text extraction is free.
>
-> Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/). Image extraction pricing is described on the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/).
## @odata.type
search Cognitive Search Skill Ocr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-ocr.md
Previously updated : 06/17/2020 Last updated : 08/12/2021 # OCR cognitive skill
The **OCR** skill extracts text from image files. Supported file formats include
+ .TIFF > [!NOTE]
-> As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents.
+> This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
+>
+> In addition, image extraction is [billable by Azure Cognitive Search](https://azure.microsoft.com/pricing/details/search/).
>
-> Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/). Image extraction pricing is described on the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/).
- ## Skill parameters
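For context, an OCR skill entry typically runs over images cracked from source documents. The sketch below is illustrative, not from the commit; it assumes the indexer is configured to generate `normalized_images` (an indexer `imageAction` setting), and the language code is an assumption.

```json
{
  "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
  "context": "/document/normalized_images/*",
  "defaultLanguageCode": "en",
  "detectOrientation": true,
  "inputs": [
    { "name": "image", "source": "/document/normalized_images/*" }
  ],
  "outputs": [
    { "name": "text", "targetName": "text" }
  ]
}
```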
search Cognitive Search Skill Pii Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-pii-detection.md
Previously updated : 06/17/2020 Last updated : 08/12/2021 # PII Detection cognitive skill
Last updated 06/17/2020
The **PII Detection** skill extracts personal information from an input text and gives you the option of masking it. This skill uses the machine learning models provided by [Text Analytics](../cognitive-services/text-analytics/overview.md) in Cognitive Services. > [!NOTE]
-> As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents.
+> This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
>
-> Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/). Image extraction pricing is described on the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/).
## @odata.type
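For context, a PII detection skill entry with masking enabled might look like the following sketch. The input path, precision threshold, and masking character are illustrative assumptions, not values from the commit.

```json
{
  "@odata.type": "#Microsoft.Skills.Text.PIIDetectionSkill",
  "context": "/document",
  "defaultLanguageCode": "en",
  "minimumPrecision": 0.5,
  "maskingMode": "replace",
  "maskingCharacter": "*",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "piiEntities", "targetName": "piiEntities" },
    { "name": "maskedText", "targetName": "maskedText" }
  ]
}
```

With `maskingMode` set to `replace`, the `maskedText` output returns the input text with detected personal information replaced by the masking character.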
search Cognitive Search Skill Sentiment V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-sentiment-v3.md
Previously updated : 05/25/2021 Last updated : 08/12/2021 # Sentiment cognitive skill (V3)
Last updated 05/25/2021
The V3 **Sentiment** skill evaluates unstructured text and for each record, provides sentiment labels (such as "negative", "neutral" and "positive") based on the highest confidence score found by the service at a sentence and document-level. This skill uses the machine learning models provided by version 3 of [Text Analytics](../cognitive-services/text-analytics/overview.md) in Cognitive Services. It also exposes [the opinion mining capabilities of the Text Analytics API](../cognitive-services/text-analytics/how-tos/text-analytics-how-to-sentiment-analysis.md#opinion-mining), which provides more granular information about the opinions related to attributes of products or services in text. > [!NOTE]
-> As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents.
+> This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
>
-> Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/). Image extraction pricing is described on the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/).
- ## @odata.type Microsoft.Skills.Text.V3.SentimentSkill
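For context, a V3 sentiment skill entry with the opinion mining capability mentioned above enabled might look like this sketch. The input path and language code are illustrative assumptions; the skill also exposes sentence-level outputs that this sketch omits.

```json
{
  "@odata.type": "#Microsoft.Skills.Text.V3.SentimentSkill",
  "context": "/document",
  "defaultLanguageCode": "en",
  "includeOpinionMining": true,
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "sentiment", "targetName": "sentiment" }
  ]
}
```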
search Cognitive Search Skill Shaper https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-shaper.md
Previously updated : 11/04/2019 Last updated : 08/12/2021 # Shaper cognitive skill
Additionally, the **Shaper** skill illustrated in [scenario 3](#nested-complex-t
The output name is always "output". Internally, the pipeline can map a different name, such as "analyzedText" as shown in the examples below, but the **Shaper** skill itself returns "output" in the response. This might be important if you are debugging enriched documents and notice the naming discrepancy, or if you build a custom skill and are structuring the response yourself. > [!NOTE]
-> The **Shaper** skill is not bound to a Cognitive Services API and you are not charged for using it. You should still [attach a Cognitive Services resource](cognitive-search-attach-cognitive-services.md), however, to override the **Free** resource option that limits you to a small number of daily enrichments per day.
+> This skill isn't bound to Cognitive Services. It is non-billable and has no Cognitive Services key requirement.
## @odata.type Microsoft.Skills.Util.ShaperSkill
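For context on the naming discrepancy the article describes: the **Shaper** skill always names its output "output", and the pipeline maps that to another name such as "analyzedText". A minimal sketch (the input fields are illustrative assumptions):

```json
{
  "@odata.type": "#Microsoft.Skills.Util.ShaperSkill",
  "context": "/document",
  "inputs": [
    { "name": "text", "source": "/document/content" },
    { "name": "author", "source": "/document/author" }
  ],
  "outputs": [
    { "name": "output", "targetName": "analyzedText" }
  ]
}
```

Downstream skills and output field mappings then reference `/document/analyzedText`, while the skill's own response still uses "output".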
search Cognitive Search Skill Text Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-text-translation.md
Previously updated : 11/04/2019 Last updated : 08/12/2021 # Text Translation cognitive skill
This capability is useful if you expect that your documents may not all be in on
The [Translator Text API v3.0](../cognitive-services/translator/reference/v3-0-reference.md) is a non-regional Cognitive Service, meaning that your data is not guaranteed to stay in the same region as your Azure Cognitive Search or attached Cognitive Services resource. > [!NOTE]
-> As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents.
+> This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
>
-> Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/). Image extraction pricing is described on the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/).
## @odata.type Microsoft.Skills.Text.TranslationSkill
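For context, a text translation skill entry translating document content into a default target language might look like the following sketch. The input path and target language code are illustrative assumptions, not values from the commit.

```json
{
  "@odata.type": "#Microsoft.Skills.Text.TranslationSkill",
  "context": "/document",
  "defaultToLanguageCode": "en",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "translatedText", "targetName": "translatedText" }
  ]
}
```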
search Cognitive Search Skill Textmerger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-textmerger.md
Previously updated : 06/17/2020 Last updated : 08/12/2021 # Text Merge cognitive skill
Last updated 06/17/2020
The **Text Merge** skill consolidates text from a collection of fields into a single field. > [!NOTE]
-> This skill is not bound to a Cognitive