Updates from: 01/07/2022 02:06:31
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Use the general guidelines when implementing a SCIM endpoint to ensure compatibi
* Azure AD makes requests to fetch a random user and group to ensure that the endpoint and the credentials are valid. This check is also done as part of the **Test Connection** flow in the [Azure portal](https://portal.azure.com).
* Support HTTPS on your SCIM endpoint.
* Custom complex and multivalued attributes are supported, but Azure AD doesn't have many complex data structures to pull data from in these cases. Simple paired name/value complex attributes can be mapped easily, but flowing data to complex attributes with three or more subattributes isn't well supported at this time.
+* The "type" sub-attribute values of multivalued complex attributes must be unique. For example, there can not be two different email addresses with the "work" sub-type.
##### Retrieving Resources

* The response to a query/filter request should always be a `ListResponse`.
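For illustration only, a minimal ASP.NET Core action that wraps query results in a SCIM `ListResponse` envelope might look like the sketch below. The controller, route, and `FindUsers` helper are hypothetical; only the schema URI and envelope fields come from the SCIM 2.0 specification.

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

[Route("scim/v2/Users")]
public class ScimUsersController : ControllerBase
{
    [HttpGet]
    public IActionResult Query([FromQuery] string filter)
    {
        // Hypothetical lookup; a real endpoint would parse the SCIM filter grammar.
        List<object> users = FindUsers(filter);

        // Always wrap query/filter results in a ListResponse envelope.
        return Ok(new
        {
            schemas = new[] { "urn:ietf:params:scim:api:messages:2.0:ListResponse" },
            totalResults = users.Count,
            startIndex = 1,
            itemsPerPage = users.Count,
            Resources = users
        });
    }

    private static List<object> FindUsers(string filter) => new List<object>();
}
```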
active-directory Tutorial Enable Sspr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/tutorial-enable-sspr.md
Previously updated : 06/01/2021 Last updated : 1/05/2022
In this tutorial, set up SSPR for a set of users in a test group. Use the *SSPR-
1. Sign in to the [Azure portal](https://portal.azure.com) using an account with *global administrator* permissions.
1. Search for and select **Azure Active Directory**, then select **Password reset** from the menu on the left side.
-1. From the **Properties** page, under the option *Self service password reset enabled*, select **Select group**
-1. Browse for and select your Azure AD group, like *SSPR-Test-Group*, then choose *Select*.
+1. From the **Properties** page, under the option *Self service password reset enabled*, choose **Selected**.
+1. If your group isn't visible, choose **No groups selected**, browse for and select your Azure AD group, like *SSPR-Test-Group*, and then choose *Select*.
[![Select a group in the Azure portal to enable for self-service password reset](media/tutorial-enable-sspr/enable-sspr-for-group-cropped.png)](media/tutorial-enable-sspr/enable-sspr-for-group.png#lightbox)
In this tutorial, set up SSPR for a set of users in a test group. Use the *SSPR-
When users need to unlock their account or reset their password, they're prompted for another confirmation method. This extra authentication factor helps make sure that Azure AD completes only approved SSPR events. You can choose which authentication methods to allow, based on the registration information the user provides.
-1. From the menu on the left side of the **Authentication methods** page, set the **Number of methods required to reset** to *1*.
+1. From the menu on the left side of the **Authentication methods** page, set the **Number of methods required to reset** to *2*.
To improve security, you can increase the number of authentication methods required for SSPR.
To keep users informed about account activity, you can set up Azure AD to send e
1. From the menu on the left side of the **Notifications** page, set up the following options:
- * Set **Notify users on password resets** option to *Yes*.
- * Set **Notify all admins when other admins reset their password** to *Yes*.
+ * Set **Notify users on password resets?** option to *Yes*.
+ * Set **Notify all admins when other admins reset their password?** to *Yes*.
1. To apply the notification preferences, select **Save**.
-If users need more help with the SSPR process, you can customize the "Contact your administrator" link. The user can select this link in the SSPR registration process and when they unlock their account or resets their password. To make sure your users get the support needed, we highly recommend you provide a custom helpdesk email or URL.
+If users need more help with the SSPR process, you can customize the "Contact your administrator" link. The user can select this link in the SSPR registration process and when they unlock their account or reset their password. To make sure your users get the support needed, we recommend you provide a custom helpdesk email or URL.
1. From the menu on the left side of the **Customization** page, set **Customize helpdesk link** to *Yes*.
1. In the **Custom helpdesk email or URL** field, provide an email address or web page URL where your users can get more help from your organization, like *https:\//support.contoso.com/*.
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-claims-mapping-policy-type.md
Previously updated : 07/01/2021 Last updated : 01/04/2022
There are certain sets of claims that define how and when they're used in tokens
### Table 1: JSON Web Token (JWT) restricted claim set
+> [!NOTE]
+> Any claim starting with "xms_" is restricted.
+| Claim type (name) |
+| -- |
-| _claim_names |
-| _claim_sources |
-| access_token |
-| account_type |
-| acr |
-| actor |
-| actortoken |
-| aio |
-| altsecid |
-| amr |
-| app_chain |
-| app_displayname |
-| app_res |
-| appctx |
-| appctxsender |
-| appid |
-| appidacr |
-| assertion |
-| at_hash |
-| aud |
-| auth_data |
-| auth_time |
-| authorization_code |
-| azp |
-| azpacr |
-| c_hash |
-| ca_enf |
-| cc |
-| cert_token_use |
-| client_id |
-| cloud_graph_host_name |
-| cloud_instance_name |
-| cnf |
-| code |
-| controls |
-| credential_keys |
-| csr |
-| csr_type |
-| deviceid |
-| dns_names |
-| domain_dns_name |
-| domain_netbios_name |
-| e_exp |
-| email |
-| endpoint |
-| enfpolids |
-| exp |
-| expires_on |
-| grant_type |
-| graph |
-| group_sids |
-| groups |
-| hasgroups |
-| hash_alg |
-| home_oid |
-| `http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant` |
-| `http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod` |
-| `http://schemas.microsoft.com/ws/2008/06/identity/claims/expiration` |
-| `http://schemas.microsoft.com/ws/2008/06/identity/claims/expired` |
-| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress` |
-| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name` |
-| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier` |
-| iat |
-| identityprovider |
-| idp |
-| in_corp |
-| instance |
-| ipaddr |
-| isbrowserhostedapp |
-| iss |
-| jwk |
-| key_id |
-| key_type |
-| mam_compliance_url |
-| mam_enrollment_url |
-| mam_terms_of_use_url |
-| mdm_compliance_url |
-| mdm_enrollment_url |
-| mdm_terms_of_use_url |
-| nameid |
-| nbf |
-| netbios_name |
-| nonce |
-| oid |
-| on_prem_id |
-| onprem_sam_account_name |
-| onprem_sid |
-| openid2_id |
-| password |
-| polids |
-| pop_jwk |
-| preferred_username |
-| previous_refresh_token |
-| primary_sid |
-| puid |
-| pwd_exp |
-| pwd_url |
-| redirect_uri |
-| refresh_token |
-| refreshtoken |
-| request_nonce |
-| resource |
-| role |
-| roles |
-| scope |
-| scp |
-| sid |
-| signature |
-| signin_state |
-| src1 |
-| src2 |
-| sub |
-| tbid |
-| tenant_display_name |
-| tenant_region_scope |
-| thumbnail_photo |
-| tid |
-| tokenAutologonEnabled |
-| trustedfordelegation |
-| unique_name |
-| upn |
-| user_setting_sync_url |
-| username |
-| uti |
-| ver |
-| verified_primary_email |
-| verified_secondary_email |
-| wids |
-| win_ver |
-| nickname |
+|_claim_names|
+|_claim_sources|
+|aai|
+|access_token|
+|account_type|
+|acct|
+|acr|
+|acrs|
+|actor|
+|ageGroup|
+|aio|
+|altsecid|
+|amr|
+|app_chain|
+|app_displayname|
+|app_res|
+|appctx|
+|appctxsender|
+|appid|
+|appidacr|
+|at_hash|
+|auth_time|
+|azp|
+|azpacr|
+|c_hash|
+|ca_enf|
+|ca_policy_result|
+|capolids_latebind|
+|capolids|
+|cc|
+|cnf|
+|code|
+|controls_auds|
+|controls|
+|credential_keys|
+|ctry|
+|deviceid|
+|domain_dns_name|
+|domain_netbios_name|
+|e_exp|
+|email|
+|endpoint|
+|enfpolids|
+|expires_on|
+|fido_auth_data|
+|fwd_appidacr|
+|fwd|
+|graph|
+|group_sids|
+|groups|
+|hasgroups|
+|haswids|
+|home_oid|
+|home_puid|
+|home_tid|
+|identityprovider|
+|idp|
+|idtyp|
+|in_corp|
+|instance|
+|inviteTicket|
+|ipaddr|
+|isbrowserhostedapp|
+|isViral|
+|login_hint|
+|mam_compliance_url|
+|mam_enrollment_url|
+|mam_terms_of_use_url|
+|mdm_compliance_url|
+|mdm_enrollment_url|
+|mdm_terms_of_use_url|
+|msproxy|
+|nameid|
+|nickname|
+|nonce|
+|oid|
+|on_prem_id|
+|onprem_sam_account_name|
+|onprem_sid|
+|openid2_id|
+|origin_header|
+|platf|
+|polids|
+|pop_jwk|
+|preferred_username|
+|primary_sid|
+|prov_data|
+|puid|
+|pwd_exp|
+|pwd_url|
+|rdp_bt|
+|refresh_token_issued_on|
+|refreshtoken|
+|rh|
+|roles|
+|rt_type|
+|scp|
+|secaud|
+|sid|
+|signin_state|
+|source_anchor|
+|src1|
+|src2|
+|sub|
+|target_deviceid|
+|tbid|
+|tbidv2|
+|tenant_ctry|
+|tenant_display_name|
+|tenant_region_scope|
+|tenant_region_sub_scope|
+|thumbnail_photo|
+|tid|
+|tokenAutologonEnabled|
+|trustedfordelegation|
+|ttr|
+|unique_name|
+|upn|
+|user_setting_sync_url|
+|uti|
+|ver|
+|verified_primary_email|
+|verified_secondary_email|
+|vnet|
+|wamcompat_client_info|
+|wamcompat_id_token|
+|wamcompat_scopes|
+|wids|
+|xcb2b_rclient|
+|xcb2b_rcloud|
+|xcb2b_rtenant|
+|ztdid|
### Table 2: SAML restricted claim set

| Claim type (URI) |
| -- |
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/expiration`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/expired`|
+|`http://schemas.microsoft.com/2012/01/devicecontext/claims/ismanaged`|
+|`http://schemas.microsoft.com/2014/02/devicecontext/claims/isknown`|
+|`http://schemas.microsoft.com/2014/03/psso`|
+|`http://schemas.microsoft.com/2014/09/devicecontext/claims/iscompliant`|
+|`http://schemas.microsoft.com/claims/authnmethodsreferences`|
+|`http://schemas.microsoft.com/claims/groups.link`|
|`http://schemas.microsoft.com/identity/claims/accesstoken`|
-|`http://schemas.microsoft.com/identity/claims/openid2_id`|
+|`http://schemas.microsoft.com/identity/claims/acct`|
+|`http://schemas.microsoft.com/identity/claims/agegroup`|
+|`http://schemas.microsoft.com/identity/claims/aio`|
|`http://schemas.microsoft.com/identity/claims/identityprovider`|
|`http://schemas.microsoft.com/identity/claims/objectidentifier`|
+|`http://schemas.microsoft.com/identity/claims/openid2_id`|
|`http://schemas.microsoft.com/identity/claims/puid`|
-|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier [MR1]`|
|`http://schemas.microsoft.com/identity/claims/tenantid`|
+|`http://schemas.microsoft.com/identity/claims/xms_et`|
|`http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant`|
|`http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod`|
-|`http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/expiration`|
|`http://schemas.microsoft.com/ws/2008/06/identity/claims/groups`|
-|`http://schemas.microsoft.com/claims/groups.link`|
|`http://schemas.microsoft.com/ws/2008/06/identity/claims/role`|
|`http://schemas.microsoft.com/ws/2008/06/identity/claims/wids`|
-|`http://schemas.microsoft.com/2014/09/devicecontext/claims/iscompliant`|
-|`http://schemas.microsoft.com/2014/02/devicecontext/claims/isknown`|
-|`http://schemas.microsoft.com/2012/01/devicecontext/claims/ismanaged`|
-|`http://schemas.microsoft.com/2014/03/psso`|
-|`http://schemas.microsoft.com/claims/authnmethodsreferences`|
-|`http://schemas.xmlsoap.org/ws/2009/09/identity/claims/actor`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/samlissuername`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/confirmationkey`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/primarygroupsid`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid`|
-|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/authorizationdecision`|
-|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/authentication`|
-|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/sid`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/denyonlyprimarygroupsid`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/denyonlyprimarysid`|
-|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/denyonlysid`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/denyonlywindowsdevicegroup`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsdeviceclaim`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsdevicegroup`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsfqbnversion`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowssubauthority`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsuserclaim`|
-|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/x500distinguishedname`|
-|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid`|
-|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/spn`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/ispersistent`|
-|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/privatepersonalidentifier`|
-|`http://schemas.microsoft.com/identity/claims/scope`|
+|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier`|
+ ## Claims mapping policy properties
The ID element identifies which property on the source provides the value for th
**MatchOn:** The **MatchOn** property identifies the group attribute on which to apply the filter.
-Set the **MatchOn** property to one of the follwoing values:
+Set the **MatchOn** property to one of the following values:
- "displayname": The group display name. - "samaccountname": The On-premises SAM Account Name
active-directory Scenario Protected Web Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-protected-web-api-app-configuration.md
You can create a web API from scratch by using Microsoft.Identity.Web project te
#### Starting from an existing ASP.NET Core 3.1 application
-Today, ASP.NET Core 3.1 uses the Microsoft.AspNetCore.AzureAD.UI library. The middleware is initialized in the Startup.cs file.
+ASP.NET Core 3.1 uses the Microsoft.AspNetCore.AzureAD.UI library. The middleware is initialized in the Startup.cs file.
```csharp
using Microsoft.AspNetCore.Authentication.JwtBearer;
```
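As a rough sketch of how that initialization typically looks (assuming an `AzureAd` section in appsettings.json; this is not the article's exact listing):

```csharp
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.AzureAD.UI;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Validate incoming JWT bearer tokens against Azure AD, binding
        // tenant and client values from the "AzureAd" configuration section.
        services.AddAuthentication(AzureADDefaults.BearerAuthenticationScheme)
            .AddAzureADBearer(options => Configuration.Bind("AzureAd", options));

        services.AddControllers();
    }
}
```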
active-directory Vs Active Directory Add Connected Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/vs-active-directory-add-connected-service.md
- Title: Using the Active Directory connected service (Visual Studio)
-description: Add an Azure Active Directory by using the Visual Studio Add Connected Services dialog box
- Previously updated: 03/12/2018
-# Add an Azure Active Directory by using Connected Services in Visual Studio
-
-By using Azure Active Directory (Azure AD), you can support Single Sign-On (SSO) for ASP.NET MVC web applications, or Active Directory Authentication in web API services. With Azure AD Authentication, your users can use their accounts from Azure Active Directory to connect to your web applications. The advantages of Azure AD Authentication with web API include enhanced data security when exposing an API from a web application. With Azure AD, you do not have to manage a separate authentication system with its own account and user management.
-
-This article and its companion articles provide details of using the Visual Studio Connected Service feature for Active Directory. The capability is available in Visual Studio 2015 and later.
-
-At present, the Active Directory connected service does not support ASP.NET Core applications.
-
-## Prerequisites
-
-- Azure account: if you don't have an Azure account, you can [sign up for a free trial](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F) or [activate your Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A261C142F).
-- **Visual Studio 2015** or later. [Download Visual Studio now](https://aka.ms/vsdownload?utm_source=mscom&utm_campaign=msdocs).
-
-### Connect to Azure Active Directory using the Connected Services dialog
-
-1. In Visual Studio, create or open an ASP.NET MVC project, or an ASP.NET Web API project. You can use the MVC, Web API, Single-Page Application, Azure API App, Azure Mobile App, and Azure Mobile Service templates.
-
-1. Select the **Project > Add Connected Service...** menu command, or double-click the **Connected Services** node found under the project in Solution Explorer.
-
-1. On the **Connected Services** page, select **Authentication with Azure Active Directory**.
-
- ![Connected Services page](./media/vs-azure-active-directory/connected-services-add-active-directory.png)
-
-1. On the **Introduction** page, select **Next**. If you see errors on this page, refer to [Diagnosing errors with the Azure Active Directory Connected Service](vs-active-directory-error.md).
-
- ![Introduction page](./media/vs-azure-active-directory/configure-azure-ad-wizard-1.png)
-
-1. On the **Single-Sign On** page, select a domain from the **Domain** drop-down list. The list contains all domains accessible by the accounts listed in the Account Settings dialog of Visual Studio (**File > Account Settings...**). As an alternative, you can enter a domain name if you don't find the one you're looking for, such as `mydomain.onmicrosoft.com`. You can choose the option to create an Azure Active Directory app or use the settings from an existing Azure Active Directory app. Select **Next** when done.
-
- ![Single-sign on page](./media/vs-azure-active-directory/configure-azure-ad-wizard-2.png)
-
-1. On the **Directory Access** page, select the **Read directory data** option as desired. Developers typically include this option.
-
- ![Directory access page](./media/vs-azure-active-directory/configure-azure-ad-wizard-3.png)
-
-1. Select **Finish** to start modifications to your project to enable Azure AD authentication. Visual Studio shows progress during this time:
-
- ![Active Directory connected service progress](./media/vs-azure-active-directory/active-directory-connected-service-output.png)
-
-1. When the process is complete, Visual Studio opens your browser to one of the following articles, as appropriate to your project type:
-
- - [Get started with .NET MVC projects](vs-active-directory-dotnet-getting-started.md)
- - [Get started with WebAPI projects](vs-active-directory-webapi-getting-started.md)
-
-1. You can also see the Active Directory domain on the [Azure portal](https://go.microsoft.com/fwlink/p/?LinkID=525040).
-
-## How your project is modified
-
-When you add the connected service through the wizard, Visual Studio adds Azure Active Directory and associated references to your project. Configuration files and code files in your project are also modified to add support for Azure AD. The specific modifications that Visual Studio makes depend on the project type. See the following articles for details:
-
-- [What happened to my .NET MVC project?](vs-active-directory-dotnet-what-happened.md)
-- [What happened to my Web API project?](vs-active-directory-webapi-what-happened.md)
-
-## Next steps
-
-- [Authentication scenarios for Azure Active Directory](./authentication-vs-authorization.md)
-- [Add sign-in with Microsoft to an ASP.NET web app](quickstart-v2-aspnet-webapp.md)
active-directory Vs Active Directory Dotnet Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/vs-active-directory-dotnet-getting-started.md
- Title: Get started with Azure AD in .NET MVC projects | Azure
-description: How to get started using Azure Active Directory in .NET MVC projects after connecting to or creating an Azure AD using Visual Studio connected services
- Previously updated: 03/12/2018
-# Getting Started with Azure Active Directory (ASP.NET MVC Projects)
-
-> [!div class="op_single_selector"]
-> - [Getting Started](vs-active-directory-dotnet-getting-started.md)
-> - [What Happened](vs-active-directory-dotnet-what-happened.md)
-
-This article provides additional guidance after you've added Active Directory to an ASP.NET MVC project through the **Project > Connected Services** command of Visual Studio. If you've not already added the service to your project, you can do so at any time.
-
-See [What happened to my MVC project?](vs-active-directory-dotnet-what-happened.md) for the changes made to your project when adding the connected service.
-
-## Requiring authentication to access controllers
-
-All controllers in your project were adorned with the `[Authorize]` attribute. This attribute requires the user to be authenticated before accessing these controllers. To allow the controller to be accessed anonymously, remove this attribute from the controller. If you want to set the permissions at a more granular level, apply the attribute to each method that requires authorization instead of applying it to the controller class.
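For example, a minimal sketch with illustrative names (not code the wizard generates) that applies the attribute per action instead of per controller:

```csharp
using System.Web.Mvc;

public class HomeController : Controller
{
    // Reachable without signing in.
    [AllowAnonymous]
    public ActionResult Index() => View();

    // Requires an authenticated user.
    [Authorize]
    public ActionResult Dashboard() => View();
}
```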
-
-## Adding SignIn / SignOut Controls
-
-To add the SignIn/SignOut controls to your view, you can use the `_LoginPartial.cshtml` partial view to add the functionality to one of your views. Here is an example of the functionality added to the standard `_Layout.cshtml` view. (Note the last element in the div with class navbar-collapse):
-
-```html
-<!DOCTYPE html>
- <html>
- <head>
- <meta charset="utf-8" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0">
- <title>@ViewBag.Title - My ASP.NET Application</title>
- @Styles.Render("~/Content/css")
- @Scripts.Render("~/bundles/modernizr")
-</head>
-<body>
- <div class="navbar navbar-inverse navbar-fixed-top">
- <div class="container">
- <div class="navbar-header">
- <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse">
- <span class="icon-bar"></span>
- <span class="icon-bar"></span>
- <span class="icon-bar"></span>
- </button>
- @Html.ActionLink("Application name", "Index", "Home", new { area = "" }, new { @class = "navbar-brand" })
- </div>
- <div class="navbar-collapse collapse">
- <ul class="nav navbar-nav">
- <li>@Html.ActionLink("Home", "Index", "Home")</li>
- <li>@Html.ActionLink("About", "About", "Home")</li>
- <li>@Html.ActionLink("Contact", "Contact", "Home")</li>
- </ul>
- @Html.Partial("_LoginPartial")
- </div>
- </div>
- </div>
- <div class="container body-content">
- @RenderBody()
- <hr />
- <footer>
- <p>&copy; @DateTime.Now.Year - My ASP.NET Application</p>
- </footer>
- </div>
- @Scripts.Render("~/bundles/jquery")
- @Scripts.Render("~/bundles/bootstrap")
- @RenderSection("scripts", required: false)
-</body>
-</html>
-```
-
-## Next steps
-
-- [Authentication scenarios for Azure Active Directory](./authentication-vs-authorization.md)
-- [Add sign-in with Microsoft to an ASP.NET web app](quickstart-v2-aspnet-webapp.md)
active-directory Vs Active Directory Dotnet What Happened https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/vs-active-directory-dotnet-what-happened.md
- Title: Changes made to a MVC project when you connect to Azure AD
-description: Describes what happens to your MVC project when you connect to Azure AD by using Visual Studio connected services
- Previously updated: 03/12/2018
-# What happened to my MVC project (Visual Studio Azure Active Directory connected service)?
-
-> [!div class="op_single_selector"]
-> - [Getting Started](vs-active-directory-dotnet-getting-started.md)
-> - [What Happened](vs-active-directory-dotnet-what-happened.md)
-
-This article identifies the exact changes made to an ASP.NET MVC project when adding the [Azure Active Directory connected service using Visual Studio](vs-active-directory-add-connected-service.md).
-
-For information on working with the connected service, see [Getting Started](vs-active-directory-dotnet-getting-started.md).
-
-## Added references
-
-Affects the project file (.NET references) and `packages.config` (NuGet references).
-
-| Type | Reference |
-| | |
-| .NET; NuGet | Microsoft.IdentityModel.Protocol.Extensions |
-| .NET; NuGet | Microsoft.Owin |
-| .NET; NuGet | Microsoft.Owin.Host.SystemWeb |
-| .NET; NuGet | Microsoft.Owin.Security |
-| .NET; NuGet | Microsoft.Owin.Security.Cookies |
-| .NET; NuGet | Microsoft.Owin.Security.OpenIdConnect |
-| .NET; NuGet | Owin |
-| .NET | System.IdentityModel |
-| .NET; NuGet | System.IdentityModel.Tokens.Jwt |
-| .NET | System.Runtime.Serialization |
-
-Additional references if you selected the **Read directory data** option:
-
-| Type | Reference |
-| | |
-| .NET; NuGet | EntityFramework |
-| .NET | EntityFramework.SqlServer (Visual Studio 2015 only) |
-| .NET; NuGet | Microsoft.Azure.ActiveDirectory.GraphClient |
-| .NET; NuGet | Microsoft.Data.Edm |
-| .NET; NuGet | Microsoft.Data.OData |
-| .NET; NuGet | Microsoft.Data.Services.Client |
-| .NET; NuGet | Microsoft.IdentityModel.Clients.ActiveDirectory |
-| .NET | Microsoft.IdentityModel.Clients.ActiveDirectory.WindowsForms (Visual Studio 2015 only) |
-| .NET; NuGet | System.Spatial |
-
-The following references are removed (ASP.NET 4 projects only, as in Visual Studio 2015):
-
-| Type | Reference |
-| | |
-| .NET; NuGet | Microsoft.AspNet.Identity.Core |
-| .NET; NuGet | Microsoft.AspNet.Identity.EntityFramework |
-| .NET; NuGet | Microsoft.AspNet.Identity.Owin |
-
-## Project file changes
-
-- Set the property `IISExpressSSLPort` to a distinct number.
-- Set the property `WebProject_DirectoryAccessLevelKey` to 0, or 1 if you selected the **Read directory data** option.
-- Set the property `IISUrl` to `https://localhost:<port>/` where `<port>` matches the `IISExpressSSLPort` value.
-
-## web.config or app.config changes
-
-- Added the following configuration entries:
-
- ```xml
- <appSettings>
- <add key="ida:ClientId" value="<ClientId from the new Azure AD app>" />
- <add key="ida:AADInstance" value="https://login.microsoftonline.com/" />
- <add key="ida:Domain" value="<your selected Azure domain>" />
- <add key="ida:TenantId" value="<the Id of your selected Azure AD tenant>" />
- <add key="ida:PostLogoutRedirectUri" value="<project start page, such as https://localhost:44335>" />
- </appSettings>
- ```
-
-- Added `<dependentAssembly>` elements under the `<runtime><assemblyBinding>` node for `System.IdentityModel.Tokens.Jwt` and `Microsoft.IdentityModel.Protocol.Extensions`.
-
-Additional changes if you selected the **Read directory data** option:
-
-- Added the following configuration entry under `<appSettings>`:
-
- ```xml
- <add key="ida:ClientSecret" value="<Azure AD app's new client secret>" />
- ```
-
-- Added the following elements under `<configuration>`; values for the project-mdf-file and project-catalog-id will vary:
-
- ```xml
- <configSections>
- <!-- For more information on Entity Framework configuration, visit https://go.microsoft.com/fwlink/?LinkID=237468 -->
- <section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
- </configSections>
-
- <connectionStrings>
- <add name="DefaultConnection" connectionString="Data Source=(localdb)\MSSQLLocalDB;AttachDbFilename=|DataDirectory|\<project-mdf-file>.mdf;Initial Catalog=<project-catalog-id>;Integrated Security=True" providerName="System.Data.SqlClient" />
- </connectionStrings>
-
- <entityFramework>
- <defaultConnectionFactory type="System.Data.Entity.Infrastructure.LocalDbConnectionFactory, EntityFramework">
- <parameters>
- <parameter value="mssqllocaldb" />
- </parameters>
- </defaultConnectionFactory>
- <providers>
- <provider invariantName="System.Data.SqlClient" type="System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer" />
- </providers>
- </entityFramework>
- ```
-
-- Added `<dependentAssembly>` elements under the `<runtime><assemblyBinding>` node for `Microsoft.Data.Services.Client`, `Microsoft.Data.Edm`, and `Microsoft.Data.OData`.
-
-## Code changes and additions
-
-- Added the `[Authorize]` attribute to `Controllers/HomeController.cs` and any other existing controllers.
-
-- Added an authentication startup class, `App_Start/Startup.Auth.cs`, containing startup logic for Azure AD authentication. If you selected the **Read directory data** option, this file also contains code to receive an OAuth code and exchange it for an access token.
-
-- Added a controller class, `Controllers/AccountController.cs`, containing `SignIn` and `SignOut` methods.
-
-- Added a partial view, `Views/Shared/_LoginPartial.cshtml`, containing an action link for `SignIn` and `SignOut`.
-
-- Added a partial view, `Views/Account/SignoutCallback.cshtml`, containing HTML for sign-out UI.
-
-- Updated the `Startup.Configuration` method to include a call to `ConfigureAuth(app)` if the class already existed; otherwise added a `Startup` class that calls the method (see the sketch after this list).
-
-- Added `Connected Services/AzureAD/ConnectedService.json` (Visual Studio 2017) or `Service References/Azure AD/ConnectedService.json` (Visual Studio 2015), containing information that Visual Studio uses to track the addition of the connected service.
-
-- If you selected the **Read directory data** option, added `Models/ADALTokenCache.cs` and `Models/ApplicationDbContext.cs` to support token caching. Also added an additional controller and view to illustrate accessing user profile information using Azure graph APIs: `Controllers/UserProfileController.cs`, `Views/UserProfile/Index.cshtml`, and `Views/UserProfile/Relogin.cshtml`.
-
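For orientation, the classic OWIN startup pattern being described looks roughly like the following sketch (the `MyWebApp` namespace is illustrative, and `ConfigureAuth` lives in the generated `App_Start/Startup.Auth.cs`):

```csharp
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(MyWebApp.Startup))]

namespace MyWebApp
{
    // The other half of this partial class is App_Start/Startup.Auth.cs.
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Wires up the Azure AD (OpenID Connect) authentication middleware.
            ConfigureAuth(app);
        }
    }
}
```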
-### File backup (Visual Studio 2015)
-
-When adding the connected service, Visual Studio 2015 backs up changed and removed files. All affected files are saved in the folder `Backup/AzureAD`. Visual Studio 2017 and later do not create backups.
-- `Startup.cs`
-- `App_Start\IdentityConfig.cs`
-- `App_Start\Startup.Auth.cs`
-- `Controllers\AccountController.cs`
-- `Controllers\ManageController.cs`
-- `Models\IdentityModels.cs`
-- `Models\ManageViewModels.cs`
-- `Views\Shared\_LoginPartial.cshtml`
-
-## Changes on Azure
-
-- Created an Azure AD Application in the domain that you selected when adding the connected service.
-- Updated the app to include the **Read directory data** permission if that option was selected.
-
-[Learn more about Azure Active Directory](https://azure.microsoft.com/services/active-directory/).
-
-## Next steps
-
-- [Authentication scenarios for Azure Active Directory](./authentication-vs-authorization.md)
-- [Add sign-in with Microsoft to an ASP.NET web app](quickstart-v2-aspnet-webapp.md)
active-directory Vs Active Directory Error https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/vs-active-directory-error.md
- Title: Diagnose errors with Azure AD connected service (Visual Studio)
-description: The active directory connected service detected an incompatible authentication type
- Previously updated: 03/12/2018
-# Diagnosing errors with the Azure Active Directory Connected Service
-
-While detecting previous authentication code, the Azure Active Directory connected service detected an incompatible authentication type.
-
-To correctly detect previous authentication code in a project, the project must be rebuilt. If you see this error and you don't have previous authentication code in your project, rebuild and try again.
-
-## Project types
-
-The connected service checks the type of project you're developing so it can inject the right authentication logic into the project. If there's any controller that derives from `ApiController` in the project, the project is considered a WebAPI project. If there are only controllers that derive from `MVC.Controller` in the project, the project is considered an MVC project. The connected service doesn't support any other project type.
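To illustrate the distinction (illustrative class names; this is not code the service generates):

```csharp
using System.Web.Http;
using System.Web.Mvc;

// A controller deriving from ApiController marks the project as Web API
// for the connected service's detection logic.
public class ValuesController : ApiController { }

// A controller deriving from MVC's Controller marks it as an MVC project.
public class HomeController : Controller { }
```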
-
-## Compatible authentication code
-
-The connected service also checks for authentication settings that have been previously configured or are compatible with the service. If all settings are present, it's considered a re-entrant case, and the connected service opens and displays the settings. If only some of the settings are present, it's considered an error case.
-
-In an MVC project, the connected service checks for any of the following settings, which result from previous use of the service:
-
-```xml
-<add key="ida:ClientId" value="" />
-<add key="ida:Tenant" value="" />
-<add key="ida:AADInstance" value="" />
-<add key="ida:PostLogoutRedirectUri" value="" />
-```
-
-Also, the connected service checks for any of the following settings in a Web API project, which result from previous use of the service:
-
-```xml
-<add key="ida:ClientId" value="" />
-<add key="ida:Tenant" value="" />
-<add key="ida:Audience" value="" />
-```
-
-## Incompatible authentication code
-
-Finally, the connected service attempts to detect versions of authentication code that have been configured with previous versions of Visual Studio. If you received this error, it means your project contains an incompatible authentication type. The connected service detects the following types of authentication from previous versions of Visual Studio:
-
-* Windows Authentication
-* Individual User Accounts
-* Organizational Accounts
-
-To detect Windows Authentication in an MVC project, the connected service looks for the `authentication` element in your `web.config` file.
-
-```xml
-<configuration>
- <system.web>
- <authentication mode="Windows" />
- </system.web>
-</configuration>
-```
-
-To detect Windows Authentication in a Web API project, the connected service looks for the `IISExpressWindowsAuthentication` element in your project's `.csproj` file:
-
-```xml
-<Project>
- <PropertyGroup>
- <IISExpressWindowsAuthentication>enabled</IISExpressWindowsAuthentication>
- </PropertyGroup>
-</Project>
-```
-
-To detect Individual User Accounts authentication, the connected service looks for the package element in your `packages.config` file.
-
-```xml
-<packages>
- <package id="Microsoft.AspNet.Identity.EntityFramework" version="2.1.0" targetFramework="net45" />
-</packages>
-```
-
-To detect an old form of Organizational Account authentication, the connected service looks for the following element in `web.config`:
-
-```xml
-<configuration>
- <appSettings>
- <add key="ida:Realm" value="***" />
- </appSettings>
-</configuration>
-```
-
-To change the authentication type, remove the incompatible authentication type and try adding the connected service again.
-
-For more information, see [Authentication Scenarios for Azure AD](./authentication-vs-authorization.md).
active-directory Vs Active Directory Webapi Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/vs-active-directory-webapi-getting-started.md
- Title: Get Started with Azure AD in Visual Studio WebApi projects
-description: How to get started using Azure Active Directory in WebApi projects after connecting to or creating an Azure AD using Visual Studio connected services
- Previously updated: 03/12/2018
-# Get Started with Azure Active Directory (WebApi projects)
-
-> [!div class="op_single_selector"]
-> - [Getting Started](vs-active-directory-webapi-getting-started.md)
-> - [What Happened](vs-active-directory-webapi-what-happened.md)
-
-This article provides additional guidance after you've added Active Directory to an ASP.NET WebAPI project through the **Project > Connected Services** command of Visual Studio. If you've not already added the service to your project, you can do so at any time.
-
-See [What happened to my WebAPI project?](vs-active-directory-webapi-what-happened.md) for the changes made to your project when adding the connected service.
-
-## Requiring authentication to access controllers
-
-All controllers in your project were adorned with the `[Authorize]` attribute. This attribute requires the user to be authenticated before accessing the APIs defined by these controllers. To allow the controller to be accessed anonymously, remove this attribute from the controller. If you want to set the permissions at a more granular level, apply the attribute to each method that requires authorization instead of applying it to the controller class.
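As a quick sketch with illustrative names (not wizard-generated code), a controller can stay protected while one action opts out:

```csharp
using System.Collections.Generic;
using System.Web.Http;

[Authorize]
public class ValuesController : ApiController
{
    // Requires an authenticated caller.
    public IEnumerable<string> Get() => new[] { "value1", "value2" };

    // Opts this single action out of the controller-level requirement.
    [AllowAnonymous]
    public string GetStatus() => "ok";
}
```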
-
-## Next steps
-
-- [Authentication scenarios for Azure Active Directory](./authentication-vs-authorization.md)
-- [Add sign-in with Microsoft to an ASP.NET web app](quickstart-v2-aspnet-webapp.md)
active-directory Vs Active Directory Webapi What Happened https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/vs-active-directory-webapi-what-happened.md
- Title: Changes made to WebAPI projects when connecting to Azure AD
-description: Describes what happens to your WebAPI project when you connect to Azure AD using Visual Studio
- Previously updated: 03/12/2018
-# What happened to my WebAPI project (Visual Studio Azure Active Directory connected service)
-
-> [!div class="op_single_selector"]
-> - [Getting Started](vs-active-directory-webapi-getting-started.md)
-> - [What Happened](vs-active-directory-webapi-what-happened.md)
-
-This article identifies the exact changes made to ASP.NET WebAPI, ASP.NET Single-Page Application, and ASP.NET Azure API projects when adding the [Azure Active Directory connected service using Visual Studio](vs-active-directory-add-connected-service.md). It also applies to ASP.NET Azure Mobile Service projects in Visual Studio 2015.
-
-For information on working with the connected service, see [Getting Started](vs-active-directory-webapi-getting-started.md).
-
-## Added references
-
-Affects the project file (.NET references) and `packages.config` (NuGet references).
-
-| Type | Reference |
-| | |
-| .NET; NuGet | Microsoft.Owin |
-| .NET; NuGet | Microsoft.Owin.Host.SystemWeb |
-| .NET; NuGet | Microsoft.Owin.Security |
-| .NET; NuGet | Microsoft.Owin.Security.ActiveDirectory |
-| .NET; NuGet | Microsoft.Owin.Security.Jwt |
-| .NET; NuGet | Microsoft.Owin.Security.OAuth |
-| .NET; NuGet | Owin |
-| .NET; NuGet | System.IdentityModel.Tokens.Jwt |
-
-Additional references if you selected the **Read directory data** option:
-
-| Type | Reference |
-| | |
-| .NET; NuGet | EntityFramework |
-| .NET | EntityFramework.SqlServer (Visual Studio 2015 only) |
-| .NET; NuGet | Microsoft.Azure.ActiveDirectory.GraphClient |
-| .NET; NuGet | Microsoft.Data.Edm |
-| .NET; NuGet | Microsoft.Data.OData |
-| .NET; NuGet | Microsoft.Data.Services.Client |
-| .NET; NuGet | Microsoft.IdentityModel.Clients.ActiveDirectory |
-| .NET | Microsoft.IdentityModel.Clients.ActiveDirectory.WindowsForms<br>(Visual Studio 2015 only) |
-| .NET; NuGet | System.Spatial |
-
-The following references are removed (ASP.NET 4 projects only, as in Visual Studio 2015):
-
-| Type | Reference |
-| | |
-| .NET; NuGet | Microsoft.AspNet.Identity.Core |
-| .NET; NuGet | Microsoft.AspNet.Identity.EntityFramework |
-| .NET; NuGet | Microsoft.AspNet.Identity.Owin |
-
-## Project file changes
-
-- Set the property `IISExpressSSLPort` to a distinct number.
-- Set the property `WebProject_DirectoryAccessLevelKey` to 0, or 1 if you selected the **Read directory data** option.
-- Set the property `IISUrl` to `https://localhost:<port>/` where `<port>` matches the `IISExpressSSLPort` value.
-
-## web.config or app.config changes
-
-- Added the following configuration entries:
-
- ```xml
- <appSettings>
- <add key="ida:ClientId" value="<ClientId from the new Azure AD app>" />
- <add key="ida:Tenant" value="<your selected Azure domain>" />
- <add key="ida:Audience" value="<your selected domain + / + project name>" />
- </appSettings>
- ```
-
-- Visual Studio 2017 only: Also added the following entry under `<appSettings>`:
-
- ```xml
- <add key="ida:MetadataAddress" value="<domain URL + /federationmetadata/2007-06/federationmetadata.xml>" />
- ```
-
-- Added `<dependentAssembly>` elements under the `<runtime><assemblyBinding>` node for `System.IdentityModel.Tokens.Jwt`.
-
-- If you selected the **Read directory data** option, added the following configuration entry under `<appSettings>`:
-
- ```xml
- <add key="ida:Password" value="<Your Azure AD app's new password>" />
- ```
-
-## Code changes and additions
-
-- Added the `[Authorize]` attribute to `Controllers/ValueController.cs` and any other existing controllers.
-
-- Added an authentication startup class, `App_Start/Startup.Auth.cs`, containing startup logic for Azure AD authentication, or modified it accordingly. If you selected the **Read directory data** option, this file also contains code to receive an OAuth code and exchange it for an access token.
-
-- (Visual Studio 2015 with ASP.NET 4 app only) Removed `App_Start/IdentityConfig.cs` and added `Controllers/AccountController.cs`, `Models/IdentityModel.cs`, and `Providers/ApplicationAuthProvider.cs`.
-
-- Added `Connected Services/AzureAD/ConnectedService.json` (Visual Studio 2017) or `Service References/Azure AD/ConnectedService.json` (Visual Studio 2015), containing information that Visual Studio uses to track the addition of the connected service.
-
-### File backup (Visual Studio 2015)
-
-When adding the connected service, Visual Studio 2015 backs up changed and removed files. All affected files are saved in the folder `Backup/AzureAD`. Visual Studio 2017 does not create backups.
-- `Startup.cs`
-- `App_Start\IdentityConfig.cs`
-- `App_Start\Startup.Auth.cs`
-- `Controllers\AccountController.cs`
-- `Controllers\ManageController.cs`
-- `Models\IdentityModels.cs`
-- `Models\ApplicationOAuthProvider.cs`
-
-## Changes on Azure
-
-- Created an Azure AD Application in the domain that you selected when adding the connected service.
-- Updated the app to include the **Read directory data** permission if that option was selected.
-
-[Learn more about Azure Active Directory](https://azure.microsoft.com/services/active-directory/).
-
-## Next steps
-
-- [Authentication scenarios for Azure Active Directory](./authentication-vs-authorization.md)
-- [Add sign-in with Microsoft to an ASP.NET web app](quickstart-v2-aspnet-webapp.md)
active-directory 6 Secure Access Entitlement Managment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/6-secure-access-entitlement-managment.md
You can perform [Entitlement Management functions by using Microsoft Graph](/gra
* [Manage access packages](/graph/api/resources/accesspackage)
-* [Manage access reviews](/graph/api/resources/accessreviewsv2-root)
+* [Manage access reviews](/graph/api/resources/accessreviewsv2-overview)
* [Manage connected organizations](/graph/api/resources/connectedorganization)
active-directory Active Directory Data Storage Eu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-data-storage-eu.md
Previously updated : 09/15/2020 Last updated : 01/06/2022

# Identity data storage for European customers in Azure Active Directory
-Identity data is stored by Azure AD in a geographical location based on the address provided by your organization when subscribing for a Microsoft Online service such as Microsoft 365 and Azure. For information on where your identity data is stored, you can use the [Where is your data located?](https://www.microsoft.com/trustcenter/privacy/where-your-data-is-located) section of the Microsoft Trust Center.
+Identity data is stored by Azure AD in a geographical location based on the address provided by your organization when it subscribed to a Microsoft Online service such as Microsoft 365 or Azure. For information on where your identity data is stored, you can use the [Where is your data located?](https://www.microsoft.com/trustcenter/privacy/where-your-data-is-located) section of the Microsoft Trust Center.
For customers who provided an address in Europe, Azure AD keeps most of the identity data within European datacenters. This document provides information on any data that is stored outside of Europe by Azure AD services.
For customers who provided an address in Europe, Azure AD keeps most of the iden
For cloud-based Azure AD Multi-Factor Authentication, authentication is completed in the datacenter closest to the user. Datacenters for Azure AD Multi-Factor Authentication exist in North America, Europe, and Asia Pacific.
-* Multi-factor authentication using phone calls originate from US datacenters and are routed by global providers.
+* Multi-factor authentication using phone calls originates from datacenters in the customer's region and is routed by global providers.
* Multi-factor authentication using SMS is routed by global providers.
* Multi-factor authentication requests using the Microsoft Authenticator app push notifications that originate from EU datacenters are processed in EU datacenters.
* Device vendor-specific services, such as Apple Push Notifications, may be outside Europe.
For more information about what user information is collected by Azure Multi-Fac
## Password-based Single Sign-On for Enterprise Applications
-If a customer creates a new enterprise application (whether through Azure AD Gallery or non-Gallery) and enables password-based SSO, the Application sign in URL, and custom capture sign in fields are stored in the United States. For more information on this feature, please refer to [Configure password-based single sign-on](../manage-apps/configure-password-single-sign-on-non-gallery-applications.md)
+If a customer creates a new enterprise application (whether through Azure AD Gallery or non-Gallery) and enables password-based SSO, the application sign-in URL and custom capture sign-in fields are stored in the United States. For more information, see [Configure password-based single sign-on](../manage-apps/configure-password-single-sign-on-non-gallery-applications.md).
## Microsoft Azure Active Directory B2C (Azure AD B2C)
-Azure AD B2C policy configuration data and Key Containers are stored in U.S. datacenters. These do not contain any user personal data. For more info about policy configurations, see the [Azure Active Directory B2C: Built-in policies](../../active-directory-b2c/user-flow-overview.md) article.
+Azure AD B2C policy configuration data and Key Containers are stored in U.S. datacenters; they do not contain any user personal data. For more info about policy configurations, see the [Azure Active Directory B2C: Built-in policies](../../active-directory-b2c/user-flow-overview.md) article.
## Microsoft Azure Active Directory B2B (Azure AD B2B)
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
The Azure AD provisioning service currently operates on a cyclic basis. The serv
**Service category:** Other **Product capability:** Entitlement Management
-A new delegated permission EntitlementManagement.Read.All is now available for use with the Entitlement Management API in Microsoft Graph beta. To find out more about the available APIs, see [Working with the Azure AD entitlement management API](/graph/api/resources/entitlementmanagement-root).
+A new delegated permission EntitlementManagement.Read.All is now available for use with the Entitlement Management API in Microsoft Graph beta. To find out more about the available APIs, see [Working with the Azure AD entitlement management API](/graph/api/resources/entitlementmanagement-overview).
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
Now with the Windows 10 21H1 update, Windows Hello supports multiple cameras. Th
**Service category:** Access Reviews **Product capability:** Identity Governance
-Azure Active Directory access reviews MS Graph APIs are now in v1.0 support fully configurable access reviews features. [Learn more](/graph/api/resources/accessreviewsv2-root?view=graph-rest-1.0&preserve-view=true).
+Azure Active Directory access reviews Microsoft Graph APIs are now in v1.0 and support fully configurable access reviews features. [Learn more](/graph/api/resources/accessreviewsv2-overview?view=graph-rest-1.0&preserve-view=true).
active-directory Access Reviews External Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/access-reviews-external-users.md
This setting allows you to identify, block, and delete external identities from
## Next steps

-- [Access reviews - Graph API](/graph/api/resources/accessreviewsv2-root)
-- [Entitlement management - Graph API](/graph/api/resources/entitlementmanagement-root)
+- [Access reviews - Graph API](/graph/api/resources/accessreviewsv2-overview)
+- [Entitlement management - Graph API](/graph/api/resources/entitlementmanagement-overview)
active-directory Conditional Access Exclusion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/conditional-access-exclusion.md
that is excluded from the policy. Here is a recommended access review where memb
![Create an access review pane for example 2](./media/conditional-access-exclusion/create-access-review-2.png)

>[!IMPORTANT]
->If you have many exclusion groups and therefore need to create multiple access reviews, we now have an API in the Microsoft Graph beta endpoint that allows you to create and manage them programmatically. To get started, see the [Azure AD access reviews API reference](/graph/api/resources/accessreviewsv2-root) and [Example of retrieving Azure AD access reviews via Microsoft Graph](https://techcommunity.microsoft.com/t5/Azure-Active-Directory/Example-of-retrieving-Azure-AD-access-reviews-via-Microsoft/td-p/236096).
+>If you have many exclusion groups and therefore need to create multiple access reviews, we now have an API in the Microsoft Graph beta endpoint that allows you to create and manage them programmatically. To get started, see the [Azure AD access reviews API reference](/graph/api/resources/accessreviewsv2-overview) and [Example of retrieving Azure AD access reviews via Microsoft Graph](https://techcommunity.microsoft.com/t5/Azure-Active-Directory/Example-of-retrieving-Azure-AD-access-reviews-via-Microsoft/td-p/236096).
## Access review results and audit logs
active-directory Deploy Access Reviews https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/deploy-access-reviews.md
Follow the instructions in the articles listed in the table.
## Use the Access Reviews API
-To interact with and manage reviewable resources, see [Microsoft Graph API methods](/graph/api/resources/accessreviewsv2-root) and [role and application permission authorization checks](/graph/api/resources/accessreviewsv2-root). The access reviews methods in the Microsoft Graph API are available for both application and user contexts. When you run scripts in the application context, the account used to run the API (the service principle) must be granted the AccessReview.Read.All permission to query access reviews information.
+To interact with and manage reviewable resources, see [Microsoft Graph API methods](/graph/api/resources/accessreviewsv2-overview) and [role and application permission authorization checks](/graph/api/resources/accessreviewsv2-overview). The access reviews methods in the Microsoft Graph API are available for both application and user contexts. When you run scripts in the application context, the account used to run the API (the service principal) must be granted the AccessReview.Read.All permission to query access reviews information.
Popular access reviews tasks to automate by using the Microsoft Graph API for access reviews are:
Popular access reviews tasks to automate by using the Microsoft Graph API for ac
* Collect decisions from an access review.
* Collect decisions from completed reviews where the reviewer made a different decision than what the system recommended.
-When you create new Microsoft Graph API queries for automation, use the [Graph Explorer](https://developer.microsoft.com/en-us/graph/graph-explorer). You can build and explore your Microsoft Graph queries before you put them into scripts and code. This step can help you to quickly iterate your query so that you get exactly the results you're looking for, without changing the code of your script.
+When you create new Microsoft Graph API queries for automation, use [Graph Explorer](https://developer.microsoft.com/en-us/graph/graph-explorer) to build and explore your Microsoft Graph queries before you put them into scripts and code. This step can help you to quickly iterate your query so that you get exactly the results you're looking for, without changing the code of your script.
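Once a query behaves as expected in Graph Explorer, moving it into a script can be as simple as the following sketch (it assumes a token with the AccessReview.Read.All permission has already been acquired and placed in the GRAPH_TOKEN environment variable):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class AccessReviewQuery
{
    static async Task Main()
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Bearer", Environment.GetEnvironmentVariable("GRAPH_TOKEN"));

        // List access review schedule definitions (v1.0 endpoint).
        string json = await http.GetStringAsync(
            "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions");
        Console.WriteLine(json);
    }
}
```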
## Monitor access reviews
active-directory Entitlement Management Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-assignments.md
To use Azure AD entitlement management and assign users to access packages, you
## View assignments programmatically

### View assignments with Microsoft Graph
-You can also retrieve assignments in an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the API to [list accessPackageAssignments](/graph/api/accesspackageassignment-list?view=graph-rest-beta&preserve-view=true). While an identity governance administrator can retrieve access packages from multiple catalogs, if user is assigned only to catalog-specific delegated administrative roles, the request must supply a filter to indicate a specific access package, such as: `$filter=accessPackage/id eq 'a914b616-e04e-476b-aa37-91038f0b165b'`. An application that has the application permission `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can also use this API.
+You can also retrieve assignments in an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the API to [list accessPackageAssignments](/graph/api/entitlementmanagement-list-accesspackageassignments?view=graph-rest-beta&preserve-view=true). While an identity governance administrator can retrieve access packages from multiple catalogs, if user is assigned only to catalog-specific delegated administrative roles, the request must supply a filter to indicate a specific access package, such as: `$filter=accessPackage/id eq 'a914b616-e04e-476b-aa37-91038f0b165b'`. An application that has the application permission `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can also use this API.
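A sketch of that call with the filter applied (beta endpoint; the package ID is the illustrative one from the text, and token acquisition is assumed to have happened already):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ListAssignments
{
    static async Task Main()
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Bearer", Environment.GetEnvironmentVariable("GRAPH_TOKEN"));

        // Scope the query to a single access package, as required for
        // catalog-specific delegated roles.
        string url = "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignments"
                   + "?$filter=accessPackage/id eq 'a914b616-e04e-476b-aa37-91038f0b165b'";
        Console.WriteLine(await http.GetStringAsync(url));
    }
}
```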
### View assignments with PowerShell
Azure AD Entitlement Management also allows you to directly assign external user
## Directly assigning users programmatically

### Assign a user to an access package with Microsoft Graph
-You can also directly assign a user to an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application with that application permission, can call the API to [create an accessPackageAssignmentRequest](/graph/api/accesspackageassignmentrequest-post?view=graph-rest-beta&preserve-view=true). In this request, the value of the `requestType` property should be `AdminAdd`, and the `accessPackageAssignment` property is a structure that contains the `targetId` of the user being assigned.
+You can also directly assign a user to an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application with that application permission, can call the API to [create an accessPackageAssignmentRequest](/graph/api/entitlementmanagement-post-accesspackageassignmentrequests?view=graph-rest-beta&preserve-view=true). In this request, the value of the `requestType` property should be `AdminAdd`, and the `accessPackageAssignment` property is a structure that contains the `targetId` of the user being assigned.
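A sketch of that request (beta endpoint; the IDs are placeholders and token acquisition is assumed):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class AssignUser
{
    static async Task Main()
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Bearer", Environment.GetEnvironmentVariable("GRAPH_TOKEN"));

        // requestType AdminAdd plus the target user's object ID.
        string body = @"{
          ""requestType"": ""AdminAdd"",
          ""accessPackageAssignment"": {
            ""targetId"": ""<user object ID>"",
            ""assignmentPolicyId"": ""<policy ID>"",
            ""accessPackageId"": ""<access package ID>""
          }
        }";

        var response = await http.PostAsync(
            "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentRequests",
            new StringContent(body, Encoding.UTF8, "application/json"));
        Console.WriteLine(response.StatusCode);
    }
}
```

For an `AdminRemove` request, the body instead carries `"requestType": "AdminRemove"` and an `accessPackageAssignment` containing only the `id` of the assignment to remove.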
### Assign a user to an access package with PowerShell
$req = New-MgEntitlementManagementAccessPackageAssignment -AccessPackageId $acce
## Remove an assignment programmatically

### Remove an assignment with Microsoft Graph
-You can also remove an assignment of a user to an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application with that application permission, can call the API to [create an accessPackageAssignmentRequest](/graph/api/accesspackageassignmentrequest-post?view=graph-rest-beta&preserve-view=true). In this request, the value of the `requestType` property should be `AdminRemove`, and the `accessPackageAssignment` property is a structure that contains the `id` property identifying the `accessPackageAssignment` being removed.
+You can also remove an assignment of a user to an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application with that application permission, can call the API to [create an accessPackageAssignmentRequest](/graph/api/entitlementmanagement-post-accesspackageassignmentrequests?view=graph-rest-beta&preserve-view=true). In this request, the value of the `requestType` property should be `AdminRemove`, and the `accessPackageAssignment` property is a structure that contains the `id` property identifying the `accessPackageAssignment` being removed.
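For symmetry with the `AdminAdd` sketch above, a sketch of the removal request; the assignment ID and token are placeholders:

```bash
# Remove an existing assignment: requestType AdminRemove with the assignment's ID
curl 'https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentRequests' -X POST \
  -d '{"requestType":"AdminRemove","accessPackageAssignment":{"id":"<ASSIGNMENT ID>"}}' \
  -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
```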
### Remove an assignment with PowerShell
active-directory Entitlement Management Access Package Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-create.md
On the **Review + create** tab, you can review your settings and check for any v
## Creating an access package programmatically
-You can also create an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to
+You can also create an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to
-1. [List the accessPackageResources in the catalog](/graph/api/accesspackagecatalog-list?tabs=http&view=graph-rest-beta&preserve-view=true) and [create an accessPackageResourceRequest](/graph/api/accesspackageresourcerequest-post?tabs=http&view=graph-rest-beta&preserve-view=true) for any resources that are not yet in the catalog.
+1. [List the accessPackageResources in the catalog](/graph/api/entitlementmanagement-list-accesspackagecatalogs?tabs=http&view=graph-rest-beta&preserve-view=true) and [create an accessPackageResourceRequest](/graph/api/entitlementmanagement-post-accesspackageresourcerequests?tabs=http&view=graph-rest-beta&preserve-view=true) for any resources that are not yet in the catalog.
1. [List the accessPackageResourceRoles](/graph/api/accesspackage-list-accesspackageresourcerolescopes?tabs=http&view=graph-rest-beta&preserve-view=true) of each accessPackageResource in an accessPackageCatalog. This list of roles will then be used to select a role when subsequently creating an accessPackageResourceRoleScope.
1. [Create an accessPackage](/graph/tutorial-access-package-api).
-1. [Create an accessPackageAssignmentPolicy](/graph/api/accesspackageassignmentpolicy-post?tabs=http&view=graph-rest-beta&preserve-view=true).
+1. [Create an accessPackageAssignmentPolicy](/graph/api/entitlementmanagement-post-accesspackageassignmentpolicies?tabs=http&view=graph-rest-beta&preserve-view=true).
1. [Create an accessPackageResourceRoleScope](/graph/api/accesspackage-post-accesspackageresourcerolescopes?tabs=http&view=graph-rest-beta&preserve-view=true) for each resource role needed in the access package.

## Next steps
active-directory Entitlement Management Access Package Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-requests.md
In Azure AD entitlement management, you can see who has requested access package
If you have a set of users whose requests are in the "Partially Delivered" or "Failed" state, you can retry those requests by using the [reprocess functionality](entitlement-management-reprocess-access-package-requests.md).

### View assignments with Microsoft Graph
-You can also retrieve requests for an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the API to [list accessPackageAssignmentRequests](/graph/api/accesspackageassignmentrequest-list?view=graph-rest-beta&preserve-view=true). You can supply a filter to indicate a specific access package, such as: `$expand=accessPackage&$filter=accessPackage/id eq '9bbe5f7d-f1e7-4eb1-a586-38cdf6f8b1ea'`. An application that has the application permission `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can also use this API.
+You can also retrieve requests for an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the API to [list accessPackageAssignmentRequests](/graph/api/entitlementmanagement-list-accesspackageassignmentrequests?view=graph-rest-beta&preserve-view=true). You can supply a filter to indicate a specific access package, such as: `$expand=accessPackage&$filter=accessPackage/id eq '9bbe5f7d-f1e7-4eb1-a586-38cdf6f8b1ea'`. An application that has the `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` application permission can also use this API.
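As a minimal sketch, the expand and filter shown above can be passed as query parameters; the token is a placeholder:

```bash
# List requests for one access package, expanding the accessPackage details
curl -G 'https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentRequests' \
  --data-urlencode '$expand=accessPackage' \
  --data-urlencode "\$filter=accessPackage/id eq '9bbe5f7d-f1e7-4eb1-a586-38cdf6f8b1ea'" \
  -H "Authorization: Bearer <ACCESS TOKEN>"
```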
## Remove request (Preview)
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-catalog-create.md
There are two ways to create a catalog programmatically.
### Create a catalog with Microsoft Graph
-You can create a catalog by using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application with that application permission, can call the API to [create an accessPackageCatalog](/graph/api/accesspackagecatalog-post?view=graph-rest-beta&preserve-view=true).
+You can create a catalog by using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application with that application permission, can call the API to [create an accessPackageCatalog](/graph/api/entitlementmanagement-post-accesspackagecatalogs?view=graph-rest-beta&preserve-view=true).
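A minimal sketch of that call; the display name and description are illustrative values, and the token is a placeholder:

```bash
# Create a new access package catalog
curl 'https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageCatalogs' -X POST \
  -d '{"displayName":"Example catalog","description":"Resources for an example project"}' \
  -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
```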
### Create a catalog with PowerShell
To require attributes for access requests:
### Add a resource to a catalog programmatically
-You can also add a resource to a catalog by using Microsoft Graph. A user in an appropriate role, or a catalog and resource owner, with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to [create an accessPackageResourceRequest](/graph/api/accesspackageresourcerequest-post?view=graph-rest-beta&preserve-view=true). An application with application permissions can't yet programmatically add a resource without a user context at the time of the request, however.
+You can also add a resource to a catalog by using Microsoft Graph. A user in an appropriate role, or a catalog and resource owner, with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to [create an accessPackageResourceRequest](/graph/api/entitlementmanagement-post-accesspackageresourcerequests?view=graph-rest-beta&preserve-view=true). However, an application with only application permissions can't yet add a resource programmatically without a user context at the time of the request.
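A sketch of such a request for adding an Azure AD group resource, assuming placeholder IDs; `AadGroup` is the origin system value for group resources:

```bash
# Request that a group be added to a catalog as a resource
curl 'https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageResourceRequests' -X POST \
  -d '{"catalogId":"<CATALOG ID>","requestType":"AdminAdd","accessPackageResource":{"originId":"<GROUP OBJECT ID>","originSystem":"AadGroup"}}' \
  -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
```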
## Remove resources from a catalog
active-directory Plan Connect Topologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/plan-connect-topologies.md
It's possible to have more than one staging server when you want to have multipl
## Multiple Azure AD tenants

We recommend having a single tenant in Azure AD for an organization. Before you plan to use multiple Azure AD tenants, see the article [Administrative units management in Azure AD](../roles/administrative-units.md). It covers common scenarios where you can use a single tenant.
-### (Public preview) Each object multiple times in an Azure AD tenant
+### (Public preview) Sync AD objects to multiple Azure AD tenants
![Diagram that shows a topology of multiple Azure A D tenants.](./media/plan-connect-topologies/multi-tenant-1.png)
active-directory F5 Big Ip Kerberos Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md
Having a BIG-IP in front of the application enables us to overlay the service wi
The secure hybrid access solution for this scenario is made up of the following:
-**Application:** Backend service protected by Azure AD and BIG-IP SHA. The application host is domain-joined and so is integrated with Active Directory (AD).
+**Application:** BIG-IP published service to be protected by Azure AD SHA. The application host is domain-joined and so is integrated with Active Directory (AD).
**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SSO to the BIG-IP APM.
Secure hybrid access for this scenario supports both SP and IdP initiated flows.
| Steps| Description|
| -- |-|
-| 1| User connects to application endpoint (BIG-IP) |
-| 2| BIG-IP access policy redirects user to Azure AD (SAML IdP) |
+| 1| User connects to SAML SP endpoint for application (BIG-IP APM) |
+| 2| APM access policy redirects user to Azure AD (SAML IdP) |
| 3| Azure AD pre-authenticates user and applies any enforced CA policies |
| 4| User is redirected to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
| 5| BIG-IP requests Kerberos ticket from KDC |
Prior BIG-IP experience isn't necessary, but you will need:
There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the latest Guided Configuration 16.1, which offers an Easy Button template.
-With the **Easy Button**, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The end-to-end deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures applications can quickly, easily support identity federation, SSO, and Azure AD MFA, without management overhead of having to do on a per app basis.
+With the **Easy Button**, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The end-to-end deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
The advanced approach provides a more flexible way of implementing SHA by manually creating all BIG-IP configuration objects. You would also use this approach for scenarios not covered by the guided configuration templates.
The advanced approach provides a more flexible way of implementing SHA by manual
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be trusted by the Microsoft identity platform. Registering with Azure AD establishes a trust relationship between your application and the IdP. BIG-IP must also be registered as a client in Azure AD, before the Easy Button wizard is trusted to access Microsoft Graph.
+Before a client or service can access Microsoft Graph, it must be trusted by the Microsoft identity platform by being registered with Azure AD. A BIG-IP must also be registered as a client in Azure AD before the Easy Button wizard is trusted to access Microsoft Graph.
1. Sign in to the [Azure AD portal](https://portal.azure.com/) using an account with Application Administrative rights
active-directory F5 Big Ip Ldap Header Easybutton https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md
To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD i
For this scenario, we have a legacy application using HTTP authorization headers to control access to protected content. Azure AD pre-authentication provides the user identifier, while other attributes fetched from an LDAP-connected Human Resource (HR) system provide fine-grained application permissions.
-Ideally, Azure AD should manage the application, but being legacy it does not support any form of modern authentication protocol. Modernization would take considerable effort, introducing inevitable costs and risk of potential downtime.
+Ideally, application access should be managed directly by Azure AD, but being legacy, it lacks any form of modern authentication protocol. Modernization would take considerable effort and time, introducing inevitable costs and risk of potential downtime.
-Instead, a BIG-IP Virtual Edition (VE) deployed between the public internet and the internal Azure VNet application is connected and will be used to gate inbound access to the application, along with Azure AD for its extensive choice of authentication and authorization capabilities.
+Instead, a BIG-IP deployed between the public internet and the internal application will be used to gate inbound access to the application.
-Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and header-based SSO. It significantly improves the overall security posture of the application, and allows the business to continue operating at pace, without interruption.
+Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application.
## Scenario architecture

The secure hybrid access solution for this scenario is made up of:
-**Application:** Backend header-based service to be protected by Azure AD and BIG-IP secure hybrid access.
+**Application:** BIG-IP published service to be protected by Azure AD SHA.
-**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SSO to the BIG-IP APM.
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SSO to the BIG-IP APM. Through SSO, Azure AD provides the BIG-IP with any required session attributes.
-**HR system:** Legacy employee database acting as source of truth for application authorization
+**HR system:** Legacy employee database acting as the source of truth for fine-grained application permissions.
**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the backend application.
For scenarios where the Guided Configuration lacks the flexibility to achieve a
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be trusted by the Microsoft identity platform. Registering with Azure AD establishes a trust relationship between your application and the identity provider. BIG-IP must also be registered as a client in Azure AD, before the Easy Button wizard is trusted to access Microsoft Graph.
+Before a client or service can access Microsoft Graph, it must be trusted by the Microsoft identity platform by being registered with Azure AD. A BIG-IP must also be registered as a client in Azure AD before the Easy Button wizard is trusted to access Microsoft Graph.
1. Sign in to the [Azure AD portal](https://portal.azure.com) using an account with Application Administrative rights
active-directory Qs Configure Rest Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/qs-configure-rest-vm.md
To create an Azure VM with the system-assigned managed identity enabled, your ac
4. Using Azure Cloud Shell, create a VM using CURL to call the Azure Resource Manager REST endpoint. The following example creates a VM named *myVM* with a system-assigned managed identity, as identified in the request body by the value `"identity":{"type":"SystemAssigned"}`. Replace `<ACCESS TOKEN>` with the value you received in the previous step when you requested a Bearer access token and the `<SUBSCRIPTION ID>` value as appropriate for your environment.

```bash
- curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-01' -X PUT -d '{"location":"westus","name":"myVM","identity":{"type":"SystemAssigned"},"properties":{"hardwareProfile":{"vmSize":"Standard_D2_v2"},"storageProfile":{"imageReference":{"sku":"2016-Datacenter","publisher":"MicrosoftWindowsServer","version":"latest","offer":"WindowsServer"},"osDisk":{"caching":"ReadWrite","managedDisk":{"storageAccountType":"Standard_LRS"},"name":"myVM3osdisk","createOption":"FromImage"},"dataDisks":[{"diskSizeGB":1023,"createOption":"Empty","lun":0},{"diskSizeGB":1023,"createOption":"Empty","lun":1}]},"osProfile":{"adminUsername":"azureuser","computerName":"myVM","adminPassword":"<SECURE PASSWORD STRING>"},"networkProfile":{"networkInterfaces":[{"id":"/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myNic","properties":{"primary":true}}]}}}' -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
+ curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-01' -X PUT -d '{"location":"westus","name":"myVM","identity":{"type":"SystemAssigned"},"properties":{"hardwareProfile":{"vmSize":"Standard_D2_v2"},"storageProfile":{"imageReference":{"sku":"2016-Datacenter","publisher":"MicrosoftWindowsServer","version":"latest","offer":"WindowsServer"},"osDisk":{"caching":"ReadWrite","managedDisk":{"storageAccountType":"StandardSSD_LRS"},"name":"myVM3osdisk","createOption":"FromImage"},"dataDisks":[{"diskSizeGB":1023,"createOption":"Empty","lun":0},{"diskSizeGB":1023,"createOption":"Empty","lun":1}]},"osProfile":{"adminUsername":"azureuser","computerName":"myVM","adminPassword":"<SECURE PASSWORD STRING>"},"networkProfile":{"networkInterfaces":[{"id":"/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myNic","properties":{"primary":true}}]}}}' -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
```

```HTTP
To create an Azure VM with the system-assigned managed identity enabled, your ac
"osDisk":{ "caching":"ReadWrite", "managedDisk":{
- "storageAccountType":"Standard_LRS"
+ "storageAccountType":"StandardSSD_LRS"
}, "name":"myVM3osdisk", "createOption":"FromImage"
To assign a user-assigned identity to a VM, your account needs the [Virtual Mach
**API VERSION 2018-06-01**

```bash
- curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-01' -X PUT -d '{"location":"westus","name":"myVM","identity":{"type":"UserAssigned","identityIds":["/subscriptions/<SUBSCRIPTION ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1"]},"properties":{"hardwareProfile":{"vmSize":"Standard_D2_v2"},"storageProfile":{"imageReference":{"sku":"2016-Datacenter","publisher":"MicrosoftWindowsServer","version":"latest","offer":"WindowsServer"},"osDisk":{"caching":"ReadWrite","managedDisk":{"storageAccountType":"Standard_LRS"},"name":"myVM3osdisk","createOption":"FromImage"},"dataDisks":[{"diskSizeGB":1023,"createOption":"Empty","lun":0},{"diskSizeGB":1023,"createOption":"Empty","lun":1}]},"osProfile":{"adminUsername":"azureuser","computerName":"myVM","adminPassword":"myPassword12"},"networkProfile":{"networkInterfaces":[{"id":"/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myNic","properties":{"primary":true}}]}}}' -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
+ curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-01' -X PUT -d '{"location":"westus","name":"myVM","identity":{"type":"UserAssigned","identityIds":["/subscriptions/<SUBSCRIPTION ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1"]},"properties":{"hardwareProfile":{"vmSize":"Standard_D2_v2"},"storageProfile":{"imageReference":{"sku":"2016-Datacenter","publisher":"MicrosoftWindowsServer","version":"latest","offer":"WindowsServer"},"osDisk":{"caching":"ReadWrite","managedDisk":{"storageAccountType":"StandardSSD_LRS"},"name":"myVM3osdisk","createOption":"FromImage"},"dataDisks":[{"diskSizeGB":1023,"createOption":"Empty","lun":0},{"diskSizeGB":1023,"createOption":"Empty","lun":1}]},"osProfile":{"adminUsername":"azureuser","computerName":"myVM","adminPassword":"myPassword12"},"networkProfile":{"networkInterfaces":[{"id":"/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myNic","properties":{"primary":true}}]}}}' -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
```

```HTTP
To assign a user-assigned identity to a VM, your account needs the [Virtual Mach
"osDisk":{ "caching":"ReadWrite", "managedDisk":{
- "storageAccountType":"Standard_LRS"
+ "storageAccountType":"StandardSSD_LRS"
}, "name":"myVM3osdisk", "createOption":"FromImage"
To assign a user-assigned identity to a VM, your account needs the [Virtual Mach
**API VERSION 2017-12-01**

```bash
- curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2017-12-01' -X PUT -d '{"location":"westus","name":"myVM","identity":{"type":"UserAssigned","identityIds":["/subscriptions/<SUBSCRIPTION ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1"]},"properties":{"hardwareProfile":{"vmSize":"Standard_D2_v2"},"storageProfile":{"imageReference":{"sku":"2016-Datacenter","publisher":"MicrosoftWindowsServer","version":"latest","offer":"WindowsServer"},"osDisk":{"caching":"ReadWrite","managedDisk":{"storageAccountType":"Standard_LRS"},"name":"myVM3osdisk","createOption":"FromImage"},"dataDisks":[{"diskSizeGB":1023,"createOption":"Empty","lun":0},{"diskSizeGB":1023,"createOption":"Empty","lun":1}]},"osProfile":{"adminUsername":"azureuser","computerName":"myVM","adminPassword":"myPassword12"},"networkProfile":{"networkInterfaces":[{"id":"/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myNic","properties":{"primary":true}}]}}}' -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
+ curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2017-12-01' -X PUT -d '{"location":"westus","name":"myVM","identity":{"type":"UserAssigned","identityIds":["/subscriptions/<SUBSCRIPTION ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1"]},"properties":{"hardwareProfile":{"vmSize":"Standard_D2_v2"},"storageProfile":{"imageReference":{"sku":"2016-Datacenter","publisher":"MicrosoftWindowsServer","version":"latest","offer":"WindowsServer"},"osDisk":{"caching":"ReadWrite","managedDisk":{"storageAccountType":"StandardSSD_LRS"},"name":"myVM3osdisk","createOption":"FromImage"},"dataDisks":[{"diskSizeGB":1023,"createOption":"Empty","lun":0},{"diskSizeGB":1023,"createOption":"Empty","lun":1}]},"osProfile":{"adminUsername":"azureuser","computerName":"myVM","adminPassword":"myPassword12"},"networkProfile":{"networkInterfaces":[{"id":"/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myNic","properties":{"primary":true}}]}}}' -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
```

```HTTP
To assign a user-assigned identity to a VM, your account needs the [Virtual Mach
"osDisk":{ "caching":"ReadWrite", "managedDisk":{
- "storageAccountType":"Standard_LRS"
+ "storageAccountType":"StandardSSD_LRS"
}, "name":"myVM3osdisk", "createOption":"FromImage"
active-directory Qs Configure Rest Vmss https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/qs-configure-rest-vmss.md
To create a virtual machine scale set with system-assigned managed identity enab
4. Using Azure Cloud Shell, create a virtual machine scale set using CURL to call the Azure Resource Manager REST endpoint. The following example creates a virtual machine scale set named *myVMSS* in the *myResourceGroup* with a system-assigned managed identity, as identified in the request body by the value `"identity":{"type":"SystemAssigned"}`. Replace `<ACCESS TOKEN>` with the value you received in the previous step when you requested a Bearer access token and the `<SUBSCRIPTION ID>` value as appropriate for your environment.

```bash
- curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachineScaleSets/myVMSS?api-version=2018-06-01' -X PUT -d '{"sku":{"tier":"Standard","capacity":3,"name":"Standard_D1_v2"},"location":"eastus","identity":{"type":"SystemAssigned"},"properties":{"overprovision":true,"virtualMachineProfile":{"storageProfile":{"imageReference":{"sku":"2016-Datacenter","publisher":"MicrosoftWindowsServer","version":"latest","offer":"WindowsServer"},"osDisk":{"caching":"ReadWrite","managedDisk":{"storageAccountType":"Standard_LRS"},"createOption":"FromImage"}},"osProfile":{"computerNamePrefix":"myVMSS","adminUsername":"azureuser","adminPassword":"myPassword12"},"networkProfile":{"networkInterfaceConfigurations":[{"name":"myVMSS","properties":{"primary":true,"enableIPForwarding":true,"ipConfigurations":[{"name":"myVMSS","properties":{"subnet":{"id":"/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet"}}}]}}]}},"upgradePolicy":{"mode":"Manual"}}}' -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
+ curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachineScaleSets/myVMSS?api-version=2018-06-01' -X PUT -d '{"sku":{"tier":"Standard","capacity":3,"name":"Standard_D1_v2"},"location":"eastus","identity":{"type":"SystemAssigned"},"properties":{"overprovision":true,"virtualMachineProfile":{"storageProfile":{"imageReference":{"sku":"2016-Datacenter","publisher":"MicrosoftWindowsServer","version":"latest","offer":"WindowsServer"},"osDisk":{"caching":"ReadWrite","managedDisk":{"storageAccountType":"StandardSSD_LRS"},"createOption":"FromImage"}},"osProfile":{"computerNamePrefix":"myVMSS","adminUsername":"azureuser","adminPassword":"myPassword12"},"networkProfile":{"networkInterfaceConfigurations":[{"name":"myVMSS","properties":{"primary":true,"enableIPForwarding":true,"ipConfigurations":[{"name":"myVMSS","properties":{"subnet":{"id":"/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet"}}}]}}]}},"upgradePolicy":{"mode":"Manual"}}}' -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
```

```HTTP
To create a virtual machine scale set with system-assigned managed identity enab
"osDisk":{ "caching":"ReadWrite", "managedDisk":{
- "storageAccountType":"Standard_LRS"
+ "storageAccountType":"StandardSSD_LRS"
}, "createOption":"FromImage" }
In this section, you learn how to add and remove user-assigned managed identity
**API VERSION 2018-06-01**

```bash
- curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachineScaleSets/myVMSS?api-version=2018-06-01' -X PUT -d '{"sku":{"tier":"Standard","capacity":3,"name":"Standard_D1_v2"},"location":"eastus","identity":{"type":"UserAssigned","userAssignedIdentities":{"/subscriptions/<SUBSCRIPTION ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1":{}}},"properties":{"overprovision":true,"virtualMachineProfile":{"storageProfile":{"imageReference":{"sku":"2016-Datacenter","publisher":"MicrosoftWindowsServer","version":"latest","offer":"WindowsServer"},"osDisk":{"caching":"ReadWrite","managedDisk":{"storageAccountType":"Standard_LRS"},"createOption":"FromImage"}},"osProfile":{"computerNamePrefix":"myVMSS","adminUsername":"azureuser","adminPassword":"myPassword12"},"networkProfile":{"networkInterfaceConfigurations":[{"name":"myVMSS","properties":{"primary":true,"enableIPForwarding":true,"ipConfigurations":[{"name":"myVMSS","properties":{"subnet":{"id":"/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet"}}}]}}]}},"upgradePolicy":{"mode":"Manual"}}}' -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
+ curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachineScaleSets/myVMSS?api-version=2018-06-01' -X PUT -d '{"sku":{"tier":"Standard","capacity":3,"name":"Standard_D1_v2"},"location":"eastus","identity":{"type":"UserAssigned","userAssignedIdentities":{"/subscriptions/<SUBSCRIPTION ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1":{}}},"properties":{"overprovision":true,"virtualMachineProfile":{"storageProfile":{"imageReference":{"sku":"2016-Datacenter","publisher":"MicrosoftWindowsServer","version":"latest","offer":"WindowsServer"},"osDisk":{"caching":"ReadWrite","managedDisk":{"storageAccountType":"StandardSSD_LRS"},"createOption":"FromImage"}},"osProfile":{"computerNamePrefix":"myVMSS","adminUsername":"azureuser","adminPassword":"myPassword12"},"networkProfile":{"networkInterfaceConfigurations":[{"name":"myVMSS","properties":{"primary":true,"enableIPForwarding":true,"ipConfigurations":[{"name":"myVMSS","properties":{"subnet":{"id":"/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet"}}}]}}]}},"upgradePolicy":{"mode":"Manual"}}}' -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
```

```HTTP
In this section, you learn how to add and remove user-assigned managed identity
"osDisk":{ "caching":"ReadWrite", "managedDisk":{
- "storageAccountType":"Standard_LRS"
+ "storageAccountType":"StandardSSD_LRS"
}, "createOption":"FromImage" }
In this section, you learn how to add and remove user-assigned managed identity
**API VERSION 2017-12-01**

```bash
- curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachineScaleSets/myVMSS?api-version=2017-12-01' -X PUT -d '{"sku":{"tier":"Standard","capacity":3,"name":"Standard_D1_v2"},"location":"eastus","identity":{"type":"UserAssigned","identityIds":["/subscriptions/<SUBSCRIPTION ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1"]},"properties":{"overprovision":true,"virtualMachineProfile":{"storageProfile":{"imageReference":{"sku":"2016-Datacenter","publisher":"MicrosoftWindowsServer","version":"latest","offer":"WindowsServer"},"osDisk":{"caching":"ReadWrite","managedDisk":{"storageAccountType":"Standard_LRS"},"createOption":"FromImage"}},"osProfile":{"computerNamePrefix":"myVMSS","adminUsername":"azureuser","adminPassword":"myPassword12"},"networkProfile":{"networkInterfaceConfigurations":[{"name":"myVMSS","properties":{"primary":true,"enableIPForwarding":true,"ipConfigurations":[{"name":"myVMSS","properties":{"subnet":{"id":"/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet"}}}]}}]}},"upgradePolicy":{"mode":"Manual"}}}' -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
+ curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachineScaleSets/myVMSS?api-version=2017-12-01' -X PUT -d '{"sku":{"tier":"Standard","capacity":3,"name":"Standard_D1_v2"},"location":"eastus","identity":{"type":"UserAssigned","identityIds":["/subscriptions/<SUBSCRIPTION ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1"]},"properties":{"overprovision":true,"virtualMachineProfile":{"storageProfile":{"imageReference":{"sku":"2016-Datacenter","publisher":"MicrosoftWindowsServer","version":"latest","offer":"WindowsServer"},"osDisk":{"caching":"ReadWrite","managedDisk":{"storageAccountType":"StandardSSD_LRS"},"createOption":"FromImage"}},"osProfile":{"computerNamePrefix":"myVMSS","adminUsername":"azureuser","adminPassword":"myPassword12"},"networkProfile":{"networkInterfaceConfigurations":[{"name":"myVMSS","properties":{"primary":true,"enableIPForwarding":true,"ipConfigurations":[{"name":"myVMSS","properties":{"subnet":{"id":"/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet"}}}]}}]}},"upgradePolicy":{"mode":"Manual"}}}' -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
```

```HTTP
In this section, you learn how to add and remove user-assigned managed identity
"osDisk":{ "caching":"ReadWrite", "managedDisk":{
- "storageAccountType":"Standard_LRS"
+ "storageAccountType":"StandardSSD_LRS"
}, "createOption":"FromImage" }
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
To access this property, you need an Azure Active Directory Premium edition.
To read this property, you need to grant the following rights (a query sketch follows this list):
- AuditLog.Read.All
-- Organization.Read.All
+- Directory.Read.All
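As a minimal sketch, once these permissions are granted, the property can be read through the `signInActivity` relationship on the Microsoft Graph beta users endpoint; the token is a placeholder:

```bash
# Read last sign-in activity for users (beta endpoint)
curl -G 'https://graph.microsoft.com/beta/users' \
  --data-urlencode '$select=displayName,signInActivity' \
  -H "Authorization: Bearer <ACCESS TOKEN>"
```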
### When does Azure AD update the property?
active-directory Groups Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-concept.md
Role-assignable groups are designed to help prevent potential breaches by having
- Only Global Administrators and Privileged Role Administrators can create a role-assignable group.
- The membership type for role-assignable groups must be Assigned and can't be an Azure AD dynamic group. Automated population of dynamic groups could lead to an unwanted account being added to the group and thus assigned to the role.
- By default, only Global Administrators and Privileged Role Administrators can manage the membership of a role-assignable group, but you can delegate the management of role-assignable groups by adding group owners.
-- RoleManagement.ReadWrite.All Microsoft Graph permission is required to be able to manage the membership of such groups; Group.ReadWrite.All won't work.
+- The RoleManagement.ReadWrite.Directory Microsoft Graph permission is required to manage the membership of such groups; Group.ReadWrite.All won't work (see the sketch after this list).
- To prevent elevation of privilege, only a Privileged Authentication Administrator or a Global Administrator can change the credentials or reset MFA for members and owners of a role-assignable group.
- Group nesting is not supported. A group can't be added as a member of a role-assignable group.
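To make the permission requirement concrete, here is a minimal sketch of adding a member to a role-assignable group through Microsoft Graph; the group ID, user ID, and token are placeholders, and the caller must hold RoleManagement.ReadWrite.Directory:

```bash
# Add a user to a role-assignable group; Group.ReadWrite.All alone is not sufficient
curl 'https://graph.microsoft.com/v1.0/groups/<GROUP ID>/members/$ref' -X POST \
  -d '{"@odata.id":"https://graph.microsoft.com/v1.0/directoryObjects/<USER ID>"}' \
  -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
```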
active-directory Cloudpassage Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cloudpassage-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
![Screenshot shows the CloudPassage portal with the S S O Setup Documentation link called out.](./media/cloudpassage-tutorial/tutorial_cloudpassage_05.png) > [!NOTE]
- > These values are not real. Update these values with the actual Sign-On URL and Reply URL. Contact [CloudPassage Client support team](https://www.cloudpassage.com/company/contact/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Sign-On URL and Reply URL. Contact [CloudPassage Client support team](https://fidelissecurity.com/contact/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. CloudPassage application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
When you click the CloudPassage tile in the Access Panel, you should be automati
[15]: ./media/cloudpassage-tutorial/tutorial_cloudpassage_10.png [22]: ./media/cloudpassage-tutorial/tutorial_cloudpassage_15.png [23]: ./media/cloudpassage-tutorial/tutorial_cloudpassage_16.png
-[24]: ./media/cloudpassage-tutorial/tutorial_cloudpassage_17.png
+[24]: ./media/cloudpassage-tutorial/tutorial_cloudpassage_17.png
active-directory Google Apps Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/google-apps-tutorial.md
Previously updated : 06/24/2021 Last updated : 12/27/2021
Follow these steps to enable Azure AD SSO in the Azure portal.
| **Reply URL** |
|--|
- | `https://www.google.com/acs` |
- | `https://www.google.com/a/<yourdomain.com>/acs` |
+ | `https://www.google.com` |
+ | `https://www.google.com/a/<yourdomain.com>` |
c. In the **Sign on URL** textbox, type a URL using the following pattern: `https://www.google.com/a/<yourdomain.com>/ServiceLogin?continue=https://mail.google.com`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Your Google Cloud (G Suite) Connector application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example for this. The default value of **Unique User Identifier** is **user.userprincipalname** but Google Cloud (G Suite) Connector expects this to be mapped with the user's email address. For that you can use **user.mail** attribute from the list or use the appropriate attribute value based on your organization configuration.
- ![image](common/default-attributes.png)
-
- > [!NOTE]
- > Ensure that the the SAML Response doesn't include any non-standard ASCII characters in the DisplayName and Surname attributes.
+ ![image](common/default-attributes.png)
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Open a new tab in your browser, and sign into the [Google Cloud (G Suite) Connector Admin Console](https://admin.google.com/) using your administrator account.
-2. Click **Security**. If you don't see the link, it may be hidden under the **More Controls** menu at the bottom of the screen.
-
- ![Click Security.](./media/google-apps-tutorial/gapps-security.png)
-
-3. On the **Security** page, click **Set up single sign-on (SSO).**
+1. Go to **Menu -> Security -> Authentication -> SSO with third party IDP**.
- ![Click SSO.](./media/google-apps-tutorial/security-gapps.png)
+ ![G suite security page.](./media/google-apps-tutorial/security.png)
-4. Perform the following configuration changes:
+4. Perform the following configuration changes in the **Third-party SSO profile for your organization** tab:
- ![Configure SSO.](./media/google-apps-tutorial/configuration.png)
+ ![Configure SSO.](./media/google-apps-tutorial/sso-configuration.png)
- a. Select **Setup SSO with third-party identity provider**.
+ a. Turn ON the **SSO profile for your organization**.
b. In the **Sign-in page URL** field in Google Cloud (G Suite) Connector, paste the value of **Login URL** which you have copied from Azure portal.
active-directory Settlingmusic Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/settlingmusic-tutorial.md
Previously updated : 11/17/2021 Last updated : 12/22/2021 # Tutorial: Azure AD SSO integration with Settling music
Follow these steps to enable Azure AD SSO in the Azure portal.
6. On the **Set up Settling music** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Copy configuration URLs](./media/settlingmusic-tutorial/copy-configuration-urls.png)
+
+ > [!NOTE]
+ > Use the following URL for the Logout URL.
+ ```Logout URL
+ https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0
+ ```
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
b. In the **Login URL of the ID provider** textbox, paste the value of **Login URL** which you have copied from Azure portal.
- c. In the **ID provider logout URL** textbox, paste the value of **Logout URL** which you have copied from Azure portal.
+ c. In the **ID provider logout URL** textbox, paste the value of **Logout URL**, which is explained in the [Configure Azure AD SSO](#configure-azure-ad-sso) section.
 d. Click **Choose File** to upload the **Certificate (Base64)** which you have downloaded from the Azure portal.
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/cluster-configuration.md
As you work with the node resource group, keep in mind that you can't:
- Specify names for the managed resources within the node resource group.
- Modify or delete Azure-created tags of managed resources within the node resource group.
+## OIDC Issuer (Preview)
+
+This enables an OIDC Issuer URL for the cluster, which allows the API server to discover public signing keys.
++
+### Before you begin
+
+You must have the following resources installed:
+
+* The Azure CLI
+* The `aks-preview` extension version 0.5.50 or later
+* Kubernetes version 1.19.x or above
++
+#### Register the `EnableOIDCIssuerPreview` feature flag
+
+To use the OIDC Issuer feature, you must enable the `EnableOIDCIssuerPreview` feature flag on your subscription.
+
+```azurecli
+az feature register --name EnableOIDCIssuerPreview --namespace Microsoft.ContainerService
+```
+You can check on the registration status by using the [az feature list][az-feature-list] command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableOIDCIssuerPreview')].{Name:name,State:properties.state}"
+```
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+#### Install the aks-preview CLI extension
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Create an AKS cluster with OIDC Issuer
+
+To create a cluster with OIDC Issuer enabled:
+
+```azurecli-interactive
+az group create --name myResourceGroup --location eastus
+az aks create -n aks -g myResourceGroup --enable-oidc-issuer
+```
+
+### Upgrade an AKS cluster with OIDC Issuer
+
+To upgrade an existing cluster to use OIDC Issuer:
+
+```azurecli-interactive
+az aks upgrade -n aks -g myResourceGroup --enable-oidc-issuer
+```
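Once the cluster is created or upgraded, you can read back the issuer URL from the cluster's `oidcIssuerProfile`; a quick sketch assuming the same cluster and resource group names as above, and that the preview extension exposes the profile:

```azurecli-interactive
az aks show -n aks -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -o tsv
```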
+
## Next steps

- Learn how to [upgrade the node images](node-image-upgrade.md) in your cluster.
As you work with the node resource group, keep in mind that you can't:
[az-feature-register]: /cli/azure/feature#az_feature_register [az-feature-list]: /cli/azure/feature#az_feature_list [az-provider-register]: /cli/azure/provider#az_provider_register
-[aks-add-np-containerd]: windows-container-cli.md#add-a-windows-server-node-pool-with-containerd-preview
+[aks-add-np-containerd]: windows-container-cli.md#add-a-windows-server-node-pool-with-containerd-preview
aks Kubernetes Action https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-action.md
description: Learn how to use GitHub Actions to deploy your container to Kubern
Previously updated : 11/06/2020 Last updated : 01/05/2022
Copy this JSON object, which you can use to authenticate from GitHub.
Follow the steps to configure the secrets:
-1. In [GitHub](https://github.com/), browse to your repository, select **Settings > Secrets > Add a new secret**.
+1. In [GitHub](https://github.com/), browse to your repository, select **Settings > Secrets > New repository secret**.
- ![Screenshot shows the Add a new secret link for a repository.](media/kubernetes-action/secrets.png)
+ :::image type="content" source="media/kubernetes-action/secrets.png" alt-text="Screenshot shows the Add a new secret link for a repository.":::
2. Paste the contents of the above `az cli` command as the value of secret variable. For example, `AZURE_CREDENTIALS`.
Follow the steps to configure the secrets:
4. You will see the secrets as shown below once defined.
- ![Screenshot shows existing secrets for a repository.](media/kubernetes-action/kubernetes-secrets.png)
+ :::image type="content" source="media/kubernetes-action/kubernetes-secrets.png" alt-text="Screenshot shows existing secrets for a repository.":::
+ ## Build a container image and deploy to Azure Kubernetes Service cluster
-The build and push of the container images is done using `Azure/docker-login@v1` action.
+The build and push of the container images is done using the `azure/docker-login@v1` action.
```yml
jobs:
- run: | docker build . -t ${{ env.REGISTRY_NAME }}.azurecr.io/${{ env.APP_NAME }}:${{ github.sha }} docker push ${{ env.REGISTRY_NAME }}.azurecr.io/${{ env.APP_NAME }}:${{ github.sha }}
+ working-directory: ./<path-to-Dockerfile-directory>
```

### Deploy to Azure Kubernetes Service cluster
-To deploy a container image to AKS, you will need to use the `Azure/k8s-deploy@v1` action. This action has five parameters:
+To deploy a container image to AKS, you will need to use the `azure/k8s-deploy@v1` action. This action has five parameters:
| **Parameter** | **Explanation** |
|---|---|
Before you can deploy to AKS, you'll need to set target Kubernetes namespace and
```yaml # Create namespace if doesn't exist - run: |
- kubectl create namespace ${{ env.NAMESPACE }} --dry-run -o json | kubectl apply -f -
-
+ kubectl create namespace ${{ env.NAMESPACE }} --dry-run=client -o json | kubectl apply -f -
+ # Create image pull secret for ACR - uses: azure/k8s-create-secret@v1 with:
Before you can deploy to AKS, you'll need to set target Kubernetes namespace and
```
-Complete your deployment with the `k8s-deploy` action. Replace the environment variables with values for your application.
+Complete your deployment with the `azure/k8s-deploy@v1` action. Replace the environment variables with values for your application.
```yaml
jobs:
- run: | docker build . -t ${{ env.REGISTRY_NAME }}.azurecr.io/${{ env.APP_NAME }}:${{ github.sha }} docker push ${{ env.REGISTRY_NAME }}.azurecr.io/${{ env.APP_NAME }}:${{ github.sha }}
+ working-directory: ./<path-to-Dockerfile-directory>
# Set the target Azure Kubernetes Service (AKS) cluster. - uses: azure/aks-set-context@v1
jobs:
# Create namespace if doesn't exist - run: |
- kubectl create namespace ${{ env.NAMESPACE }} --dry-run -o json | kubectl apply -f -
+ kubectl create namespace ${{ env.NAMESPACE }} --dry-run=client -o json | kubectl apply -f -
# Create image pull secret for ACR - uses: azure/k8s-create-secret@v1
jobs:
container-registry-password: ${{ secrets.REGISTRY_PASSWORD }} secret-name: ${{ env.SECRET }} namespace: ${{ env.NAMESPACE }}
- force: true
+ arguments: --force true
# Deploy app to AKS - uses: azure/k8s-deploy@v1 with: manifests: |
- manifests/deployment.yml
- manifests/service.yml
+ ${{ github.workspace }}/manifests/deployment.yaml
+ ${{ github.workspace }}/manifests/service.yaml
images: | ${{ env.REGISTRY_NAME }}.azurecr.io/${{ env.APP_NAME }}:${{ github.sha }} imagepullsecrets: |
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/supported-kubernetes-versions.md
Each number in the version indicates general compatibility with the previous ver
Aim to run the latest patch release of the minor version you're running. For example, your production cluster is on **`1.17.7`**. **`1.17.8`** is the latest patch version available for the *1.17* series. You should upgrade to **`1.17.8`** as soon as possible to ensure your cluster is fully patched and supported.
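As a quick way to see which patch versions a cluster can move to, you can query the available upgrades with the Azure CLI; the resource group and cluster names below are placeholders:

```azurecli-interactive
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
```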
-## Kubernetes Alias Minor Version (Preview)
+## Kubernetes version alias (Preview)
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]

> [!NOTE]
-> Alias Minor Version requires Azure CLI version 2.31.0 or above with the aks-preview extension installed. Please use `az upgrade` to install the latest version of the CLI.
+> Kubernetes version alias requires Azure CLI version 2.31.0 or above with the aks-preview extension installed. Please use `az upgrade` to install the latest version of the CLI.
You will need the *aks-preview* Azure CLI extension version 0.5.49 or greater. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
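As a minimal sketch of the alias in use, the following creates a cluster pinned only to a minor version, letting AKS pick the latest supported patch; the names are placeholders:

```azurecli-interactive
az aks create --name myAKSCluster --resource-group myResourceGroup --kubernetes-version 1.21
```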
app-service Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/firewall-integration.md
description: Learn how to integrate with Azure Firewall to secure outbound traff
ms.assetid: 955a4d84-94ca-418d-aa79-b57a5eb8cb85 Previously updated : 09/16/2021 Last updated : 01/05/2022
With an Azure Firewall, you automatically get everything below configured with t
| \*.ctldl.windowsupdate.com:443 |
| \*.prod.microsoftmetrics.com:443 |
| \*.dsms.core.windows.net:443 |
+| \*.prod.warm.ingest.monitor.core.windows.net |
### Linux dependencies
Linux is not available in US Gov regions and is thus not listed as an optional c
|\*.management.usgovcloudapi.net:443 |
|\*.update.microsoft.com:443 |
|\*.prod.microsoftmetrics.com:443 |
+| \*.prod.warm.ingest.monitor.core.usgovcloudapi.net |
<!--Image references--> [1]: ./media/firewall-integration/firewall-apprule.png
automation Automation Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-role-based-access-control.md
An Automation Contributor can manage all resources in the Automation account exc
|**Actions** |**Description** |
|---|---|
-|Microsoft.Automation/automationAccounts/*|Create and manage resources of all types under Automation account.|
|Microsoft.Authorization/*/read|Read roles and role assignments.|
|Microsoft.Resources/deployments/*|Create and manage resource group deployments.|
|Microsoft.Resources/subscriptions/resourceGroups/read|Read resource group deployments.|
A Log Analytics Contributor can read all monitoring data and edit monitoring set
|**Actions** |**Description** |
|---|---|
|*/read|Read resources of all types, except secrets.|
-|Microsoft.Automation/automationAccounts/*|Manage Automation accounts.|
|Microsoft.ClassicCompute/virtualMachines/extensions/*|Create and manage virtual machine extensions.|
|Microsoft.ClassicStorage/storageAccounts/listKeys/action|List classic storage account keys.|
|Microsoft.Compute/virtualMachines/extensions/*|Create and manage classic virtual machine extensions.|
Perform the following steps to create the Azure Automation custom role in the Az
"Microsoft.Insights/diagnosticSettings/*", "Microsoft.Resources/deployments/*", "Microsoft.Resources/subscriptions/resourceGroups/read",
- "Microsoft.Automation/automationAccounts/*",
"Microsoft.Support/*" ], "notActions": [],
Perform the following steps to create the Azure Automation custom role with Powe
"Microsoft.Insights/diagnosticSettings/*", "Microsoft.Resources/deployments/*", "Microsoft.Resources/subscriptions/resourceGroups/read",
- "Microsoft.Automation/automationAccounts/*",
"Microsoft.Support/*" ], "NotActions": [],
automation Desired State Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/desired-state-configuration.md
DSC configurations that take a long time to compile can cause this error.
You can make your DSC configurations parse faster by explicitly including the `ModuleName` parameter for any [Import-DSCResource](/powershell/dsc/configurations/import-dscresource) calls.
+## Scenario: Error while onboarding a machine
+
+### Issue
+
+You receive an `agent has a problem` error when you onboard a machine.
+
+### Cause
+
+This is a known issue. You cannot assign the same configuration again because the node remains in a pending state.
+
+### Resolution
+
+The workaround is to apply a different test configuration, and then apply the original configuration again.
+ ## Next steps If you don't see your problem here or you can't resolve your issue, try one of the following channels for additional support:
azure-maps Supported Languages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/supported-languages.md
Title: Localization support with Microsoft Azure Maps
description: See which regions Azure Maps supports with services such as maps, search, routing, weather, and traffic incidents. Learn how to set up the View parameter. Previously updated : 12/07/2020 Last updated : 01/05/2022
# Localization support in Azure Maps
-Azure Maps supports various languages and views based on country/region. This article provides the supported languages and views to help guide your Azure Maps implementation.
+Azure Maps supports various languages and views based on country/region. This article provides the supported languages and views to help guide your Azure Maps implementation.
## Azure Maps supported languages
-Azure Maps have been localized in variety languages across its services. The following table provides the supported language codes for each service. 
+Azure Maps has been localized in a variety of languages across its services. The following table provides the supported language codes for each service.
-| ID | Name | Maps | Search | Routing | Weather | Traffic incidents | JS map control |
-|----|------|:--:|:--:|:--:|:--:|:--:|:--:|
-| af-ZA | Afrikaans | | ✓ | ✓ | | | |
-| ar-SA | Arabic | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| bn-BD | Bangla (Bangladesh) | | | | ✓ | | |
-| bn-IN | Bangla (India) | | | | ✓ | | |
-| bs-BA | Bosnian | | | | ✓ | | |
-| eu-ES | Basque | | ✓ | | | | |
-| bg-BG | Bulgarian | ✓ | ✓ | ✓ | ✓ | | ✓ |
-| ca-ES | Catalan | | ✓ | | ✓ | | |
-| zh-HanS | Chinese (Simplified) | | zh-CN | | zh-CN | | |
-| zh-HanT | Chinese (Hong Kong SAR)| | | | zh-HK | | |
-| zh-HanT | Chinese (Taiwan) | zh-TW | zh-TW | zh-TW | zh-TW | | zh-TW |
-| hr-HR | Croatian | | ✓ | | ✓ | | |
-| cs-CZ | Czech | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| da-DK | Danish | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| nl-BE | Dutch (Belgium) | | ✓ | | ✓ | | |
-| nl-NL | Dutch (Netherlands) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| en-AU | English (Australia) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| en-NZ | English (New Zealand) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| en-GB | English (Great Britain)| ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| en-US | English (USA) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| et-EE | Estonian | | ✓ | | ✓ | ✓ | |
-| fil-PH | Filipino | | | | ✓ | | |
-| fi-FI | Finnish | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| fr-FR | French | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| fr-CA | French (Canada) | | ✓ | | ✓ | | |
-| gl-ES | Galician | | ✓ | | | | |
-| de-DE | German | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| el-GR | Greek | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| gu-IN | Gujarati | | | | ✓ | | |
-| he-IL | Hebrew | | ✓ | | ✓ | ✓ | |
-| hi-IN | Hindi | | | | ✓ | | |
-| hu-HU | Hungarian | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| is-IS | Icelandic | | | | ✓ | | |
-| id-ID | Indonesian | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| it-IT | Italian | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| ja-JP | Japanese | | | | ✓ | | |
-| kn-IN | Kannada | | | | ✓ | | |
-| kk-KZ | Kazakh | | ✓ | | ✓ | | |
-| ko-KR | Korean | ✓ | | ✓ | ✓ | | ✓ |
-| es-419 | Latin American Spanish | | ✓ | | | | |
-| lv-LV | Latvian | | ✓ | | ✓ | ✓ | |
-| lt-LT | Lithuanian | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| mk-MK | Macedonian | | | | ✓ | | |
-| ms-MY | Malay (Latin) | ✓ | ✓ | ✓ | ✓ | | ✓ |
-| mr-IN | Marathi | | | | ✓ | | |
-| nb-NO | Norwegian Bokmål | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| NGT | Neutral Ground Truth - Official languages for all regions in local scripts if available | ✓ | | | | | ✓ |
-| NGT-Latn | Neutral Ground Truth - Latin exonyms. Latin script will be used if available | ✓ | | | | | ✓ |
-| pl-PL | Polish | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| pt-BR | Portuguese (Brazil) | ✓ | ✓ | ✓ | ✓ | | ✓ |
-| pt-PT | Portuguese (Portugal) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| pa-IN | Punjabi | | | | ✓ | | |
-| ro-RO | Romanian | | ✓ | | ✓ | ✓ | |
-| ru-RU | Russian | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| sr-Cyrl-RS | Serbian (Cyrillic) | | sr-RS | | sr-RS | | |
-| sr-Latn-RS | Serbian (Latin) | | | | sr-latn | | |
-| sk-SK | Slovak | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| sl-SI | Slovenian | ✓ | ✓ | ✓ | ✓ | | ✓ |
-| es-ES | Spanish | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| es-MX | Spanish (Mexico) | ✓ | | ✓ | ✓ | | ✓ |
-| sv-SE | Swedish | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| ta-IN | Tamil (India) | | | | ✓ | | |
-| te-IN | Telugu (India) | | | | ✓ | | |
-| th-TH | Thai | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| tr-TR | Turkish | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| uk-UA | Ukrainian | | ✓ | | ✓ | | |
-| ur-PK | Urdu | | | | ✓ | | |
-| uz-Latn-UZ | Uzbek | | | | ✓ | | |
-| vi-VN | Vietnamese | | ✓ | | ✓ | | |
+| Code | Name | Maps | Search | Routing | Traffic | Weather |
+|------|------|:----:|:------:|:-------:|:-------:|:-------:|
+| af-ZA | Afrikaans | | ✓ | ✓ | | |
+| ar | Arabic | ✓ | ✓ | ✓ | ✓ | ✓ |
+| bg-BG | Bulgarian | ✓ | ✓ | ✓ | | ✓ |
+| bn-BD | Bangla (Bangladesh) | | | | | ✓ |
+| bn-IN | Bangla (India) | | | | | ✓ |
+| bs-BA | Bosnian | | | | | ✓ |
+| ca-ES | Catalan | | ✓ | | | ✓ |
+| cs-CZ | Czech | ✓ | ✓ | ✓ | ✓ | ✓ |
+| da-DK | Danish | ✓ | ✓ | ✓ | ✓ | ✓ |
+| de-DE | German | ✓ | ✓ | ✓ | ✓ | ✓ |
+| el-GR | Greek | ✓ | ✓ | ✓ | ✓ | ✓ |
+| en-AU | English (Australia) | ✓ | ✓ | | | ✓ |
+| en-GB | English (Great Britain) | ✓ | ✓ | ✓ | ✓ | ✓ |
+| en-NZ | English (New Zealand) | ✓ | ✓ | | ✓ | ✓ |
+| en-US | English (USA) | ✓ | ✓ | ✓ | ✓ | ✓ |
+| es-419 | Spanish (Latin America) | | ✓ | | | ✓ |
+| es-ES | Spanish (Spain) | ✓ | ✓ | ✓ | ✓ | ✓ |
+| es-MX | Spanish (Mexico) | ✓ | | ✓ | | ✓ |
+| et-EE | Estonian | | ✓ | | ✓ | ✓ |
+| eu-ES | Basque | | ✓ | | | |
+| fi-FI | Finnish | ✓ | ✓ | ✓ | ✓ | ✓ |
+| fil-PH | Filipino | | | | | ✓ |
+| fr-CA | French (Canada) | | ✓ | | | ✓ |
+| fr-FR | French (France) | ✓ | ✓ | ✓ | ✓ | ✓ |
+| gl-ES | Galician | | ✓ | | | |
+| gu-IN | Gujarati | | | | | ✓ |
+| he-IL | Hebrew | | ✓ | | ✓ | ✓ |
+| hi-IN | Hindi | | | | | ✓ |
+| hr-HR | Croatian | | ✓ | | | ✓ |
+| hu-HU | Hungarian | ✓ | ✓ | ✓ | ✓ | ✓ |
+| id-ID | Indonesian | ✓ | ✓ | ✓ | ✓ | ✓ |
+| is-IS | Icelandic | | | | | ✓ |
+| it-IT | Italian | ✓ | ✓ | ✓ | ✓ | ✓ |
+| ja-JP | Japanese | | | | | ✓ |
+| kk-KZ | Kazakh | | ✓ | | | ✓ |
+| kn-IN | Kannada | | | | | ✓ |
+| ko-KR | Korean | ✓ | | ✓ | | ✓ |
+| lt-LT | Lithuanian | ✓ | ✓ | ✓ | ✓ | ✓ |
+| lv-LV | Latvian | | ✓ | | ✓ | ✓ |
+| mk-MK | Macedonian | | | | | ✓ |
+| mr-IN | Marathi | | | | | ✓ |
+| ms-MY | Malay | ✓ | ✓ | ✓ | | ✓ |
+| nb-NO | Norwegian Bokmål | ✓ | ✓ | ✓ | ✓ | ✓ |
+| NGT | Neutral Ground Truth (Local)<sup>1</sup> | ✓ | ✓ | | | |
+| NGT-Latn | Neutral Ground Truth (Latin)<sup>2</sup> | ✓ | ✓ | | | |
+| nl-BE | Dutch (Belgium) | | ✓ | | | ✓ |
+| nl-NL | Dutch (Netherlands) | ✓ | ✓ | ✓ | ✓ | ✓ |
+| pa-IN | Punjabi | | | | | ✓ |
+| pl-PL | Polish | ✓ | ✓ | ✓ | ✓ | ✓ |
+| pt-BR | Portuguese (Brazil) | ✓ | ✓ | ✓ | | ✓ |
+| pt-PT | Portuguese (Portugal) | ✓ | ✓ | ✓ | ✓ | ✓ |
+| ro-RO | Romanian | | ✓ | | ✓ | ✓ |
+| ru-RU | Russian | ✓ | ✓ | ✓ | ✓ | ✓ |
+| sk-SK | Slovak | ✓ | ✓ | ✓ | ✓ | ✓ |
+| sl-SI | Slovenian | ✓ | ✓ | ✓ | | ✓ |
+| sr-Cyrl-RS | Serbian (Cyrillic) | | ✓ | | | ✓ |
+| sr-Latn-RS | Serbian (Latin) | | | | | ✓ |
+| sv-SE | Swedish | ✓ | ✓ | ✓ | ✓ | ✓ |
+| ta-IN | Tamil | | | | | ✓ |
+| te-IN | Telugu | | | | | ✓ |
+| th-TH | Thai | ✓ | ✓ | ✓ | ✓ | ✓ |
+| tr-TR | Turkish | ✓ | ✓ | ✓ | ✓ | ✓ |
+| uk-UA | Ukrainian | | ✓ | | | ✓ |
+| ur-PK | Urdu | | | | | ✓ |
+| uz-Latn-UZ | Uzbek | | | | | ✓ |
+| vi-VN | Vietnamese | | ✓ | | | ✓ |
+| zh-HanS-CN | Chinese (Simplified, China) | ✓ | ✓ | | | ✓ |
+| zh-HanT-HK | Chinese (Traditional, Hong Kong SAR) | | | | | ✓ |
+| zh-HanT-TW | Chinese (Traditional, Taiwan) | ✓ | ✓ | ✓ | | ✓ |
+
+<sup>1</sup> Neutral Ground Truth (Local) - Official languages for all regions in local scripts if available<br>
+<sup>2</sup> Neutral Ground Truth (Latin) - Latin exonyms will be used if available
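
The language code is typically passed to the REST services as a `language` query parameter. A hypothetical example (the subscription key and query are placeholders):

```bash
# Request French (fr-FR) results from the Search service.
curl "https://atlas.microsoft.com/search/address/json?api-version=1.0&subscription-key=<your-key>&query=paris&language=fr-FR"
```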
## Azure Maps supported views
Azure Maps has been localized in a variety of languages across its services. The fol
> * Morocco > * Pakistan >
-> After August 1, 2019, the **View** parameter will define the returned map content for the new regions/countries listed above. Azure Maps **View** parameter (also referred to as "user region parameter") is a two letter ISO-3166 Country Code that will show the correct maps for that country/region specifying which set of geopolitically disputed content is returned via Azure Maps services, including borders and labels displayed on the map.
+> After August 1, 2019, the **View** parameter will define the returned map content for the new regions/countries listed above. The Azure Maps **View** parameter (also referred to as the "user region parameter") is a two-letter ISO-3166 country code that specifies which set of geopolitically disputed content, including borders and labels displayed on the map, is returned via Azure Maps services for that country/region.
Make sure you set up the **View** parameter as required for the REST APIs and the SDKs, which your services are using. ### Rest APIs
-Ensure that you have set up the View parameter as required. View parameter specifies which set of geopolitically disputed content is returned via Azure Maps services.
+Ensure that you have set up the View parameter as required. The View parameter specifies which set of geopolitically disputed content is returned via Azure Maps services.
Affected Azure Maps REST
By default, the View parameter is set to **Unified**, even if you haven't define
The following table provides supported views.
-| View | Description | Maps | Search | JS Map Control |
-|------|-------------|:----:|:------:|:--------------:|
-| AE | United Arab Emirates (Arabic View) | ✓ | | ✓ |
-| AR | Argentina (Argentinian View) | ✓ | ✓ | ✓ |
-| BH | Bahrain (Arabic View) | ✓ | | ✓ |
-| IN | India (Indian View) | ✓ | ✓ | ✓ |
-| IQ | Iraq (Arabic View) | ✓ | | ✓ |
-| JO | Jordan (Arabic View) | ✓ | | ✓ |
-| KW | Kuwait (Arabic View) | ✓ | | ✓ |
-| LB | Lebanon (Arabic View) | ✓ | | ✓ |
-| MA | Morocco (Moroccan View) | ✓ | ✓ | ✓ |
-| OM | Oman (Arabic View) | ✓ | | ✓ |
-| PK | Pakistan (Pakistani View) | ✓ | ✓ | ✓ |
-| PS | Palestinian Authority (Arabic View) | ✓ | | ✓ |
-| QA | Qatar (Arabic View) | ✓ | | ✓ |
-| SA | Saudi Arabia (Arabic View) | ✓ | | ✓ |
-| SY | Syria (Arabic View) | ✓ | | ✓ |
-| YE | Yemen (Arabic View) | ✓ | | ✓ |
-| Auto | Return the map data based on the IP address of the request.| ✓ | ✓ | ✓ |
-| Unified | Unified View (Others) | ✓ | ✓ | ✓ |
+| View | Description | Maps | Search |
+|------|-------------|:----:|:------:|
+| AE | United Arab Emirates (Arabic View) | ✓ | |
+| AR | Argentina (Argentinian View) | ✓ | ✓ |
+| BH | Bahrain (Arabic View) | ✓ | |
+| IN | India (Indian View) | ✓ | ✓ |
+| IQ | Iraq (Arabic View) | ✓ | |
+| JO | Jordan (Arabic View) | ✓ | |
+| KW | Kuwait (Arabic View) | ✓ | |
+| LB | Lebanon (Arabic View) | ✓ | |
+| MA | Morocco (Moroccan View) | ✓ | ✓ |
+| OM | Oman (Arabic View) | ✓ | |
+| PK | Pakistan (Pakistani View) | ✓ | ✓ |
+| PS | Palestinian Authority (Arabic View) | ✓ | |
+| QA | Qatar (Arabic View) | ✓ | |
+| SA | Saudi Arabia (Arabic View) | ✓ | |
+| SY | Syria (Arabic View) | ✓ | |
+| YE | Yemen (Arabic View) | ✓ | |
+| Auto | Automatically detect based on request | ✓ | ✓ |
+| Unified | Unified View (Others) | ✓ | ✓ |
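
The view is passed the same way, as a `view` query parameter. A hypothetical example (the subscription key and query are placeholders):

```bash
# Request search results rendered with the Indian (IN) view.
curl "https://atlas.microsoft.com/search/address/json?api-version=1.0&subscription-key=<your-key>&query=new%20delhi&view=IN"
```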
azure-monitor Azure Monitor Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-install.md
It is strongly recommended to update to GA+ versions listed below instead of usi
| July 2021 | <ul><li>Support for direct proxies</li><li>Support for Log Analytics gateway</li></ul> [Learn more](https://azure.microsoft.com/updates/general-availability-azure-monitor-agent-and-data-collection-rules-now-support-direct-proxies-and-log-analytics-gateway/) | 1.1.1.0 | 1.10.5.0 | | August 2021 | Fixed issue allowing Azure Monitor Metrics as the only destination | 1.1.2.0 | 1.10.9.0<sup>1</sup> | | September 2021 | <ul><li>Fixed issue causing data loss on restarting the agent</li><li>Addressed regression introduced in 1.1.3.1<sup>2</sup> for Arc Windows servers</li></ul> | 1.1.3.2 | 1.12.2.0 <sup>2</sup> |
+| December 2021 | Fixed issues impacting Linux Arc-enabled servers | N/A | 1.14.7.0 |
<sup>1</sup> Do not use AMA Linux version 1.10.7.0<br>
<sup>2</sup> Known regression where it doesn't work on Arc-enabled servers
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-overview.md
In addition to consolidating this functionality into a single agent, the Azure M
### Current limitations When compared with the existing agents, this new agent doesn't yet have full parity. - **Comparison with Log Analytics agents (MMA/OMS):**
- - Not all Log Analytics solutions are supported today. See [what's supported](#supported-services-and-features).
- - No support for Azure Private Links.
- - No support for collecting file based logs or IIS logs.
+ - Not all Log Analytics solutions are supported today. [View supported features and services](#supported-services-and-features).
+ - No support for Azure Private Links.
+ - No support for collecting file based logs or IIS logs.
+ - **Comparison with Azure Diagnostics extensions (WAD/LAD):** - No support for Event Hubs and Storage accounts as destinations. - No support for collecting file based logs, IIS logs, ETW events, .NET events and crash dumps.
The Azure Monitor agent replaces the [legacy agents for Azure Monitor](agents-ov
- **Environment requirements:** The Azure Monitor agent supports [these operating systems](./agents-overview.md#supported-operating-systems) today. Support for future operating system versions, environment support, and networking requirements will most likely be provided in this new agent. Assess whether your environment is supported by the Azure Monitor agent. If not, you might need to stay with the current agent. If the Azure Monitor agent supports your current environment, consider transitioning to it.-- **Current and new feature requirements:** The Azure Monitor agent introduces several new capabilities, such as filtering, scoping, and multi-homing. But it isn't at parity yet with the current agents for other functionality, such as custom log collection and integration with all solutions. ([See the solutions in preview](../faq.yml).)
-
+- **Current and new feature requirements:** The Azure Monitor agent introduces several new capabilities, such as filtering, scoping, and multi-homing. But it isn't at parity yet with the current agents for other functionality, such as custom log collection and integration with all solutions. ([View supported features and services](#supported-services-and-features).)
+ Most new capabilities in Azure Monitor will be made available only with the Azure Monitor agent. Over time, more functionality will be available only in the new agent. Consider whether the Azure Monitor agent has the features you require and if there are some features that you can temporarily do without to get other important features in the new agent. If the Azure Monitor agent has all the core capabilities you require, consider transitioning to it. If there are critical features that you require, continue with the current agent until the Azure Monitor agent reaches parity.
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
You can use the Azure portal to create a data collection rule and associate virt
> [!NOTE] > If you wish to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated to machines in other supported region(s).
-In the **Azure Monitor** menu in the Azure portal, select **Data Collection Rules** from the **Settings** section. Click **Create** to create a new Data Collection Rule and assignment.
+In the **Monitor** menu in the Azure portal, select **Data Collection Rules** from the **Settings** section. Click **Create** to create a new Data Collection Rule and assignment.
[![Data Collection Rules](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png#lightbox)
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/api-filtering-sampling.md
To filter telemetry, you write a telemetry processor and register it with `Telem
```csharp using Microsoft.ApplicationInsights.Channel; using Microsoft.ApplicationInsights.Extensibility;
+ using Microsoft.ApplicationInsights.DataContracts;
public class SuccessfulDependencyFilter : ITelemetryProcessor {
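    // For context, a complete processor typically continues like the sketch
    // below; the filtering logic here is illustrative, not the article's exact code.
    private readonly ITelemetryProcessor _next;

    public SuccessfulDependencyFilter(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        // Drop dependency calls that succeeded; pass everything else on.
        if (item is DependencyTelemetry dependency && dependency.Success == true)
        {
            return;
        }

        _next.Process(item);
    }
}
```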
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-telemetry-processors.md
This section lists some common span attributes that telemetry processors can use
| Attribute | Type | Description | ||||
-| `db.system` | string | Identifier for the database management system (DBMS) product being used. |
+| `db.system` | string | Identifier for the database management system (DBMS) product being used. See [list of identifiers](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/database.md#connection-level-attributes). |
| `db.connection_string` | string | Connection string used to connect to the database. It's recommended to remove embedded credentials.| | `db.user` | string | Username for accessing the database. | | `db.name` | string | String used to report the name of the database being accessed. For commands that switch the database, this string should be set to the target database, even if the command fails.|
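
As a hedged sketch, an attribute processor in `applicationinsights.json` might act on one of these attributes like this (the delete action on `db.user` is illustrative):

```json
{
  "preview": {
    "processors": [
      {
        "type": "attribute",
        "actions": [
          {
            "key": "db.user",
            "action": "delete"
          }
        ]
      }
    ]
  }
}
```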
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
or configuring [telemetry processors](./java-standalone-telemetry-processors.md)
## Multiple applications in a single JVM
-Currently, Application Insights Java 3.x only supports a single
-[connection string and role name](./java-standalone-config.md#connection-string-and-role-name)
-per running process. In particular, you can't have multiple tomcat web apps in the same tomcat deployment
-using different connection strings or different role names yet.
+This use case is supported in Application Insights Java 3.x using [Instrumentation keys overrides (preview)](./java-standalone-config.md#instrumentation-keys-overrides-preview).
## Operation names
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/activity-log.md
If a log profile already exists, you first need to remove the existing log profi
```azurecli-interactive az monitor log-profiles create --name "default" --location null --locations "global" "eastus" "westus" --categories "Delete" "Write" "Action" --enabled false --days 0 --service-bus-rule-id "/subscriptions/<YOUR SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.EventHub/namespaces/<EVENT HUB NAME SPACE>/authorizationrules/RootManageSharedAccessKey" ```- | Property | Required | Description | | | | | | name |Yes |Name of your log profile. |
Diagnostic settings send the same data as the legacy method used to send the Act
The columns in the following table have been deprecated in the updated schema. They still exist in *AzureActivity* but they will have no data. The replacement columns are not new, but they contain the same data as the deprecated columns. They are in a different format, so you may need to modify log queries that use them.
-| Deprecated column | Replacement column |
-|:|:|
-| ActivityStatus | ActivityStatusValue |
-| ActivitySubstatus | ActivitySubstatusValue |
-| Category | CategoryValue |
-| OperationName | OperationNameValue |
-| ResourceProvider | ResourceProviderValue |
+|Activity Log JSON | Log Analytics column name<br/>*(older deprecated)* | New Log Analytics column name | Notes |
+|:|:|:|:|
+|category | Category | CategoryValue ||
+|status<br/><br/>*values are (success, start, accept, failure)* |ActivityStatus <br/><br/>*values same as JSON* |ActivityStatusValue<br/><br/>*values change to (succeeded, started, accepted, failed)* |The valid values change as shown|
+|subStatus |ActivitySubstatus |ActivitySubstatusValue||
+|operationName | OperationName | OperationNameValue |REST API localizes operation name value. Log Analytics UI always shows English. |
+|resourceProviderName | ResourceProvider | ResourceProviderValue ||
> [!IMPORTANT] > In some cases, the values in these columns may be in all uppercase. If you have a query that includes these columns, you should use the [=~ operator](/azure/kusto/query/datatypes-string-operators) to do a case insensitive comparison.
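
For example, a minimal query sketch using the replacement columns with case-insensitive comparison (the category and status values shown are illustrative):

```kusto
AzureActivity
| where CategoryValue =~ "administrative"
| where ActivityStatusValue =~ "succeeded"
| project TimeGenerated, OperationNameValue, ResourceProviderValue
```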
azure-monitor Sql Insights Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-enable.md
description: Enable SQL insights in Azure Monitor
Previously updated : 11/5/2021 Last updated : 1/6/2022 # Enable SQL insights (preview)
The Azure virtual machines has the following requirements.
- Operating system: Ubuntu 18.04 - Recommended minimum Azure virtual machine sizes: Standard_B2s (2 cpus, 4 GiB memory) -- Supported regions: Any [region supported by the Azure Monitor agent](../agents/azure-monitor-agent-overview.md#supported-regions)
+- Deployed in any Azure region [supported](../agents/azure-monitor-agent-overview.md#supported-regions) by the Azure Monitor agent, and meeting all Azure Monitor agent [prerequisites](../agents/azure-monitor-agent-install.md#prerequisites).
> [!NOTE] > The Standard_B2s (2 cpus, 4 GiB memory) virtual machine size will support up to 100 connection strings. You shouldn't allocate more than 100 connections to a single virtual machine.
azure-netapp-files Monitor Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/monitor-azure-netapp-files.md
+
+ Title: Ways to monitor Azure NetApp Files | Microsoft Docs
+description: Describes ways to monitor Azure NetApp Files, including the Activity log, metrics, and capacity utilization monitoring.
+ Last updated : 01/06/2022
+# Ways to monitor Azure NetApp Files
+
+This article describes ways to monitor Azure NetApp Files.
+
+## Azure Activity log
+
+The Activity log provides insight into subscription-level events. For instance, you can get information about when a resource is modified or when a virtual machine is started. You can view the Activity log in the Azure portal, retrieve entries with PowerShell and the Azure CLI, and send it to different destinations.
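
As a brief illustration, retrieving recent entries with the Azure CLI might look like this (the resource group name is a placeholder):

```azurecli
# List Activity log entries for a resource group over the last seven days.
az monitor activity-log list --resource-group myResourceGroup --offset 7d
```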
+
+To understand how Activity log works, see [Azure Activity log](../azure-monitor/essentials/activity-log.md).
+
+For Activity log warnings for Azure NetApp Files volumes, see [Activity log warnings for Azure NetApp Files volumes](troubleshoot-volumes.md#activity-log-warnings-for-volumes).
+
+## Azure NetApp Files metrics
+
+Azure NetApp Files provides metrics on allocated storage, actual storage usage, volume IOPS, and latency. By analyzing these metrics, you can gain a better understanding of the usage patterns and volume performance of your NetApp accounts.
+
+You can find metrics for a capacity pool or volume by selecting the **capacity pool** or **volume**. Then click **Metric** to view the available metrics.
+
+For more information about Azure NetApp Files metrics, see [Metrics for Azure NetApp Files](azure-netapp-files-metrics.md).
+
+## Capacity utilization monitoring
+
+It's important to monitor capacity regularly. You can monitor capacity utilization at the VM level. You can check the used and available capacity of a volume by using Windows or Linux clients. You can also configure alerts by using `ANFCapacityManager`.
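
For example, a quick check from a Linux client that mounts the volume might look like this (the mount path is a placeholder):

```bash
# Show used and available space on the mounted volume.
df -h /mnt/anf-volume
```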
+
+For more information, see [Monitor capacity utilization](volume-hard-quota-guidelines.md#how-to-operationalize-the-volume-hard-quota-change).
+
+## Next steps
+
+* [Azure Activity log](../azure-monitor/essentials/activity-log.md)
+* [Activity log warnings for Azure NetApp Files volumes](troubleshoot-volumes.md#activity-log-warnings-for-volumes)
+* [Metrics for Azure NetApp Files](azure-netapp-files-metrics.md)
+* [Monitor capacity utilization](volume-hard-quota-guidelines.md#how-to-operationalize-the-volume-hard-quota-change)
azure-netapp-files Troubleshoot Volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/troubleshoot-volumes.md
na Previously updated : 10/04/2021 Last updated : 01/06/2022 # Troubleshoot volume errors for Azure NetApp Files
This section explains the causes of some of the common allocation failures and s
|Out of storage or networking capacity in a region for regular volumes. <br> Error message: `There are currently insufficient resources available to create [or extend] a volume in this region. Please retry the operation. If the problem persists, contact Support.` | The error indicates that there are insufficient resources available in the region to create or resize volumes. <br> Try one of the following workarounds: <ul><li>Create the volume under a new VNet. Doing so will avoid hitting networking-related resource limits.</li> <li>Retry after some time. Resources may have been freed in the cluster, region, or zone in the interim.</li></ul> | |Out of storage capacity when creating a volume with network features set to `Standard`. <br> Error message: `No storage available with Standard network features, for the provided VNet.` | The error indicates that there are insufficient resources available in the region to create volumes with `Standard` networking features. <br> Try one of the following workarounds: <ul><li>If `Standard` network features are not required, create the volume with `Basic` network features.</li> <li>Try creating the volume under a new VNet. Doing so will avoid hitting networking-related resource limits</li><li>Retry after some time. Resources may have been freed in the cluster, region, or zone in the interim.</li></ul> |
+## Activity log warnings for volumes
+
+| Warnings | Resolutions |
+|-|-|
+| The `Microsoft.NetApp/netAppAccounts/capacityPools/volumes/ScaleUp` operation displays a warning: <br> `Percentage Volume Consumed Size reached 90%` | The used size of an Azure NetApp Files volume has reached 90% of the volume quota. You should [resize the volume](azure-netapp-files-resize-capacity-pools-or-volumes.md) soon. |
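
As a hedged sketch, resizing from the Azure CLI might look like the following (all names are placeholders, and the quota units should be confirmed against the current CLI reference):

```azurecli
# Increase the volume quota (usage threshold).
az netappfiles volume update --resource-group myResourceGroup \
    --account-name myNetAppAccount --pool-name myPool --name myVolume \
    --usage-threshold 150
```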
+ ## Next steps * [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md)
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/bicep-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.Blockchain/blockchainMembers | [listApiKeys](/rest/api/blockchain/2019-06-01-preview/blockchainmembers/listapikeys) | | Microsoft.Blockchain/blockchainMembers/transactionNodes | [listApiKeys](/rest/api/blockchain/2019-06-01-preview/transactionnodes/listapikeys) | | Microsoft.BotService/botServices/channels | [listChannelWithKeys](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/botservice/resource-manager/Microsoft.BotService/stable/2020-06-02/botservice.json#L553) |
-| Microsoft.Cache/redis | [listKeys](/rest/api/redis/redis/listkeys) |
+| Microsoft.Cache/redis | [listKeys](/rest/api/redis/2021-06-01/redis/list-keys) |
| Microsoft.CognitiveServices/accounts | [listKeys](/rest/api/cognitiveservices/accountmanagement/accounts/listkeys) | | Microsoft.ContainerRegistry/registries | [listBuildSourceUploadUrl](/rest/api/containerregistry/registries%20(tasks)/get-build-source-upload-url) | | Microsoft.ContainerRegistry/registries | [listCredentials](/rest/api/containerregistry/registries/listcredentials) |
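
For illustration, calling `listKeys` from Bicep on an existing cache might look like this sketch (the resource name is a placeholder; avoid emitting real secrets as outputs in production):

```bicep
resource redis 'Microsoft.Cache/redis@2021-06-01' existing = {
  name: 'myRedisCache'
}

// listKeys can be called on the symbolic resource reference.
output redisPrimaryKey string = redis.listKeys().primaryKey
```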
azure-resource-manager Quickstart Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/quickstart-private-module-registry.md
Learn how to publish Bicep modules to private modules registry, and how to call
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
-To work with module registries, you must have [Bicep CLI](./install.md#deployment-environment) version **0.4.1008** or later. To use with [Azure CLI](/azure/install-azure-cli), you must also have Azure CLI version **2.31.0** or later; to use with [Azure PowerShell](/powershell/azure/install-az-ps), you must also have Azure PowerShell version **7.0.0** or later.
+To work with module registries, you must have [Bicep CLI](./install.md#deployment-environment) version **0.4.1008** or later. To use with [Azure CLI](/cli/azure/install-azure-cli), you must also have Azure CLI version **2.31.0** or later; to use with [Azure PowerShell](/powershell/azure/install-az-ps), you must also have Azure PowerShell version **7.0.0** or later.
A Bicep registry is hosted on [Azure Container Registry (ACR)](../../container-registry/container-registry-intro.md). To create one, see [Quickstart: Create a container registry by using a Bicep file](../../container-registry/container-registry-get-started-bicep.md).
azure-resource-manager Linked Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/linked-templates.md
Title: Link templates for deployment description: Describes how to use linked templates in an Azure Resource Manager template (ARM template) to create a modular template solution. Shows how to pass parameters values, specify a parameter file, and dynamically created URLs. Previously updated : 09/10/2021 Last updated : 01/06/2022
For more information, see:
## Dependencies
-As with other resource types, you can set dependencies between the linked templates. If the resources in one linked template must be deployed before resources in a second linked template, set the second template dependent on the first.
+As with other resource types, you can set dependencies between the nested/linked templates. If the resources in one nested/linked template must be deployed before resources in a second nested/linked template, set the second template dependent on the first.
:::code language="json" source="~/resourcemanager-templates/azure-resource-manager/linkedtemplates/linked-dependency.json" highlight="10,22,24":::
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.Blockchain/blockchainMembers | [listApiKeys](/rest/api/blockchain/2019-06-01-preview/blockchainmembers/listapikeys) | | Microsoft.Blockchain/blockchainMembers/transactionNodes | [listApiKeys](/rest/api/blockchain/2019-06-01-preview/transactionnodes/listapikeys) | | Microsoft.BotService/botServices/channels | [listChannelWithKeys](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/botservice/resource-manager/Microsoft.BotService/stable/2020-06-02/botservice.json#L553) |
-| Microsoft.Cache/redis | [listKeys](/rest/api/redis/redis/listkeys) |
+| Microsoft.Cache/redis | [listKeys](/rest/api/redis/2021-06-01/redis/list-keys) |
| Microsoft.CognitiveServices/accounts | [listKeys](/rest/api/cognitiveservices/accountmanagement/accounts/listkeys) | | Microsoft.ContainerRegistry/registries | [listBuildSourceUploadUrl](/rest/api/containerregistry/registries%20(tasks)/get-build-source-upload-url) | | Microsoft.ContainerRegistry/registries | [listCredentials](/rest/api/containerregistry/registries/listcredentials) |
azure-signalr Concept Upstream https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/concept-upstream.md
When you select `ManagedIdentity`, you must enable a managed identity in Azure S
## Create upstream settings via the Azure portal
+> [!NOTE]
+> Integration with App Service Environment is currently not supported.
+ 1. Go to Azure SignalR Service. 2. Select **Settings** and switch **Service Mode** to **Serverless**. The upstream settings will appear:
Hex_encoded(HMAC_SHA256(accessKey, connection-id))
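
A minimal C# sketch of that computation, assuming UTF-8 encoding for both the access key and the connection ID (an assumption; confirm against the service documentation):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static string ComputeSignature(string accessKey, string connectionId)
{
    using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(accessKey));
    byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(connectionId));

    // Hex-encode the HMAC-SHA256 digest.
    return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
}
```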
- [Managed identities for Azure SignalR Service](howto-use-managed-identity.md) - [Azure Functions development and configuration with Azure SignalR Service](signalr-concept-serverless-development-config.md) - [Handle messages from SignalR Service (Trigger binding)](../azure-functions/functions-bindings-signalr-service-trigger.md)-- [SignalR Service Trigger binding sample](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/BidirectionChat)
+- [SignalR Service Trigger binding sample](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/BidirectionChat)
azure-sql Az Cli Script Samples Content Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/az-cli-script-samples-content-guide.md
You can configure Azure SQL Database and SQL Managed Instance by using the <a hr
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)]
+## Samples
## [Azure SQL Database](#tab/single-database)
The following table includes links to Azure CLI script examples for Azure SQL Ma
For additional SQL Managed Instance examples, see the [create](/archive/blogs/sqlserverstorageengine/create-azure-sql-managed-instance-using-azure-cli), [update](/archive/blogs/sqlserverstorageengine/modify-azure-sql-database-managed-instance-using-azure-cli), [move a database](/archive/blogs/sqlserverstorageengine/cross-instance-point-in-time-restore-in-azure-sql-database-managed-instance), and [working with](https://medium.com/azure-sqldb-managed-instance/working-with-sql-managed-instance-using-azure-cli-611795fe0b44) scripts. Learn more about the [SQL Managed Instance Azure CLI API](../managed-instance/api-references-create-manage-instance.md#azure-cli-create-and-configure-managed-instances).
azure-sql Failover Group Add Elastic Pool Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/failover-group-add-elastic-pool-tutorial.md
Previously updated : 12/10/2021 Last updated : 01/05/2022 # Tutorial: Add an Azure SQL Database elastic pool to a failover group
This portion of the tutorial uses the following PowerShell cmdlet:
# [Azure CLI](#tab/azure-cli)
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli echo "Cleaning up resources by removing the resource group..."
azure-sql Failover Group Add Single Database Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/failover-group-add-single-database-tutorial.md
Previously updated : 12/10/2021 Last updated : 01/05/2022 # Tutorial: Add an Azure SQL Database to an autofailover group
This portion of the tutorial uses the following PowerShell cmdlets:
# [Azure CLI](#tab/azure-cli)
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli echo "Cleaning up resources by removing the resource group..."
azure-sql Add Database To Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/add-database-to-failover-group-cli.md
Previously updated : 12/23/2021 Last updated : 01/05/2022 # Use Azure CLI to add a database to a failover group
Last updated 12/23/2021
This Azure CLI script example creates a database in Azure SQL Database, creates a failover group, adds the database to it, and tests failover.
-If you choose to install and use Azure CLI locally, this topic requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-
-> [!IMPORTANT]
-> When running Bash on Windows, run this script from within a Docker container.
## Sample script
-### Launch Azure Cloud Shell
-
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
-
-When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment, Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press **Enter** to run it.
-
-### Sign in to Azure
-
-Cloud Shell is automatically authenticated under the initial account signed-in with. Use the following script to sign in using a different subscription, replacing `<Subscription ID>` with your Azure Subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Run the script :::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-single-db-to-failover-group-az-cli.sh" range="4-47":::
-### Clean up resources
+## Clean up resources
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli az group delete --name $resourceGroup
azure-sql Add Elastic Pool To Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/add-elastic-pool-to-failover-group-cli.md
Previously updated : 12/23/2021 Last updated : 01/05/2022 # Use CLI to add an Azure SQL Database elastic pool to a failover group
-This Azure CLI script example creates a single database, adds it to an elastic pool, creates a failover group, and tests failover.
-If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+This Azure CLI script example creates a single database, adds it to an elastic pool, creates a failover group, and tests failover.
-> [!IMPORTANT]
-> When running Bash on Windows, run this script from within a Docker container.
## Sample script
-### Launch Azure Cloud Shell
-
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
-
-When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment, Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press **Enter** to run it.
-
-### Sign in to Azure
-
-Cloud Shell is automatically authenticated under the initial account signed-in with. Use the following script to sign in using a different subscription, replacing `<Subscription ID>` with your Azure Subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Run the script :::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-elastic-pool-to-failover-group-az-cli.sh" range="4-62":::
-### Clean up resources
+## Clean up resources
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli az group delete --name $resourceGroup
azure-sql Auditing Threat Detection Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/auditing-threat-detection-cli.md
Previously updated : 12/23/2021 Last updated : 01/05/2022 # Use CLI to configure SQL Database auditing and Advanced Threat Protection
-This Azure CLI script example configures SQL Database auditing and Advanced Threat Protection.
-If you choose to install and use Azure CLI locally, this topic requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+This Azure CLI script example configures SQL Database auditing and Advanced Threat Protection.
-> [!IMPORTANT]
-> When running Bash on Windows, run this script from within a Docker container.
## Sample script
-### Launch Azure Cloud Shell
-
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
-
-When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment, Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press **Enter** to run it.
-
-### Sign in to Azure
-
-Cloud Shell is automatically authenticated under the initial account signed-in with. Use the following script to sign in using a different subscription, replacing `<Subscription ID>` with your Azure Subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Run the script :::code language="azurecli" source="~/azure_cli_scripts/sql-database/database-auditing-and-threat-detection/database-auditing-and-threat-detection.sh" range="4-37":::
-### Clean up resources
+## Clean up resources
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli az group delete --name $resourceGroup
azure-sql Backup Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/backup-database-cli.md
Previously updated : 12/23/2021 Last updated : 01/05/2022 # Use CLI to backup an Azure SQL single database to an Azure storage container
-This Azure CLI example backs up a database in SQL Database to an Azure storage container.
-If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+This Azure CLI example backs up a database in SQL Database to an Azure storage container.
-> [!IMPORTANT]
-> When running Bash on Windows, run this script from within a Docker container.
## Sample script
-### Launch Azure Cloud Shell
-
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
-
-When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment, Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press **Enter** to run it.
-
-### Sign in to Azure
-
-Cloud Shell is automatically authenticated under the initial account signed-in with. Use the following script to sign in using a different subscription, replacing `<Subscription ID>` with your Azure Subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Run the script :::code language="azurecli" source="~/azure_cli_scripts/sql-database/backup-database/backup-database.sh" range="4-40":::
-### Clean up resources
+## Clean up resources
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli az group delete --name $resourceGroup
azure-sql Copy Database To New Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/copy-database-to-new-server-cli.md
Previously updated : 12/23/2021 Last updated : 01/05/2022 # Use CLI to copy a database in Azure SQL Database to a new server
-This Azure CLI script example creates a copy of an existing database in a new server.
-If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+This Azure CLI script example creates a copy of an existing database in a new server.
-> [!IMPORTANT]
-> When running Bash on Windows, run this script from within a Docker container.
## Sample script
-### Launch Azure Cloud Shell
-
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
-
-When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment, Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press **Enter** to run it.
-
-### Sign in to Azure
-
-Cloud Shell is automatically authenticated under the initial account signed-in with. Use the following script to sign in using a different subscription, replacing `<Subscription ID>` with your Azure Subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Run the script :::code language="azurecli" source="~/azure_cli_scripts/sql-database/copy-database-to-new-server/copy-database-to-new-server.sh" range="4-36":::
-### Clean up resources
+## Clean up resources
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli az group delete --name $targetResourceGroup
azure-sql Create And Configure Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/create-and-configure-database-cli.md
Previously updated : 12/23/2021 Last updated : 01/05/2022 # Use Azure CLI to create a single database and configure a firewall rule [!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
-This Azure CLI script example creates a single database in Azure SQL Database and configures a server-level firewall rule. After the script has been successfully run, the database can be accessed from all Azure services and the configured IP address.
+This Azure CLI script example creates a single database in Azure SQL Database and configures a server-level firewall rule. After the script has been successfully run, the database can be accessed from all Azure services and the allowed IP address range.
-If you choose to install and use Azure CLI locally, this topic requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
-
-> [!IMPORTANT]
-> When running Bash on Windows, run this script from within a Docker container.
## Sample script
-### Launch Azure Cloud Shell
-
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
-
-When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment, Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press **Enter** to run it.
-
-### Sign in to Azure
-
-Cloud Shell is automatically authenticated under the initial account signed-in with. Use the following script to sign in using a different subscription, replacing `<Subscription ID>` with your Azure Subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Run the script :::code language="azurecli" source="~/azure_cli_scripts/sql-database/create-and-configure-database/create-and-configure-database.sh" range="4-33":::
-### Clean up resources
+## Clean up resources
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli az group delete --name $resourceGroup
azure-sql Create And Configure Database Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/create-and-configure-database-powershell.md
Last updated 03/12/2019
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
-This Azure PowerShell script example creates a single database in Azure SQL Database and configures a server-level firewall rule. After the script has been successfully run, the database can be accessed from all Azure services and the configured IP address.
+This Azure PowerShell script example creates a single database in Azure SQL Database and configures a server-level firewall rule. After the script has been successfully run, the database can be accessed from all Azure services and the allowed IP address range.
[!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [updated-for-az](../../../../includes/updated-for-az.md)]
azure-sql Import From Bacpac Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/import-from-bacpac-cli.md
Previously updated : 12/23/2021 Last updated : 01/05/2022 # Use CLI to import a BACPAC file into a database in SQL Database
-This Azure CLI script example imports a database from a *.bacpac* file into a database in SQL Database.
-If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+This Azure CLI script example imports a database from a *.bacpac* file into a database in SQL Database.
-> [!IMPORTANT]
-> When running Bash on Windows, run this script from within a Docker container.
## Sample script
-### Sign in to Azure
-
-For this script, use Azure CLI locally as it takes too long to run in Cloud Shell. Use the following script to sign in using a specific subscription. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Run the script :::code language="azurecli" source="~/azure_cli_scripts/sql-database/import-from-bacpac/import-from-bacpac.sh" range="4-48":::
-### Clean up resources
+## Clean up resources
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli az group delete --name $resourceGroup
azure-sql Monitor And Scale Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/monitor-and-scale-database-cli.md
Previously updated : 12/23/2021 Last updated : 01/05/2022 # Use the Azure CLI to monitor and scale a single database in Azure SQL Database
Last updated 12/23/2021
This Azure CLI script example scales a single database in Azure SQL Database to a different compute size after querying the size information of the database.
-If you choose to install and use the Azure CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-> [!IMPORTANT]
-> When running Bash on Windows, run this script from within a Docker container.
## Sample script
-### Launch Azure Cloud Shell
-
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
-
-When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment, Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press **Enter** to run it.
-
-### Sign in to Azure
-
-Cloud Shell is automatically authenticated under the initial account signed-in with. Use the following script to sign in using a different subscription, replacing `<Subscription ID>` with your Azure Subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Run the script
For more information, see [set active subscription](/cli/azure/account#az_accoun
> [!TIP] > Use [az sql db op list](/cli/azure/sql/db/op?#az_sql_db_op_list) to get a list of operations performed on the database, and use [az sql db op cancel](/cli/azure/sql/db/op#az_sql_db_op_cancel) to cancel an update operation on the database.
-### Clean up resources
+## Clean up resources
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli
az group delete --name $resourceGroup
```
azure-sql Move Database Between Elastic Pools Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/move-database-between-elastic-pools-cli.md
Previously updated : 12/23/2021 Last updated : 01/05/2022 # Use Azure CLI to move a database in SQL Database in a SQL elastic pool
-This Azure CLI script example creates two elastic pools, moves a pooled database in SQL Database from one SQL elastic pool into another SQL elastic pool, and then moves the pooled database out of the SQL elastic pool to be a single database in SQL Database.
-If you choose to install and Azure CLI locally, this topic requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+This Azure CLI script example creates two elastic pools, moves a pooled database in SQL Database from one SQL elastic pool into another SQL elastic pool, and then moves the pooled database out of the SQL elastic pool to be a single database in SQL Database.
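As a hedged sketch of the two moves the script performs (placeholder names; the sample script uses its own generated variables):

```azurecli
# Move the database into a second elastic pool, then out of any pool by assigning a standalone compute size.
az sql db update --resource-group myResourceGroup --server myServer --name mySampleDatabase --elastic-pool secondPool
az sql db update --resource-group myResourceGroup --server myServer --name mySampleDatabase --service-objective S0
```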
-> [!IMPORTANT]
-> When running Bash on Windows, run this script from within a Docker container.
## Sample script
-### Launch Azure Cloud Shell
-
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
-
-When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment, Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press **Enter** to run it.
-
-### Sign in to Azure
-
-Cloud Shell is automatically authenticated under the initial account signed-in with. Use the following script to sign in using a different subscription, replacing `<Subscription ID>` with your Azure Subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Run the script

:::code language="azurecli" source="~/azure_cli_scripts/sql-database/move-database-between-pools/move-database-between-pools.sh" range="4-39":::
-### Clean up resources
+## Clean up resources
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli
az group delete --name $resourceGroup
```
azure-sql Restore Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/restore-database-cli.md
Previously updated : 12/23/2021 Last updated : 01/05/2022 # Use CLI to restore a single database in Azure SQL Database to an earlier point in time
-This Azure CLI example restores a single database in Azure SQL Database to a specific point in time.
-If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+This Azure CLI example restores a single database in Azure SQL Database to a specific point in time.
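A minimal hedged sketch of the core restore call (placeholder names and timestamp):

```azurecli
# Restore the source database to a new database as it existed at the given point in time (UTC).
az sql db restore --resource-group myResourceGroup --server myServer --name mySampleDatabase --dest-name restoredDatabase --time "2022-01-05T13:10:00Z"
```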
-> [!IMPORTANT]
-> When running Bash on Windows, run this script from within a Docker container.
## Sample script
-### Sign in to Azure
-
-For this script, use Azure CLI locally as it takes too long to run in Cloud Shell. Use the following script to sign in using a specific subscription. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Run the script

:::code language="azurecli" source="~/azure_cli_scripts/sql-database/restore-database/restore-database.sh" range="4-39":::
-### Clean up resources
+## Clean up resources
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli
az group delete --name $resourceGroup
```
azure-sql Scale Pool Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/scale-pool-cli.md
Previously updated : 12/23/2021 Last updated : 01/05/2022 # Use the Azure CLI to scale an elastic pool in Azure SQL Database
Last updated 12/23/2021
This Azure CLI script example creates elastic pools in Azure SQL Database, moves pooled databases, and changes elastic pool compute sizes.
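For orientation, a hedged sketch of the core commands (placeholder names and sizes):

```azurecli
# Create an elastic pool, move a database into it, then change the pool's compute size.
az sql elastic-pool create --resource-group myResourceGroup --server myServer --name myElasticPool --edition Standard --capacity 50
az sql db update --resource-group myResourceGroup --server myServer --name mySampleDatabase --elastic-pool myElasticPool
az sql elastic-pool update --resource-group myResourceGroup --server myServer --name myElasticPool --capacity 100
```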
-If you choose to install and use the Azure CLI locally, this topic requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
-
-> [!IMPORTANT]
-> When running Bash on Windows, run this script from within a Docker container.
## Sample script
-### Launch Azure Cloud Shell
-
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
-
-When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment, Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press **Enter** to run it.
-
-### Sign in to Azure
-
-Cloud Shell is automatically authenticated under the initial account signed-in with. Use the following script to sign in using a different subscription, replacing `<Subscription ID>` with your Azure Subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Run the script

:::code language="azurecli" source="~/azure_cli_scripts/sql-database/scale-pool/scale-pool.sh" range="4-35":::
-### Clean up resources
+## Clean up resources
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli
az group delete --name $resourceGroup
```
azure-sql Setup Geodr Failover Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/setup-geodr-failover-database-cli.md
Previously updated : 12/23/2021 Last updated : 01/05/2022 # Use CLI to configure active geo-replication for a single database in Azure SQL Database
-This Azure CLI script example configures active geo-replication for a single database and fails it over to a secondary replica of the database.
-If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+This Azure CLI script example configures active geo-replication for a single database and fails it over to a secondary replica of the database.
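A hedged sketch of the two core steps (placeholder server and resource group names):

```azurecli
# Create a readable secondary on another server, then fail over by promoting the secondary to primary.
az sql db replica create --resource-group myResourceGroup --server myPrimaryServer --name mySampleDatabase --partner-resource-group mySecondaryRG --partner-server mySecondaryServer
az sql db replica set-primary --resource-group mySecondaryRG --server mySecondaryServer --name mySampleDatabase
```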
-> [!IMPORTANT]
-> When running Bash on Windows, run this script from within a Docker container.
## Sample script
-### Launch Azure Cloud Shell
-
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
-
-When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment, Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press **Enter** to run it.
-
-### Sign in to Azure
-
-Cloud Shell is automatically authenticated under the initial account signed-in with. Use the following script to sign in using a different subscription, replacing `<Subscription ID>` with your Azure Subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Run the script

:::code language="azurecli" source="~/azure_cli_scripts/sql-database/setup-geodr-and-failover/setup-geodr-and-failover-single-database.sh" range="4-46":::
-### Clean up resources
+## Clean up resources
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli
az group delete --name $resourceGroup
```
azure-sql Setup Geodr Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/setup-geodr-failover-group-cli.md
Previously updated : 12/23/2021 Last updated : 01/05/2022 # Use CLI to configure a failover group for a group of databases in Azure SQL Database
-This Azure CLI script example configures a failover group for a group of databases in Azure SQL Database and fails it over to a secondary Azure SQL Database.
-If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-
-> [!IMPORTANT]
-> When running Bash on Windows, run this script from within a Docker container.
## Sample script
-### Launch Azure Cloud Shell
-
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
-
-When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment, Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press **Enter** to run it.
-
-### Sign in to Azure
-
-Cloud Shell is automatically authenticated under the initial account signed-in with. Use the following script to sign in using a different subscription, replacing `<Subscription ID>` with your Azure Subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Run the script

:::code language="azurecli" source="~/azure_cli_scripts/sql-database/setup-geodr-and-failover/setup-geodr-and-failover-database-failover-group.sh" range="4-45":::
-### Clean up resources
+## Clean up resources
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli
az group delete --name $failoverResourceGroup -y
```
azure-sql Setup Geodr Failover Pool Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/setup-geodr-failover-pool-cli.md
Previously updated : 12/23/2021 Last updated : 01/05/2022 # Use CLI to configure active geo-replication for a pooled database in Azure SQL Database
-This Azure CLI script example configures active geo-replication for a pooled database in Azure SQL Database and fails it over to the secondary replica of the database.
-If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+This Azure CLI script example configures active geo-replication for a pooled database in Azure SQL Database and fails it over to the secondary replica of the database.
-> [!IMPORTANT]
-> When running Bash on Windows, run this script from within a Docker container.
## Sample script
-### Launch Azure Cloud Shell
-
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
-
-When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment, Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press **Enter** to run it.
-
-### Sign in to Azure
-
-Cloud Shell is automatically authenticated under the initial account signed-in with. Use the following script to sign in using a different subscription, replacing `<Subscription ID>` with your Azure Subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Run the script

:::code language="azurecli" source="~/azure_cli_scripts/sql-database/setup-geodr-and-failover/setup-geodr-and-failover-elastic-pool.sh" range="4-47":::
-### Clean up resources
+## Clean up resources
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli
az group delete --name $resourceGroup
```
azure-sql Single Database Create Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/single-database-create-quickstart.md
Previously updated : 12/09/2021 Last updated : 01/05/2022 # Quickstart: Create an Azure SQL Database single database
To create a single database in the Azure portal, this quickstart starts at the A
# [Azure CLI](#tab/azure-cli)
-You can create an Azure resource group, server, and single database using the Azure command-line interface (Azure CLI). If you don't want to use the Azure Cloud Shell, [install Azure CLI](/cli/azure/install-azure-cli) on your computer.
-
-### Launch Azure Cloud Shell
-
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
-
-When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment, Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press **Enter** to run it.
-
-### Sign in to Azure
-
-Cloud Shell is automatically authenticated under the initial account signed-in with. Use the following script to sign in using a different subscription, replacing `<Subscription ID>` with your Azure Subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
+You can create an Azure resource group, server, and single database using the Azure command-line interface (Azure CLI).
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Set parameter values
You can create an Azure resource group, server, and single database using the Az
The following Azure CLI code blocks create a resource group, server, single database, and server-level IP firewall rule for access to the server. Make sure to record the generated resource group and server names, so you can manage these resources later.
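As a hedged outline of those blocks (every value below is a placeholder; the quickstart's own code generates unique names and records them):

```azurecli
# Create a resource group, a logical server, a single database, and a client IP firewall rule.
az group create --name myResourceGroup --location eastus
az sql server create --resource-group myResourceGroup --name myUniqueServerName --location eastus --admin-user azureuser --admin-password <strong-password>
az sql db create --resource-group myResourceGroup --server myUniqueServerName --name mySampleDatabase --edition GeneralPurpose --family Gen5 --capacity 2
az sql server firewall-rule create --resource-group myResourceGroup --server myUniqueServerName --name AllowMyClientIp --start-ip-address <your-ip> --end-ip-address <your-ip>
```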
-### Launch Azure Cloud Shell
-
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
-
-When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment, Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press **Enter** to run it.
-
-### Sign in to Azure
-
-Cloud Shell is automatically authenticated under the initial account signed-in with. Use the following script to sign in using a different subscription, replacing `<Subscription ID>` with your Azure Subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Set parameter values
To delete **myResourceGroup** and all its resources using the Azure portal:
# [Azure CLI](#tab/azure-cli)
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
+Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/group#az_group_delete) command - unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli
az group delete --name $resourceGroup
```
# [Azure CLI (sql up)](#tab/azure-cli-sql-up)
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli
az group delete --name $resourceGroup
```
azure-sql Quickstart Content Reference Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/quickstart-content-reference-guide.md
Last updated 07/11/2019 # Getting started with Azure SQL Managed Instance+ [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)] [Azure SQL Managed Instance](sql-managed-instance-paas-overview.md) creates a database with near 100% compatibility with the latest SQL Server (Enterprise Edition) database engine, providing a native [virtual network (VNet)](../../virtual-network/virtual-networks-overview.md) implementation that addresses common security concerns, and a [business model](https://azure.microsoft.com/pricing/details/sql-database/) favorable for existing SQL Server customers.
azure-sql Create Configure Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/scripts/create-configure-managed-instance-cli.md
Previously updated : 12/23/2021 Last updated : 01/05/2022 # Use CLI to create an Azure SQL Managed Instance + This Azure CLI script example creates an Azure SQL Managed Instance in a dedicated subnet within a new virtual network. It also configures a route table and a network security group for the virtual network. Once the script has been successfully run, the managed instance can be accessed from within the virtual network or from an on-premises environment. See [Configure Azure VM to connect to an Azure SQL Managed Instance](../../../azure-sql/managed-instance/connect-vm-instance-configure.md) and [Configure a point-to-site connection to an Azure SQL Managed Instance from on-premises](../../../azure-sql/managed-instance/point-to-site-p2s-configure.md). > [!IMPORTANT] > For limitations, see [supported regions](../../../azure-sql/managed-instance/resource-limits.md#supported-regions) and [supported subscription types](../../../azure-sql/managed-instance/resource-limits.md#supported-subscription-types).
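For orientation only, a heavily hedged sketch of the instance-creation step (the full sample also creates the virtual network, subnet delegation, route table, and network security group; all names and the subnet ID below are placeholders):

```azurecli
# Create a managed instance in an existing subnet delegated to Microsoft.Sql/managedInstances.
az sql mi create --resource-group myResourceGroup --name myManagedInstance --location eastus --admin-user azureuser --admin-password <strong-password> --subnet "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet/subnets/ManagedInstanceSubnet"
```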
-If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-
-> [!IMPORTANT]
-> When running Bash on Windows, run this script from within a Docker container.
## Sample script
-### Sign in to Azure
-
-For this script, use Azure CLI locally as it takes too long to run in Cloud Shell. Use the following script to sign in using a specific subscription. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Run the script

:::code language="azurecli" source="~/azure_cli_scripts/sql-database/managed-instance/create-managed-instance.sh" range="4-51":::
-### Clean up resources
+## Clean up resources
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli
az group delete --name $resourceGroup
```
azure-sql Restore Geo Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/scripts/restore-geo-backup-cli.md
Previously updated : 12/23/2021 Last updated : 01/05/2022 # Use CLI to restore a Managed Instance database to another geo-region
-This Azure CLI script example restores an Azure SQL Managed Instance database from a remote geo-region (geo-restore) to a point in time.
-If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+This Azure CLI script example restores an Azure SQL Managed Instance database from a remote geo-region (geo-restore) to a point in time.
-> [!IMPORTANT]
-> When running Bash on Windows, run this script from within a Docker container.
+This sample requires an existing pair of managed instances. See [Use Azure CLI to create an Azure SQL Managed Instance](create-configure-managed-instance-cli.md) to create a pair of managed instances in different regions.
-## Prerequisites
-
-An existing pair of managed instances, see [Use Azure CLI to create an Azure SQL Managed Instance](create-configure-managed-instance-cli.md) to create a pair of managed instances in different regions.
## Sample script
-### Sign in to Azure
-
-For this script, use Azure CLI locally as it takes too long to run in Cloud Shell. Use the following script to sign in using a specific subscription. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Run the script

:::code language="azurecli" source="~/azure_cli_scripts/sql-database/sql-managed-instance-restore-geo-backup/restore-geo-backup-cli.sh" range="4-28":::
-### Clean up resources
+## Clean up resources
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli
az group delete --name $resourceGroup
```
azure-sql Transparent Data Encryption Byok Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/scripts/transparent-data-encryption-byok-sql-managed-instance-cli.md
Previously updated : 12/23/2021 Last updated : 01/05/2022 # Manage Transparent Data Encryption in a Managed Instance using your own key from Azure Key Vault
-This Azure CLI script example configures Transparent Data Encryption (TDE) with customer-managed key for Azure SQL Managed Instance, using a key from Azure Key Vault. This is often referred to as a Bring Your Own Key scenario for TDE. To learn more about the TDE with customer-managed key, see [TDE Bring Your Own Key to Azure SQL](../../../azure-sql/database/transparent-data-encryption-byok-overview.md).
-
-If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-> [!IMPORTANT]
-> When running Bash on Windows, run this script from within a Docker container.
+This Azure CLI script example configures Transparent Data Encryption (TDE) with a customer-managed key for Azure SQL Managed Instance, using a key from Azure Key Vault. This is often referred to as a Bring Your Own Key scenario for TDE. To learn more about TDE with a customer-managed key, see [TDE Bring Your Own Key to Azure SQL](../../../azure-sql/database/transparent-data-encryption-byok-overview.md).
-## Prerequisites
+This sample requires an existing Managed Instance. See [Use Azure CLI to create an Azure SQL Managed Instance](create-configure-managed-instance-cli.md).
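As a heavily hedged sketch of the final step the script performs (it assumes the Key Vault key already exists and the instance's identity has been granted the required key permissions; all names are placeholders):

```azurecli
# Point the managed instance's TDE protector at a customer-managed key in Azure Key Vault.
az sql mi tde-key set --resource-group myResourceGroup --managed-instance myManagedInstance --server-key-type AzureKeyVault --kid "https://mykeyvault.vault.azure.net/keys/myKey/<key-version>"
```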
-An existing Managed Instance, see [Use Azure CLI to create an Azure SQL Managed Instance](create-configure-managed-instance-cli.md).
## Sample script
-### Sign in to Azure
-
-For this script, use Azure CLI locally as it takes too long to run in Cloud Shell. Use the following script to sign in using a specific subscription. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-
-```azurecli-interactive
-subscription="<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Run the script

:::code language="azurecli" source="~/azure_cli_scripts/sql-database/transparent-data-encryption/setup-tde-byok-sqlmi.sh" range="4-41":::
-### Clean up resources
+## Clean up resources
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az_vm_extension_set) command- unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
```azurecli
az group delete --name $resourceGroup
```
azure-sql Application Patterns Development Strategies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/application-patterns-development-strategies.md
In n-tier hybrid application pattern, you can implement the following workflow i
* With a secure point-to-site connection, you can establish network connectivity between your virtual network in Azure and your individual computers running anywhere. It is mostly recommended for development and test purposes. For information on how to connect to SQL Server in Azure, see [Connect to a SQL Server virtual machine on Azure](ways-to-connect-to-sql.md).
-4. Set up scheduled jobs and alerts that back up on-premises data in a virtual machine disk in Azure. For more information, see [SQL Server Backup and Restore with Azure Blob storage service](/sql/relational-databases/backup-restore/sql-server-backup-and-restore-with-microsoft-azure-blob-storage-service) and [Backup and Restore for SQL Server on Azure Virtual Machines](../../../azure-sql/virtual-machines/windows/backup-restore.md).
+4. Set up scheduled jobs and alerts that back up on-premises data in a virtual machine disk in Azure. For more information, see [SQL Server Backup and Restore with Azure Blob Storage](/sql/relational-databases/backup-restore/sql-server-backup-and-restore-with-microsoft-azure-blob-storage-service) and [Backup and Restore for SQL Server on Azure Virtual Machines](../../../azure-sql/virtual-machines/windows/backup-restore.md).
5. Depending on your application's needs, you can implement one of the following three common scenarios:
   1. You can keep your web server, application server, and insensitive data in a database server in Azure whereas you keep the sensitive data on-premises.
The following table provides a comparison of traditional web development with Az
For more information on choosing between these programming methods, see [Azure Web Apps, Cloud Services, and VMs: When to use which](/azure/architecture/guide/technology-choices/compute-decision-tree). ## Next steps
-For more information on running SQL Server on Azure Virtual Machines, see [SQL Server on Azure Virtual Machines Overview](sql-server-on-azure-vm-iaas-what-is-overview.md).
+For more information on running SQL Server on Azure Virtual Machines, see [SQL Server on Azure Virtual Machines Overview](sql-server-on-azure-vm-iaas-what-is-overview.md).
azure-sql Sql Assessment For Sql Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-assessment-for-sql-vm.md
There are three charts on the **Trends** page to show changes over time: all iss
If there are multiple runs in a single day, only the latest run is included in the graphs on the **Trends** page.
-## Known issues
+## Known Issues
You may encounter some of the following known issues when using SQL assessments.
Refer to the [deployment history](../../../azure-resource-manager/templates/depl
### Failed assessments
-**Assessment run failed** -
-This indicates that the SQL IaaS extension encountered a problem while running assessment. The detailed error message will be available in the extension log inside the VM at `C:\WindowsAzure\Logs\Plugins\Microsoft.SqlServer.Management.SqlIaaSAgent\2.0.X.Y` where `2.0.X.Y `is the latest version folder present.
-
-**Upload result to Log Analytics workspace failed** -
-This indicates the Microsoft Monitoring Agent (MMA) was unable to upload the results in a time-bound manner. Ensure the MMA extension is [provisioned correctly](../../../azure-monitor/visualize/vmext-troubleshoot.md) and refer to the Connectivity issues and Data collection issues listed in this [troubleshooting guide](../../../azure-monitor/agents/agent-windows-troubleshoot.md).
+If the assessment or the upload of its results fails, the status of that run indicates the failure. Select the status to open a context pane where you can see the details about the failure and possible ways to remediate the issue.
>[!TIP]
>If you have enforced TLS 1.0 or higher in Windows and disabled older SSL protocols as described [here](/troubleshoot/windows-server/windows-security/restrict-cryptographic-algorithms-protocols-schannel#schannel-specific-registry-keys), then you must also ensure that .NET Framework is [configured](../../../azure-monitor/agents/agent-windows.md#configure-agent-to-use-tls-12) to use strong cryptography.
-**Result expired due to Log Analytics workspace data retention** -
-This indicates that the results are no longer retained in the Log Analytics workspace based on its retention policy. You can [change the retention period](../../../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period) for the workspace
-
## Next steps

- To register your SQL Server VM with the SQL Server IaaS extension to SQL Server on Azure VMs, see the articles for [Automatic installation](sql-agent-extension-automatic-registration-all-vms.md), [Single VMs](sql-agent-extension-manually-register-single-vm.md), or [VMs in bulk](sql-agent-extension-manually-register-vms-bulk.md).
azure-video-analyzer Audio Effects Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/audio-effects-detection.md
Title: Audio effects detection
-description: Audio Effects Detection is one of Azure Video Analyzer for Media AI capabilities. It can detects a various of acoustics events and classify them into different acoustic categories (such as Gunshot, Screaming, Crowd Reaction and more).
-
+description: Audio effects detection is one of the Azure Video Analyzer for Media AI capabilities. It can detect various acoustic events and classify them into different acoustic categories (for example, gunshot, screaming, crowd reaction, and more).
Previously updated : 05/12/2021 Last updated : 01/04/2022
-# Audio effects detection (preview)
+# Audio effects detection
-**Audio Effects Detection** is one of Azure Video Analyzer for Media AI capabilities. It can detects a various of acoustics events and classify them into different acoustic categories (such as Gunshot, Screaming, Crowd Reaction and more).
-
-Audio Events Detection can be used in many domains. Two examples are:
+**Audio effects detection** is one of the Azure Video Analyzer for Media AI capabilities. It can detect various acoustic events and classify them into different acoustic categories (such as dog barking, crowd reactions, laughter, and more).
-* Using Audio Effects Detection is the domain of **Public Safety & Justice**. Audio Effects Detection can detect and classify Gunshots, Explosion and Glass-Shattering. Therefore, it can be implemented in a smart-city system or in other public environments that include cameras and microphones. Offering a fast and accurate detection of violence incidents.
-* In the **Media & Entertainment** domain, companies with a large set of video archives can easily improve their accessibility scenarios, by enhancing their video transcription with non-speech effects to provide more context for people who are hard of hearing.
+Some scenarios where this feature is useful:
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/audio-effects-detection/audio-effects.jpg" alt-text="Audio Effects image":::
-<br/>*Example of the Video Analyzer for Media Audio Effects Detection output*
+- Companies with a large set of video archives can easily improve accessibility with audio effects detection. The feature provides more context for persons who are hard of hearing, and enhances video transcription with non-speech effects.
+- In the Media & Entertainment domain, the detection feature can improve efficiency when creating raw data for content creators. Important moments in promos and trailers (such as laughter, crowd reactions, gunshot, or explosion) can be identified by using **audio effects detection**.
+- In the Public Safety & Justice domain, the feature can detect and classify gunshots, explosions, and glass shattering. It can be implemented in a smart-city system or in other public environments that include cameras and microphones to offer fast and accurate detection of violence incidents.
## Supported audio categories
-**Audio Effect Detection** can detect and classify 8 different categories. In the next table, you can find the different categories split in to the different VI presets, divided to **Standard** and **Advanced**. For more information, see [pricing](https://azure.microsoft.com/pricing/details/media-services/).
+**Audio effects detection** can detect and classify 7 different categories. In the following table, you can find the different categories split into **Standard** and **Advanced** presets. For more information, see [pricing](https://azure.microsoft.com/pricing/details/media-services/).
|Indexing type |Standard indexing| Advanced indexing|
|---|---|---|
|**Preset Name** |**"Audio Only"** <br/>**"Video + Audio"** |**"Advanced Audio"**<br/> **"Advanced Video + Audio"**|
|**Appear in insights pane**|| V|
-|Crowd Reaction |V| V|
+| Crowd reactions || V|
| Silence| V| V|
-| Gunshot ||V |
+| Gunshot or explosion ||V |
| Breaking glass ||V|
-| Alarm ringing|| V |
-| Siren Wailing|| V |
+| Alarm or siren|| V |
| Laughter|| V |
-| Dog Barking|| V|
+| Dog barking|| V|
## Result formats
The `name` parameter will be presented in the language in which the JSON was ind
```json
audioEffects: [{
    id: 0,
- type: "Gunshot",
+ type: "Gunshot or explosion",
name: "Gunshot", instances: [{ confidence: 0.649,
audioEffects: [{
],
```
-## How to index Audio Effects
+## How to index audio effects
-In order to set the index process to include the detection of Audio Effects, the user should chose one of the Advanced presets under "Video + audio indexing" menu as can be seen below.
+In order to set the indexing process to include the detection of audio effects, choose one of the **Advanced** presets under the **Video + audio indexing** menu, as shown below.
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/audio-effects-detection/index-audio-effect.png" alt-text="Index Audio Effects image"::: ## Closed Caption
-When Audio Effects are retrieved in the closed caption files, they will be retrieved in square brackets the following structure:
+When audio effects are retrieved in the closed caption files, they are returned in square brackets with the following structure:
|Type| Example|
|---|---|
-|SRT |00:00:00,000 00:00:03,671<br/>[Gunshot]|
-|VTT |00:00:00.000 00:00:03.671<br/>[Gunshot]|
-|TTML|Confidence: 0.9047 <br/> `<p begin="00:00:00.000" end="00:00:03.671">[Gunshot]</p>`|
-|TXT |[Gunshot]|
-|CSV |0.9047,00:00:00.000,00:00:03.671, [Gunshot]|
+|SRT |00:00:00,000 00:00:03,671<br/>[Gunshot or explosion]|
+|VTT |00:00:00.000 00:00:03.671<br/>[Gunshot or explosion]|
+|TTML|Confidence: 0.9047 <br/> `<p begin="00:00:00.000" end="00:00:03.671">[Gunshot or explosion]</p>`|
+|TXT |[Gunshot or explosion]|
+|CSV |0.9047,00:00:00.000,00:00:03.671, [Gunshot or explosion]|
Audio effects in the closed caption files are retrieved with the following logic:
Audio Effects in closed captions file will be retrieved with the following logic
## Adding audio effects in closed caption files
-Audio effects can be added to the closed captions files supported by Azure Video Analyzer via the [Get video captions API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Captions) by choosing true in the `includeAudioEffects` parameter or via the video.ai portal experience by selecting **Download** -> **Closed Captions** -> **Include Audio Effects**.
+Audio effects can be added to the closed caption files supported by Azure Video Analyzer for Media via the [Get video captions API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Captions) by setting the `includeAudioEffects` parameter to true, or via the video.ai portal experience by selecting **Download** -> **Closed Captions** -> **Include Audio Effects**.
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/audio-effects-detection/close-caption.jpg" alt-text="Audio Effects in CC":::
Audio effects can be added to the closed captions files supported by Azure Video
## Limitations and assumptions
-* The model works on non-speech segments only.
-* The model is currently working for a single category at a time. For example, a crying and speech on the background or gunshot + explosion are not supported for now.
-* The model is currently not supporting cases when there is a loud music on background.
-* Minimal segment length ΓÇô 2 seconds.
+* The audio effects are detected in non-speech segments only.
+* The model is optimized for cases where there is no loud background music.
+* Low-quality audio may impact the detection results.
+* The minimal non-speech section duration is 2 seconds.
+* Music that is characterized by repetitive and/or linearly scanned frequencies can be mistakenly classified as Alarm or siren.
+* The model is currently optimized for natural and non-synthetic gunshot and explosion sounds.
+* Door knocks and door slams can sometimes be mistakenly labeled as gunshot and explosion.
+* Prolonged shouting and sounds of human physical effort can sometimes be mistakenly detected.
+* Groups of people laughing can sometimes be classified as both Laughter and Crowd reactions.
## Next steps
azure-video-analyzer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/release-notes.md
Title: Azure Video Analyzer for Media (formerly Video Indexer) release notes | M
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Analyzer for Media (formerly Video Indexer). Previously updated : 01/03/2022 Last updated : 01/04/2022
To stay up-to-date with the most recent Azure Video Analyzer for Media (former V
* Bug fixes * Deprecated functionality
+## January 2022
+
+### Improved audio effects detection
+
+The audio effects detection capability was improved to have a better detection rate over the following classes:
+
+* Crowd reactions (cheering, clapping, and booing)
+* Gunshot or explosion
+* Laughter
+
+For more information, see [Audio effects detection](audio-effects-detection.md).
+
## December 2021

### The projects feature is now GA
azure-video-analyzer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview.md
Title: What is Azure Video Analyzer for Media (formerly Video Indexer)? description: This article gives an overview of the Azure Video Analyzer for Media (formerly Video Indexer) service. Previously updated : 12/10/2021 Last updated : 01/04/2022
The following list shows the insights you can retrieve from your videos using Vi
* **Audio effects** (preview): Detects the following audio effects in the non-speech segments of the content: Gunshot, Glass shatter, Alarm, Siren, Explosion, Dog Bark, Screaming, Laughter, Crowd reactions (cheering, clapping, and booing) and Silence. Note: the full set of events is available only when choosing 'Advanced Audio Analysis' in upload preset, otherwise only 'Silence' and 'Crowd reaction' will be available.
* **Emotion detection**: Identifies emotions based on speech (what's being said) and voice tonality (how it's being said). The emotion could be joy, sadness, anger, or fear.
* **Translation**: Creates translations of the audio transcript to 54 different languages.
-* **Audio effects detection** (preview): Detects various acoustics events and classifies them into different acoustic categories (such as Gunshot, Screaming, Crowd Reaction and more). The detected acoustic events are in the closed captions file. The file can be downloaded from the Video Analyzer for Media portal. For more information, see [Audio effects detection](audio-effects-detection.md).
+* **Audio effects detection**: Detects the following audio effects in the non-speech segments of the content: alarm or siren, dog barking, crowd reactions (cheering, clapping, and booing), gunshot or explosion, laughter, breaking glass, and silence.
+
+ The detected acoustic events are in the closed captions file. The file can be downloaded from the Video Analyzer for Media portal. For more information, see [Audio effects detection](audio-effects-detection.md).
+
+ > [!NOTE]
+ > The full set of events is available only when you choose **Advanced Audio Analysis** in the upload preset when uploading a file. By default, only silence is detected.
### Audio and video insights (multi-channels)
backup Backup Azure Vms Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-vms-automation.md
Title: Back up and recover Azure VMs with PowerShell description: Describes how to back up and recover Azure VMs using Azure Backup with PowerShell Previously updated : 09/11/2019 Last updated : 01/04/2022 +++ # Back up and restore Azure VMs with PowerShell
Enable-AzRecoveryServicesBackupProtection -Policy $pol -Name "V2VM" -ResourceGro
> If you're using the Azure Government cloud, then use the value `ff281ffe-705c-4f53-9f37-a40e6f2c68f3` for the parameter **ServicePrincipalName** in the [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy) cmdlet.
>
-If you want to selectively backup few disks and exclude others as mentioned in [these scenarios](selective-disk-backup-restore.md#scenarios), you can configure protection and backup only the relevant disks as documented [here](selective-disk-backup-restore.md#enable-backup-with-powershell).
+If you want to selectively back up a few disks and exclude others as mentioned in [these scenarios](selective-disk-backup-restore.md#scenarios), you can configure protection and back up only the relevant disks as documented [here](selective-disk-backup-restore.md#enable-backup-with-powershell).
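As a hedged sketch reusing this article's $pol and $targetVault variables (the LUN list is illustrative):

```powershell
# Protect the OS disk plus only the data disks at LUNs 0 and 1.
Enable-AzRecoveryServicesBackupProtection -Policy $pol -Name "V2VM" -ResourceGroupName "RGName1" -InclusionDisksList @("0","1") -VaultId $targetVault.ID
```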
## Monitoring a backup job
Set-AzureRmRecoveryServicesBackupProtectionPolicy -policy $bkpPol
### Exclude disks for a protected VM
-Azure VM backup provides a capability to selectively exclude or include disks which is helpful in [these scenarios](selective-disk-backup-restore.md#scenarios). If the virtual machine is already protected by Azure VM backup and if all disks are backed up, then you can modify the protection to selectively include or exclude disks as mentioned [here](selective-disk-backup-restore.md#modify-protection-for-already-backed-up-vms-with-powershell).
+Azure VM backup provides the capability to selectively exclude or include disks, which is helpful in [these scenarios](selective-disk-backup-restore.md#scenarios). If the virtual machine is already protected by Azure VM backup and if all disks are backed up, then you can modify the protection to selectively include or exclude disks as mentioned [here](selective-disk-backup-restore.md#modify-protection-for-already-backed-up-vms-with-powershell).
### Trigger a backup
$details = Get-AzRecoveryServicesBackupJobDetail -Job $restorejob -VaultId $targ
Azure Backup also allows you to use managed identity (MSI) during the restore operation to access storage accounts where disks have to be restored to. This option is currently supported only for managed disk restore.
-If you wish to use the vault's system assigned managed identity to restore disks, pass an additional flag ***-UseSystemAssignedIdentity*** to the Restore-AzRecoveryServicesBackupItem command. If you wish to use a user-assigned managed identity, pass a parameter ***-UserAssignedIdentityId*** with the ARM id of the vault's managed identity as the value of the parameter. Refer to [this article](encryption-at-rest-with-cmk.md#enable-managed-identity-for-your-recovery-services-vault) to learn how to enable managed identity for your vaults.
+If you wish to use the vault's system assigned managed identity to restore disks, pass an additional flag ***-UseSystemAssignedIdentity*** to the Restore-AzRecoveryServicesBackupItem command. If you wish to use a user-assigned managed identity, pass a parameter ***-UserAssignedIdentityId*** with the Azure Resource Manager ID of the vault's managed identity as the value of the parameter. Refer to [this article](encryption-at-rest-with-cmk.md#enable-managed-identity-for-your-recovery-services-vault) to learn how to enable managed identity for your vaults.
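For example, a minimal sketch assuming the variables defined earlier in this article ($rp, $targetVault) and hypothetical target names:

```powershell
# Restore managed disks using the vault's system-assigned managed identity.
Restore-AzRecoveryServicesBackupItem -RecoveryPoint $rp[0] -TargetResourceGroupName "DestRG" -StorageAccountName "DestAccount" -StorageAccountResourceGroupName "DestRG" -VaultId $targetVault.ID -UseSystemAssignedIdentity
```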
#### Restore selective disks
If cross-region restore is enabled on the vault with which you've protected your
V2VM CrossRegionRestore InProgress 2/8/2021 4:24:57 PM 2d071b07-8f7c-4368-bc39-98c7fb2983f7 ```
+#### Cross-zonal restore
+
+You can restore [Azure zone pinned VMs](../virtual-machines/windows/create-portal-availability-zone.md) in any of the [availability zones](../availability-zones/az-overview.md) of the same region.
+
+To restore a VM to another zone, specify the `TargetZoneNumber` parameter in the [Restore-AzRecoveryServicesBackupItem](/powershell/module/az.recoveryservices/restore-azrecoveryservicesbackupitem) cmdlet.
+
+```powershell
+$restorejob = Restore-AzRecoveryServicesBackupItem -RecoveryPoint $rp[0] -StorageAccountName "DestAccount" -StorageAccountResourceGroupName "DestRG" -VaultId $targetVault.ID -TargetZoneNumber 3
+```
+The output will be similar to the following example:
+
+```output
+WorkloadName Operation Status StartTime EndTime JobID
+------------ --------- ------ --------- ------- -----
+zonevmeus2 Restore InProgress 1/3/2022 10:27:20 AM b2298...
+```
+
+Cross-zonal restore is supported only in scenarios where:
+
+- The source VM is zone pinned and is NOT encrypted.
+- The recovery point is present in the vault tier only. Snapshot-only recovery points, or recovery points in both the snapshot and vault tiers, aren't supported.
+- The recovery option is to create a new VM or restore disks. The replace disks option replaces source data; therefore, the availability zone option is not applicable.
+- Creating VMs/disks in the same region when the vault's storage redundancy is ZRS. Note that it doesn't work if the vault's storage redundancy is GRS, even though the source VM is zone pinned.
+- Creating VMs/disks in the paired region when the vault's storage redundancy is enabled for Cross-Region Restore and if the paired region supports zones.
+
## Replace disks in Azure VM

To replace the disks and configuration information, perform the following steps:
The template isn't directly accessible since it's under a customer's storage acc
### Create a VM using the config file
-The following section lists steps necessary to create a VM using "VMConfig" file.
+The following section lists the steps necessary to create a VM using the _VMConfig_ file.
> [!NOTE]
> It's highly recommended to use the deployment template detailed above to create a VM. This section (Points 1-6) will be deprecated soon.
backup Tutorial Restore Disk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/tutorial-restore-disk.md
Title: Tutorial - Restore a VM with Azure CLI description: Learn how to restore a disk and create a recover a VM in Azure with Backup and Recovery Services. Previously updated : 01/31/2019 Last updated : 01/05/2022 +++ # Restore a VM with Azure CLI
If the backed-up VM has managed disks and if the intent is to restore managed di
This will restore managed disks as unmanaged disks to the given storage account and won't leverage the 'instant' restore functionality. In future versions of the CLI, it will be mandatory to provide either the **target-resource-group** parameter or the **restore-as-unmanaged-disk** parameter.
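For example, a hedged sketch with this tutorial's placeholder names (pass `--restore-as-unmanaged-disk` instead of `--target-resource-group` to restore as unmanaged disks):

```azurecli-interactive
az backup restore restore-disks \
    --resource-group myResourceGroup \
    --vault-name myRecoveryServicesVault \
    --container-name myVM \
    --item-name myVM \
    --storage-account myStorageAccount \
    --rp-name myRecoveryPointName \
    --target-resource-group targetRG
```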
+### Restore disks to secondary region
+
+The backup data replicates to the secondary region when you enable cross-region restore on the vault with which you've protected your VMs. You can use the backup data to perform a restore operation.
+
+To restore disks to the secondary region, use the `--use-secondary-region` flag in the [az backup restore restore-disks](/cli/azure/backup/restore#az_backup_restore_restore_disks) command. Ensure that you specify a target storage account that's located in the secondary region.
+
+```azurecli-interactive
+az backup restore restore-disks \
+ --resource-group myResourceGroup \
+ --vault-name myRecoveryServicesVault \
+ --container-name myVM \
+ --item-name myVM \
+ --storage-account targetStorageAccountID \
+ --rp-name myRecoveryPointName \
+ --target-resource-group targetRG \
+ --use-secondary-region
+```
+
+### Cross-zonal restore
+
+You can restore [Azure zone pinned VMs](../virtual-machines/windows/create-portal-availability-zone.md) in any of the [availability zones](../availability-zones/az-overview.md) of the same region.
+
+To restore a VM to another zone, specify the `--target-zone` parameter in the [az backup restore restore-disks](/cli/azure/backup/restore#az_backup_restore_restore_disks) command.
+
+```azurecli-interactive
+az backup restore restore-disks \
+ --resource-group myResourceGroup \
+ --vault-name myRecoveryServicesVault \
+ --container-name myVM \
+ --item-name myVM \
+ --storage-account targetStorageAccountID \
+ --rp-name myRecoveryPointName \
+ --target-resource-group targetRG \
+ --target-zone 3
+```
+
+Cross-zonal restore is supported only in scenarios where:
+
+- The source VM is zone pinned and is NOT encrypted.
+- The recovery point is present in the vault tier only. Snapshot-only recovery points, or recovery points in both the snapshot and vault tiers, aren't supported.
+- The recovery option is to create a new VM or restore disks. The replace disks option replaces source data; therefore, the availability zone option is not applicable.
+- Creating VMs/disks in the same region when the vault's storage redundancy is ZRS. Note that it doesn't work if the vault's storage redundancy is GRS, even though the source VM is zone pinned.
+- Creating VMs/disks in the paired region when the vault's storage redundancy is enabled for Cross-Region Restore and if the paired region supports zones.
+
### Unmanaged disks restore

If the backed-up VM has unmanaged disks and if the intent is to restore disks from the recovery point, you first provide an Azure storage account. This storage account is used to store the VM configuration and the deployment template that can be later used to deploy the VM from the restored disks. By default, the unmanaged disks will be restored to their original storage accounts. If you wish to restore all unmanaged disks to one single place, then the given storage account can also be used as a staging location for those disks too.
When the *Status* of the restore job reports *Completed*, the necessary informat
Azure Backup also allows you to use a managed identity (MSI) during the restore operation to access the storage accounts to which disks are restored. This option is currently supported only for managed disk restore.
-If you wish to use the vault's system assigned managed identity to restore disks, pass an additional flag ***--mi-system-assigned*** to the [az backup restore restore-disks](/cli/azure/backup/restore#az_backup_restore_restore_disks) command. If you wish to use a user-assigned managed identity, pass a parameter ***--mi-user-assigned*** with the ARM id of the vault's managed identity as the value of the parameter. Refer to [this article](encryption-at-rest-with-cmk.md#enable-managed-identity-for-your-recovery-services-vault) to learn how to enable managed identity for your vaults.
+If you wish to use the vault's system-assigned managed identity to restore disks, pass the additional flag ***--mi-system-assigned*** to the [az backup restore restore-disks](/cli/azure/backup/restore#az_backup_restore_restore_disks) command. If you wish to use a user-assigned managed identity, pass the parameter ***--mi-user-assigned*** with the Azure Resource Manager ID of the vault's managed identity as its value. Refer to [this article](encryption-at-rest-with-cmk.md#enable-managed-identity-for-your-recovery-services-vault) to learn how to enable managed identity for your vaults.
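For example, a restore that uses the vault's system-assigned identity to access the target storage account might look like the following sketch (same placeholder names as the earlier examples):

```azurecli-interactive
az backup restore restore-disks \
    --resource-group myResourceGroup \
    --vault-name myRecoveryServicesVault \
    --container-name myVM \
    --item-name myVM \
    --rp-name myRecoveryPointName \
    --storage-account targetStorageAccountID \
    --target-resource-group targetRG \
    --mi-system-assigned
```

For a user-assigned identity, replace the final flag with `--mi-user-assigned` followed by the Azure Resource Manager ID of the identity.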
## Create a VM from the restored disk
cloud-shell Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-shell/quickstart-powershell.md
Next time when you use PowerShell in Cloud Shell, the `helloworld.ps1` file will
## Use custom profile

You can customize your PowerShell environment by creating PowerShell profiles - `profile.ps1` (or `Microsoft.PowerShell_profile.ps1`).
-Save it under `$profile.CurrentUserAllHosts` (or `$profile.CurrentUserAllHosts`), so that it can be loaded in every PowerShell in Cloud Shell session.
+Save it under `$profile.CurrentUserAllHosts` (or `$profile.CurrentUserCurrentHost`), so that it can be loaded in every PowerShell session in Cloud Shell.
For how to create a profile, refer to [About Profiles][profile].
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
Below table lists out the prebuilt neural voices supported in each language. You
| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoyouNeural` | Child voice, optimized for story narrating |
| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunxiNeural` | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunyangNeural` | Optimized for news reading,<br /> multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunyeNeural` | Optimized for story narrating |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunyeNeural` | Optimized for story narrating,<br /> multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
| Chinese (Taiwanese Mandarin) | `zh-TW` | Female | `zh-TW-HsiaoChenNeural` | General |
| Chinese (Taiwanese Mandarin) | `zh-TW` | Female | `zh-TW-HsiaoYuNeural` | General |
| Chinese (Taiwanese Mandarin) | `zh-TW` | Male | `zh-TW-YunJheNeural` | General |
cognitive-services What Are Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/what-are-cognitive-services.md
keywords: cognitive services, cognitive intelligence, cognitive solutions, ai services, cognitive understanding, cognitive features Previously updated : 10/08/2021 Last updated : 01/05/2022
The catalog of cognitive services that provide cognitive understanding is catego
* Speech
* Language
* Decision
-* Search
The following sections in this article provide a list of services that are part of these four pillars.
The following sections in this article provide a list of services that are part
|[Content Moderator](./content-moderator/overview.md "Content Moderator")|Content Moderator provides monitoring for possible offensive, undesirable, and risky content. See [Content Moderator quickstart](./content-moderator/client-libraries.md) to get started with the service.|
|[Personalizer](./personalizer/index.yml "Personalizer")|Personalizer allows you to choose the best experience to show to your users, learning from their real-time behavior. See [Personalizer quickstart](./personalizer/quickstart-personalizer-sdk.md) to get started with the service.|
-## Search APIs
-
-> [!NOTE]
-> Looking for [Azure Cognitive Search](../search/index.yml)? Although it uses Cognitive Services for some tasks, it's a different search technology that supports other scenarios.
-
-|Service Name|Service Description|
-|:--|:|
-|[Bing News Search](/azure/cognitive-services/bing-news-search/ "Bing News Search")|Bing News Search returns a list of news articles determined to be relevant to the user's query.|
-|[Bing Video Search](/azure/cognitive-services/Bing-Video-Search/ "Bing Video Search")|Bing Video Search returns a list of videos determined to be relevant to the user's query.|
-|[Bing Web Search](./bing-web-search/index.yml "Bing Web Search")|Bing Web Search returns a list of search results determined to be relevant to the user's query.|
-|[Bing Autosuggest](/azure/cognitive-services/Bing-Autosuggest "Bing Autosuggest")|Bing Autosuggest allows you to send a partial search query term to Bing and get back a list of suggested queries.|
-|[Bing Custom Search](/azure/cognitive-services/bing-custom-search "Bing Custom Search")|Bing Custom Search allows you to create tailored search experiences for topics that you care about.|
-|[Bing Entity Search](/azure/cognitive-services/bing-entities-search/ "Bing Entity Search")|Bing Entity Search returns information about entities that Bing determines are relevant to a user's query.|
-|[Bing Image Search](/azure/cognitive-services/bing-image-search "Bing Image Search")|Bing Image Search returns a display of images determined to be relevant to the user's query.|
-|[Bing Visual Search](/azure/cognitive-services/bing-visual-search "Bing Visual Search")|Bing Visual Search returns insights about an image such as visually similar images, shopping sources for products found in the image, and related searches.|
-|[Bing Local Business Search](/azure/cognitive-services/bing-local-business-search/ "Bing Local Business Search")| Bing Local Business Search API enables your applications to find contact and location information about local businesses based on search queries.|
-|[Bing Spell Check](/azure/cognitive-services/bing-spell-check/ "Bing Spell Check")|Bing Spell Check allows you to perform contextual grammar and spell checking.|
- ## Get started with Cognitive Services Start by creating a Cognitive Services resource with hands-on quickstarts using the following methods:
communication-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/authentication.md
# Authenticate to Azure Communication Services
-Every client interaction with Azure Communication Services needs to be authenticated. In a typical architecture, see [client and server architecture](./client-and-server-architecture.md), *access keys* or *managed identities* are used for authentication.
+Every client interaction with Azure Communication Services needs to be authenticated. In a typical architecture (see [client and server architecture](./client-and-server-architecture.md)), *access keys* or *Azure AD authentication* are used for server-side authentication.
Another type of authentication uses *user access tokens* to authenticate against services that require user participation. For example, the chat or calling service utilizes *user access tokens* to allow users to be added in a thread and have conversations with each other.
The following table shows the Azure Communication Services SDKs and their authen
| SDK | Authentication option |
| -- | -- |
-| Identity | Access Key or Managed Identity |
-| SMS | Access Key or Managed Identity |
-| Phone Numbers | Access Key or Managed Identity |
+| Identity | Access Key or Azure AD authentication |
+| SMS | Access Key or Azure AD authentication |
+| Phone Numbers | Access Key or Azure AD authentication |
| Calling | User Access Token |
| Chat | User Access Token |
Since the access key is part of the connection string of your resource, authenti
If you wish to call Azure Communication Services APIs manually using an access key, you will need to sign the request. Signing the request is explained in detail in a [tutorial](../tutorials/hmac-header-tutorial.md).
-### Managed Identity
+### Azure AD authentication
-Managed Identities, provides superior security and ease of use over other authorization options. For example, by using Azure AD, you avoid having to store your account access key within your code, as you do with Access Key authorization. While you can continue to use Access Key authorization with communication services applications, Microsoft recommends moving to Azure AD where possible.
+The Azure platform provides role-based access control (Azure RBAC) to control access to resources. An Azure RBAC security principal represents a user, group, service principal, or managed identity that requests access to Azure resources. Azure AD authentication provides superior security and ease of use over other authorization options. For example, by using a managed identity, you avoid having to store your account access key within your code, as you do with Access Key authorization. While you can continue to use Access Key authorization with Communication Services applications, Microsoft recommends moving to Azure AD where possible.
-To set up a managed identity, [create a registered application from the Azure CLI](../quickstarts/identity/service-principal-from-cli.md). Then, the endpoint and credentials can be used to authenticate the SDKs. See examples of how [managed identity](../quickstarts/identity/service-principal.md) is used.
+To set up a service principal, [create a registered application from the Azure CLI](../quickstarts/identity/service-principal-from-cli.md). Then, the endpoint and credentials can be used to authenticate the SDKs. See examples of how a [service principal](../quickstarts/identity/service-principal.md) is used.
+
+Communication Services supports Azure AD authentication but does not support managed identity for Communication Services resources. You can find more details about managed identity support in the [Azure Active Directory documentation](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/services-support-managed-identities).
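As a hedged sketch of those CLI steps (the application name, role, and scope below are placeholders, and `Contributor` is only an example; assign the narrowest role your scenario needs):

```azurecli-interactive
# Create an app registration and service principal, capturing its appId
appId=$(az ad sp create-for-rbac --name MyAcsServerApp --query appId --output tsv)

# Grant the service principal access to the Communication Services resource
az role assignment create \
    --assignee "$appId" \
    --role "Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Communication/communicationServices/<resource-name>"
```

The SDKs can then pick up the credentials, typically through the `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_CLIENT_SECRET` environment variables used by the Azure Identity libraries.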
### User Access Tokens
-User access tokens are generated using the Identity SDK and are associated with users created in the Identity SDK. See an example of how to [create users and generate tokens](../quickstarts/access-tokens.md). Then, user access tokens are used to authenticate participants added to conversations in the Chat or Calling SDK. For more information, see [add chat to your app](../quickstarts/chat/get-started.md). User access token authentication is different compared to access key and managed identity authentication in that it is used to authenticate a user rather than a secured Azure resource.
+User access tokens are generated using the Identity SDK and are associated with users created in the Identity SDK. See an example of how to [create users and generate tokens](../quickstarts/access-tokens.md). Then, user access tokens are used to authenticate participants added to conversations in the Chat or Calling SDK. For more information, see [add chat to your app](../quickstarts/chat/get-started.md). User access token authentication differs from access key and Azure AD authentication in that it is used to authenticate a user rather than a secured Azure resource.
## Using identity for monitoring and metrics
The user identity is intended to act as a primary key for logs and metrics colle
> [!div class="nextstepaction"] > [Create and manage Communication Services resources](../quickstarts/create-communication-resource.md)
-> [Create an Azure Active Directory managed identity application from the Azure CLI](../quickstarts/identity/service-principal-from-cli.md)
+> [Create an Azure Active Directory service principal application from the Azure CLI](../quickstarts/identity/service-principal-from-cli.md)
> [Create User Access Tokens](../quickstarts/access-tokens.md) For more information, see the following articles:
communication-services Client And Server Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/client-and-server-architecture.md
This page illustrates typical architectural components and dataflows in various
## User access management
-Azure Communication Services clients must present `user access tokens` to access Communication Services resources securely. `User access tokens` should be generated and managed by a trusted service due to the sensitive nature of the token and the connection string or managed identity necessary to generate them. Failure to properly manage access tokens can result in additional charges due to misuse of resources.
+Azure Communication Services clients must present `user access tokens` to access Communication Services resources securely. `User access tokens` should be generated and managed by a trusted service due to the sensitive nature of the token and the connection string or Azure AD authentication secrets necessary to generate them. Failure to properly manage access tokens can result in additional charges due to misuse of resources.
:::image type="content" source="../media/scenarios/architecture_v2_identity.svg" alt-text="Diagram showing user access token architecture.":::
communication-services Identity Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/identity-model.md
If you want to remove a user's ability to access specific functionality, revoke
In Azure Communication Services, a rotation of access keys revokes all active access tokens that were created by using a former access key. All identities lose access to Azure Communication Services, and they must issue new access tokens.
-We recommend issuing access tokens in your server-side service and not in the client's application. The reasoning is that issuing requires an access key or a managed identity. For security reasons, sharing access keys with the client's application isn't recommended.
+We recommend issuing access tokens in your server-side service and not in the client's application. The reasoning is that issuing requires an access key or Azure AD authentication. Sharing secrets with the client's application isn't recommended for security reasons.
The client application should use a trusted service endpoint that can authenticate your clients. The endpoint should issue access tokens on their behalf. For more information, see [Client and server architecture](./client-and-server-architecture.md).
communication-services Calling Chat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/interop/calling-chat.md
Calling a Teams user using [microsoftTeamsUserId](/javascript/api/@azure/communi
const teamsCallee = { microsoftTeamsUserId: '<Teams User AAD Object ID>' };
const call = callAgent.startCall([teamsCallee]);
```
-
+**Voice and video calling events**
+
+[Communication Services voice and video calling events](/azure/event-grid/communication-services-voice-video-events) are raised for calls between a Communication Services user and Teams users.
+ **Limitations and known issues**

- Teams users must be in "TeamsOnly" mode. Skype for Business users can't receive 1:1 calls from Communication Services users.
- Escalation to a group call isn't supported.
communication-services Join Teams Meeting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/join-teams-meeting.md
Microsoft will indicate to you via the Azure Communication Services API that rec
- PowerPoint presentations are not rendered for Communication Services users.
- Teams meetings support up to 1000 participants, but the Azure Communication Services Calling SDK currently only supports 350 participants and Chat SDK supports 250 participants.
- With [Cloud Video Interop for Microsoft Teams](/microsoftteams/cloud-video-interop), some devices have seen issues when a Communication Services user shares their screen.
+- [Communication Services voice and video calling events](/azure/event-grid/communication-services-voice-video-events) are not raised for Teams meetings.
- Features such as reactions, raised hand, together mode, and breakout rooms are only available for Teams users.
- Communication Services users cannot interact with poll or Q&A apps in meetings.
- Communication Services users won't have access to all chat features supported by Teams. They can send and receive text messages, and use typing indicators, read receipts, and other features supported by the Chat SDK. However, features like file sharing and replying or reacting to a message are not supported for Communication Services users.
communication-services Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/identity/service-principal.md
This quickstart shows you how to authorize access to the Identity and SMS SDKs f
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free) - An active Azure Communication Services resource, see [create a Communication Services resource](../create-communication-resource.md) if you do not have one. - To send an SMS you will need a [Phone Number](../telephony/get-phone-number.md).-- A setup Service Principal for a development environment, see [Authorize access with managed identity](./service-principal-from-cli.md)
+- A Service Principal set up for a development environment; see [Authorize access with a service principal](./service-principal-from-cli.md)
::: zone pivot="programming-language-csharp" [!INCLUDE [.NET](./includes/active-directory/service-principal-net.md)]
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/microservices-dapr-azure-resource-manager.md
In this tutorial, you deploy the same applications from the Dapr [Hello World](h
::: zone pivot="container-apps-bicep"
-* [Bicep](/azure-resource-manager/bicep/install)
+* [Bicep](/azure/azure-resource-manager/bicep/install)
::: zone-end
cost-management-billing Download Azure Invoice Daily Usage Date https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/download-azure-invoice-daily-usage-date.md
tags: billing
Previously updated : 10/28/2021 Last updated : 01/06/2022 # Download or view your Azure billing invoice
-For most subscriptions, you can download your invoice from the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) or have it sent in email. If you're an Azure customer with an Enterprise Agreement (EA customer), you can't download your organization's invoices. Invoices are sent to whoever is set up to receive invoices for the enrollment.
+For most subscriptions, you can download your invoice from the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) or have it sent in email.
+
+If you're an Azure customer with a direct Enterprise Agreement (EA customer), you download your organization's invoices using the information at [Download or view your Azure billing invoice](direct-ea-azure-usage-charges-invoices.md#download-or-view-your-azure-billing-invoice). For indirect EA customers, see [Azure Enterprise enrollment invoices](ea-portal-enrollment-invoices.md).
Only certain roles have permission to get a billing invoice, like the Account Administrator or Enterprise Administrator. To learn more about getting access to billing information, see [Manage access to Azure billing using roles](manage-billing-access.md).
defender-for-cloud Defender For Sql Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-sql-introduction.md
Title: Microsoft Defender for SQL - the benefits and features description: Learn about the benefits and features of Microsoft Defender for SQL. Previously updated : 11/09/2021 Last updated : 01/06/2022
Microsoft Defender for SQL includes two Microsoft Defender plans that extend Mic
- [Azure Arc-enabled SQL Server (preview)](/sql/sql-server/azure-arc/overview) - [SQL Server running on Windows machines without Azure Arc](../azure-monitor/agents/agent-windows.md)
+When you enable either of these plans, all supported resources that exist within the subscription are protected. Future resources created on the same subscription will also be protected.
## What are the benefits of Microsoft Defender for SQL?
defender-for-cloud Just In Time Access Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/just-in-time-access-usage.md
Title: Just-in-time virtual machine access in Microsoft Defender for Cloud | Microsoft Docs description: Learn how just-in-time VM access (JIT) in Microsoft Defender for Cloud helps you control access to your Azure virtual machines. Previously updated : 11/09/2021 Last updated : 01/06/2022 # Secure your management ports with just-in-time access
This page teaches you how to include JIT in your security program. You'll learn
|Aspect|Details| |-|:-|
-|Release state:|General availability (GA)|
-|Pricing:|Requires [Microsoft Defender for servers](defender-for-servers-introduction.md)|
-|Supported VMs:|:::image type="icon" source="./medi)|
-|Required roles and permissions:|**Reader** and **SecurityReader** roles can both view the JIT status and parameters.<br>To create custom roles that can work with JIT, see [What permissions are needed to configure and use JIT?](just-in-time-access-overview.md#what-permissions-are-needed-to-configure-and-use-jit).<br>To create a least-privileged role for users that need to request JIT access to a VM, and perform no other JIT operations, use the [Set-JitLeastPrivilegedRole script](https://github.com/Azure/Azure-Security-Center/tree/main/Powershell%20scripts/JIT%20Scripts/JIT%20Custom%20Role) from the Defender for Cloud GitHub community pages.|
-|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts|
+| Release state: | General availability (GA) |
+| Supported VMs: | :::image type="icon" source="./medi). |
+| Required roles and permissions: | **Reader** and **SecurityReader** roles can both view the JIT status and parameters.<br>To create custom roles that can work with JIT, see [What permissions are needed to configure and use JIT?](just-in-time-access-overview.md#what-permissions-are-needed-to-configure-and-use-jit).<br>To create a least-privileged role for users that need to request JIT access to a VM, and perform no other JIT operations, use the [Set-JitLeastPrivilegedRole script](https://github.com/Azure/Azure-Security-Center/tree/main/Powershell%20scripts/JIT%20Scripts/JIT%20Custom%20Role) from the Defender for Cloud GitHub community pages. |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts |
|||
+<sup><a name="footnote1"></a>1</sup> For any VM protected by Azure Firewall, JIT will only fully protect the machine if it's in the same VNET as the firewall. VMs using VNET peering will not be fully protected.
## Enable JIT VM access <a name="jit-configure"></a>
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/release-notes-archive.md
Title: Archive of what's new in Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud from six months ago and earlier. Previously updated : 12/09/2021 Last updated : 01/04/2022 # Archive for what's new in Defender for Cloud?
This page provides you with information about:
- Bug fixes - Deprecated functionality
+## July 2021
+
+Updates in July include:
+
+- [Azure Sentinel connector now includes optional bi-directional alert synchronization (in preview)](#azure-sentinel-connector-now-includes-optional-bi-directional-alert-synchronization-in-preview)
+- [Logical reorganization of Azure Defender for Resource Manager alerts](#logical-reorganization-of-azure-defender-for-resource-manager-alerts)
+- [Enhancements to recommendation to enable Azure Disk Encryption (ADE)](#enhancements-to-recommendation-to-enable-azure-disk-encryption-ade)
+- [Continuous export of secure score and regulatory compliance data released for general availability (GA)](#continuous-export-of-secure-score-and-regulatory-compliance-data-released-for-general-availability-ga)
+- [Workflow automations can be triggered by changes to regulatory compliance assessments (GA)](#workflow-automations-can-be-triggered-by-changes-to-regulatory-compliance-assessments-ga)
+- [Assessments API field 'FirstEvaluationDate' and 'StatusChangeDate' now available in workspace schemas and logic apps](#assessments-api-field-firstevaluationdate-and-statuschangedate-now-available-in-workspace-schemas-and-logic-apps)
+- ['Compliance over time' workbook template added to Azure Monitor Workbooks gallery](#compliance-over-time-workbook-template-added-to-azure-monitor-workbooks-gallery)
+
+### Azure Sentinel connector now includes optional bi-directional alert synchronization (in preview)
+
+Security Center natively integrates with [Azure Sentinel](../sentinel/index.yml), Azure's cloud-native SIEM and SOAR solution.
+
+Azure Sentinel includes built-in connectors for Azure Security Center at the subscription and tenant levels. Learn more in [Stream alerts to Azure Sentinel](export-to-siem.md#stream-alerts-to-microsoft-sentinel).
+
+When you connect Azure Defender to Azure Sentinel, the status of Azure Defender alerts that get ingested into Azure Sentinel is synchronized between the two services. So, for example, when an alert is closed in Azure Defender, that alert will display as closed in Azure Sentinel as well. Changing the status of an alert in Azure Defender *won't* affect the status of any Azure Sentinel **incidents** that contain the synchronized Azure Sentinel alert, only that of the synchronized alert itself.
+
+Enabling this preview feature, **bi-directional alert synchronization**, will automatically sync the status of the original Azure Defender alerts with Azure Sentinel incidents that contain the copies of those Azure Defender alerts. So, for example, when an Azure Sentinel incident containing an Azure Defender alert is closed, Azure Defender will automatically close the corresponding original alert.
+
+Learn more in [Connect Azure Defender alerts from Azure Security Center](../sentinel/connect-azure-security-center.md).
+
+### Logical reorganization of Azure Defender for Resource Manager alerts
+
+The alerts listed below were provided as part of the [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) plan.
+
+As part of a logical reorganization of some of the Azure Defender plans, we've moved some alerts from **Azure Defender for Resource Manager** to **Azure Defender for servers**.
+
+The alerts are organized according to two main principles:
+
+- Alerts that provide control-plane protection - across many Azure resource types - are part of Azure Defender for Resource Manager
+- Alerts that protect specific workloads are in the Azure Defender plan that relates to the corresponding workload
+
+These are the alerts that were part of Azure Defender for Resource Manager, and which, as a result of this change, are now part of Azure Defender for servers:
+
+- ARM_AmBroadFilesExclusion
+- ARM_AmDisablementAndCodeExecution
+- ARM_AmDisablement
+- ARM_AmFileExclusionAndCodeExecution
+- ARM_AmTempFileExclusionAndCodeExecution
+- ARM_AmTempFileExclusion
+- ARM_AmRealtimeProtectionDisabled
+- ARM_AmTempRealtimeProtectionDisablement
+- ARM_AmRealtimeProtectionDisablementAndCodeExec
+- ARM_AmMalwareCampaignRelatedExclusion
+- ARM_AmTemporarilyDisablement
+- ARM_UnusualAmFileExclusion
+- ARM_CustomScriptExtensionSuspiciousCmd
+- ARM_CustomScriptExtensionSuspiciousEntryPoint
+- ARM_CustomScriptExtensionSuspiciousPayload
+- ARM_CustomScriptExtensionSuspiciousFailure
+- ARM_CustomScriptExtensionUnusualDeletion
+- ARM_CustomScriptExtensionUnusualExecution
+- ARM_VMAccessUnusualConfigReset
+- ARM_VMAccessUnusualPasswordReset
+- ARM_VMAccessUnusualSSHReset
+
+Learn more about the [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) and [Azure Defender for servers](defender-for-servers-introduction.md) plans.
++
+### Enhancements to recommendation to enable Azure Disk Encryption (ADE)
+
+Following user feedback, we've renamed the recommendation **Disk encryption should be applied on virtual machines**.
+
+The new recommendation uses the same assessment ID and is called **Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources**.
+
+The description has also been updated to better explain the purpose of this hardening recommendation:
+
+| Recommendation | Description | Severity |
+|--|--|:--:|
+| **Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources** | By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys; temp disks and data caches aren't encrypted, and data isn't encrypted when flowing between compute and storage resources. For a comparison of different disk encryption technologies in Azure, see https://aka.ms/diskencryptioncomparison.<br>Use Azure Disk Encryption to encrypt all this data. Disregard this recommendation if: (1) you're using the encryption-at-host feature, or (2) server-side encryption on Managed Disks meets your security requirements. Learn more in Server-side encryption of Azure Disk Storage. | High |
+| | | |
++
+### Continuous export of secure score and regulatory compliance data released for general availability (GA)
+
+[Continuous export](continuous-export.md) provides the mechanism for exporting your security alerts and recommendations for tracking with other monitoring tools in your environment.
+
+When you set up your continuous export, you configure what is exported, and where it will go. Learn more in the [overview of continuous export](continuous-export.md).
+
+We've enhanced and expanded this feature over time:
+
+- In November 2020, we added the **preview** option to stream changes to your **secure score**.<br/>For full details, see [Secure score is now available in continuous export (preview)](release-notes-archive.md#secure-score-is-now-available-in-continuous-export-preview).
+
+- In December 2020, we added the **preview** option to stream changes to your **regulatory compliance assessment data**.<br/>For full details, see [Continuous export gets new data types (preview)](release-notes-archive.md#continuous-export-gets-new-data-types-and-improved-deployifnotexist-policies).
+
+With this update, these two options are released for general availability (GA).
++
+### Workflow automations can be triggered by changes to regulatory compliance assessments (GA)
+
+In February 2021, we added a **preview** third data type to the trigger options for your workflow automations: changes to regulatory compliance assessments. Learn more in [Workflow automations can be triggered by changes to regulatory compliance assessments](release-notes-archive.md#workflow-automations-can-be-triggered-by-changes-to-regulatory-compliance-assessments-in-preview).
+
+With this update, this trigger option is released for general availability (GA).
+
+Learn how to use the workflow automation tools in [Automate responses to Security Center triggers](workflow-automation.md).
++
+### Assessments API field 'FirstEvaluationDate' and 'StatusChangeDate' now available in workspace schemas and logic apps
+
+In May 2021, we updated the Assessment API with two new fields, **FirstEvaluationDate** and **StatusChangeDate**. For full details, see [Assessments API expanded with two new fields](release-notes-archive.md#assessments-api-expanded-with-two-new-fields).
+
+Those fields were accessible through the REST API, Azure Resource Graph, continuous export, and in CSV exports.
+
+With this change, we're making the information available in the Log Analytics workspace schema and from logic apps.
++
+### 'Compliance over time' workbook template added to Azure Monitor Workbooks gallery
+
+In March, we announced the integrated Azure Monitor Workbooks experience in Security Center (see [Azure Monitor Workbooks integrated into Security Center and three templates provided](release-notes-archive.md#azure-monitor-workbooks-integrated-into-security-center-and-three-templates-provided)).
+
+The initial release included three templates to build dynamic and visual reports about your organization's security posture.
+
+We've now added a workbook dedicated to tracking a subscription's compliance with the regulatory or industry standards applied to it.
+
+Learn about using these reports or building your own in [Create rich, interactive reports of Security Center data](custom-dashboards-azure-workbooks.md).
++

## June 2021

Updates in June include:
Learn more about Security Center's vulnerability scanners:
The severity of the recommendation **Sensitive data in your SQL databases should be classified** has been changed from **High** to **Low**.
-This is part of the ongoing changes to this recommendation announced in [Enhancements to recommendation to classify sensitive data in SQL databases](upcoming-changes.md#enhancements-to-recommendation-to-classify-sensitive-data-in-sql-databases).
+This is part of an ongoing change to this recommendation announced in our upcoming changes page.
### New recommendations to enable trusted launch capabilities (in preview)
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 12/14/2021 Last updated : 01/06/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
> [!TIP] > If you're looking for items older than six months, you'll find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md). +
+## January 2022
+
+Updates in January include:
+
+- [Recommendations to enable Microsoft Defender plans on workspaces (in preview)](#recommendations-to-enable-microsoft-defender-plans-on-workspaces-in-preview)
+- [Auto provision Log Analytics agent to Azure Arc-enabled machines (preview)](#auto-provision-log-analytics-agent-to-azure-arc-enabled-machines-preview)
+- [Deprecated the recommendation to classify sensitive data in SQL databases](#deprecated-the-recommendation-to-classify-sensitive-data-in-sql-databases)
++
+### Recommendations to enable Microsoft Defender plans on workspaces (in preview)
+
+To benefit from all of the security features available from [Microsoft Defender for servers](defender-for-servers-introduction.md) and [Microsoft Defender for SQL on machines](defender-for-sql-introduction.md), the plans must be enabled on **both** the subscription and workspace levels.
+
+When a machine is in a subscription with one of these plans enabled, you'll be billed for the full protections. However, if that machine is reporting to a workspace *without* the plan enabled, you won't actually receive those benefits.
+
+We've added two recommendations that highlight workspaces that don't have these plans enabled but nevertheless have machines reporting to them from subscriptions that *do* have the plan enabled.
+
+The two recommendations, which both offer automated remediation (the 'Fix' action), are:
+
+|Recommendation |Description |Severity |
+||||
+|[Microsoft Defender for servers should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1ce68079-b783-4404-b341-d2851d6f0fa2) |Microsoft Defender for servers brings threat detection and advanced defenses for your Windows and Linux machines.<br>With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for servers but missing out on some of the benefits.<br>When you enable Microsoft Defender for servers on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for servers - even if they're in subscriptions without Defender plans enabled. Unless you also enable Microsoft Defender for servers on the subscription, those machines won't be able to take advantage of just-in-time VM access, adaptive application controls, and network detections for Azure resources.<br>Learn more in <a target="_blank" href="/azure/defender-for-cloud/defender-for-servers-introduction?wt.mc_id=defenderforcloud_inproduct_portal_recoremediation">Introduction to Microsoft Defender for servers</a>.<br />(No related policy) |Medium |
+|[Microsoft Defender for SQL on machines should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e9c320f1-03a0-4d2b-9a37-84b3bdc2e281) |Microsoft Defender for SQL on machines brings threat detection and advanced defenses for the SQL servers on your Windows and Linux machines.<br>With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for SQL on machines but missing out on some of the benefits.<br>When you enable Microsoft Defender for SQL on machines on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for SQL on machines - even if they're in subscriptions without Defender plans enabled. Unless you also enable the plan on the subscription, those machines won't benefit from the full set of protections.<br>Learn more in <a target="_blank" href="/azure/defender-for-cloud/defender-for-sql-introduction?wt.mc_id=defenderforcloud_inproduct_portal_recoremediation">Introduction to Microsoft Defender for SQL</a>.<br />(No related policy) |Medium |
+||||
++
+### Auto provision Log Analytics agent to Azure Arc-enabled machines (preview)
+
+Defender for Cloud uses the Log Analytics agent to gather security-related data from machines. The agent reads various security-related configurations and event logs and copies the data to your workspace for analysis.
+
+Defender for Cloud's auto provisioning settings have a toggle for each type of supported extension, including the Log Analytics agent.
+
+In a further expansion of our hybrid cloud features, we've added an option to auto provision the Log Analytics agent to machines connected to Azure Arc.
+
+As with the other auto provisioning options, this is configured at the subscription level.
+
+When you enable this option, you'll be prompted for the workspace.
+
+> [!NOTE]
+> For this preview, you can't select the default workspaces that were created by Defender for Cloud. To ensure you receive the full set of security features available for the Azure Arc-enabled servers, verify that you have the relevant security solution installed on the selected workspace.
++
+### Deprecated the recommendation to classify sensitive data in SQL databases
+
+We've removed the recommendation **Sensitive data in your SQL databases should be classified** as part of an overhaul of how Defender for Cloud identifies and protects sensitive data in your cloud resources.
+
+Advance notice of this change appeared for the last six months in the [Important upcoming changes to Microsoft Defender for Cloud](upcoming-changes.md) page.
++

## December 2021

Updates in December include:
Learn more in [Introduction to Azure Security Benchmark](/security/benchmark/azu
### Microsoft Sentinel connector's optional bi-directional alert synchronization released for general availability (GA)
-In July, [we announced](#azure-sentinel-connector-now-includes-optional-bi-directional-alert-synchronization-in-preview) a preview feature, **bi-directional alert synchronization**, for the built-in connector in [Microsoft Sentinel](../sentinel/index.yml) (Microsoft's cloud-native SIEM and SOAR solution). This feature is now released for general availability (GA).
+In July, [we announced](release-notes-archive.md#azure-sentinel-connector-now-includes-optional-bi-directional-alert-synchronization-in-preview) a preview feature, **bi-directional alert synchronization**, for the built-in connector in [Microsoft Sentinel](../sentinel/index.yml) (Microsoft's cloud-native SIEM and SOAR solution). This feature is now released for general availability (GA).
When you connect Microsoft Defender for Cloud to Microsoft Sentinel, the status of security alerts is synchronized between the two services. So, for example, when an alert is closed in Defender for Cloud, that alert will display as closed in Microsoft Sentinel as well. Changing the status of an alert in Defender for Cloud won't affect the status of any Microsoft Sentinel **incidents** that contain the synchronized Microsoft Sentinel alert, only that of the synchronized alert itself.
For full details, including sample Kusto queries for Azure Resource Graph, see [
### Changed prefix of some alert types from "ARM_" to "VM_"
-In July 2021, we announced a [logical reorganization of Azure Defender for Resource Manager alerts](release-notes.md#logical-reorganization-of-azure-defender-for-resource-manager-alerts)
+In July 2021, we announced a [logical reorganization of Azure Defender for Resource Manager alerts](release-notes-archive.md#logical-reorganization-of-azure-defender-for-resource-manager-alerts)
As part of a logical reorganization of some of the Azure Defender plans, we moved twenty-one alerts from [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) to [Azure Defender for servers](defender-for-servers-introduction.md).
The recommendations page now has two tabs to provide alternate ways to view the
- **All recommendations** - Use this tab to view the list of recommendations as a flat list. This tab is also great for understanding which initiative (including regulatory compliance standards) generated the recommendation. Learn more about initiatives and their relationship to recommendations in [What are security policies, initiatives, and recommendations?](security-policy-concept.md). :::image type="content" source="media/release-notes/recommendations-tabs.png" alt-text="Tabs to change the view of the recommendations list in Azure Security Center.":::-
-## July 2021
-
-Updates in July include:
--- [Azure Sentinel connector now includes optional bi-directional alert synchronization (in preview)](#azure-sentinel-connector-now-includes-optional-bi-directional-alert-synchronization-in-preview)-- [Logical reorganization of Azure Defender for Resource Manager alerts](#logical-reorganization-of-azure-defender-for-resource-manager-alerts) -- [Enhancements to recommendation to enable Azure Disk Encryption (ADE)](#enhancements-to-recommendation-to-enable-azure-disk-encryption-ade) -- [Continuous export of secure score and regulatory compliance data released for general availability (GA)](#continuous-export-of-secure-score-and-regulatory-compliance-data-released-for-general-availability-ga)-- [Workflow automations can be triggered by changes to regulatory compliance assessments (GA)](#workflow-automations-can-be-triggered-by-changes-to-regulatory-compliance-assessments-ga)-- [Assessments API field 'FirstEvaluationDate' and 'StatusChangeDate' now available in workspace schemas and logic apps](#assessments-api-field-firstevaluationdate-and-statuschangedate-now-available-in-workspace-schemas-and-logic-apps)-- ['Compliance over time' workbook template added to Azure Monitor Workbooks gallery](#compliance-over-time-workbook-template-added-to-azure-monitor-workbooks-gallery)-
-### Azure Sentinel connector now includes optional bi-directional alert synchronization (in preview)
-
-Security Center natively integrates with [Azure Sentinel](../sentinel/index.yml), Azure's cloud-native SIEM and SOAR solution.
-
-Azure Sentinel includes built-in connectors for Azure Security Center at the subscription and tenant levels. Learn more in [Stream alerts to Azure Sentinel](export-to-siem.md#stream-alerts-to-microsoft-sentinel).
-
-When you connect Azure Defender to Azure Sentinel, the status of Azure Defender alerts that get ingested into Azure Sentinel is synchronized between the two services. So, for example, when an alert is closed in Azure Defender, that alert will display as closed in Azure Sentinel as well. Changing the status of an alert in Azure Defender "won't"* affect the status of any Azure Sentinel **incidents** that contain the synchronized Azure Sentinel alert, only that of the synchronized alert itself.
-
-Enabling this preview feature, **bi-directional alert synchronization**, will automatically sync the status of the original Azure Defender alerts with Azure Sentinel incidents that contain the copies of those Azure Defender alerts. So, for example, when an Azure Sentinel incident containing an Azure Defender alert is closed, Azure Defender will automatically close the corresponding original alert.
-
-Learn more in [Connect Azure Defender alerts from Azure Security Center](../sentinel/connect-azure-security-center.md).
-
-### Logical reorganization of Azure Defender for Resource Manager alerts
-
-The alerts listed below were provided as part of the [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) plan.
-
-As part of a logical reorganization of some of the Azure Defender plans, we've moved some alerts from **Azure Defender for Resource Manager** to **Azure Defender for servers**.
-
-The alerts are organized according to two main principles:
--- Alerts that provide control-plane protection - across many Azure resource types - are part of Azure Defender for Resource Manager-- Alerts that protect specific workloads are in the Azure Defender plan that relates to the corresponding workload-
-These are the alerts that were part of Azure Defender for Resource Manager, and which, as a result of this change, are now part of Azure Defender for servers:
--- ARM_AmBroadFilesExclusion-- ARM_AmDisablementAndCodeExecution-- ARM_AmDisablement-- ARM_AmFileExclusionAndCodeExecution-- ARM_AmTempFileExclusionAndCodeExecution-- ARM_AmTempFileExclusion-- ARM_AmRealtimeProtectionDisabled-- ARM_AmTempRealtimeProtectionDisablement-- ARM_AmRealtimeProtectionDisablementAndCodeExec-- ARM_AmMalwareCampaignRelatedExclusion-- ARM_AmTemporarilyDisablement-- ARM_UnusualAmFileExclusion-- ARM_CustomScriptExtensionSuspiciousCmd-- ARM_CustomScriptExtensionSuspiciousEntryPoint-- ARM_CustomScriptExtensionSuspiciousPayload-- ARM_CustomScriptExtensionSuspiciousFailure-- ARM_CustomScriptExtensionUnusualDeletion-- ARM_CustomScriptExtensionUnusualExecution-- ARM_VMAccessUnusualConfigReset-- ARM_VMAccessUnusualPasswordReset-- ARM_VMAccessUnusualSSHReset-
-Learn more about the [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) and [Azure Defender for servers](defender-for-servers-introduction.md) plans.
--
-### Enhancements to recommendation to enable Azure Disk Encryption (ADE)
-
-Following user feedback, we've renamed the recommendation **Disk encryption should be applied on virtual machines**.
-
-The new recommendation uses the same assessment ID and is called **Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources**.
-
-The description has also been updated to better explain the purpose of this hardening recommendation:
-
-| Recommendation | Description | Severity |
-|--|--|:--:|
-| **Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources** | By default, a virtual machineΓÇÖs OS and data disks are encrypted-at-rest using platform-managed keys; temp disks and data caches arenΓÇÖt encrypted, and data isnΓÇÖt encrypted when flowing between compute and storage resources. For a comparison of different disk encryption technologies in Azure, see https://aka.ms/diskencryptioncomparison.<br>Use Azure Disk Encryption to encrypt all this data. Disregard this recommendation if: (1) youΓÇÖre using the encryption-at-host feature, or (2) server-side encryption on Managed Disks meets your security requirements. Learn more in Server-side encryption of Azure Disk Storage. | High |
-| | | |
--
-### Continuous export of secure score and regulatory compliance data released for general availability (GA)
-
-[Continuous export](continuous-export.md) provides the mechanism for exporting your security alerts and recommendations for tracking with other monitoring tools in your environment.
-
-When you set up your continuous export, you configure what is exported, and where it will go. Learn more in the [overview of continuous export](continuous-export.md).
-
-We've enhanced and expanded this feature over time:
--- In November 2020, we added the **preview** option to stream changes to your **secure score**.<br/>For full details, see [Secure score is now available in continuous export (preview)](release-notes-archive.md#secure-score-is-now-available-in-continuous-export-preview).--- In December 2020, we added the **preview** option to stream changes to your **regulatory compliance assessment data**.<br/>For full details, see [Continuous export gets new data types (preview)](release-notes-archive.md#continuous-export-gets-new-data-types-and-improved-deployifnotexist-policies).-
-With this update, these two options are released for general availability (GA).
--
-### Workflow automations can be triggered by changes to regulatory compliance assessments (GA)
-
-In February 2021, we added a **preview** third data type to the trigger options for your workflow automations: changes to regulatory compliance assessments. Learn more in [Workflow automations can be triggered by changes to regulatory compliance assessments](release-notes-archive.md#workflow-automations-can-be-triggered-by-changes-to-regulatory-compliance-assessments-in-preview).
-
-With this update, this trigger option is released for general availability (GA).
-
-Learn how to use the workflow automation tools in [Automate responses to Security Center triggers](workflow-automation.md).
--
-### Assessments API field 'FirstEvaluationDate' and 'StatusChangeDate' now available in workspace schemas and logic apps
-
-In May 2021, we updated the Assessment API with two new fields, **FirstEvaluationDate** and **StatusChangeDate**. For full details, see [Assessments API expanded with two new fields](release-notes-archive.md#assessments-api-expanded-with-two-new-fields).
-
-Those fields were accessible through the REST API, Azure Resource Graph, continuous export, and in CSV exports.
-
-With this change, we're making the information available in the Log Analytics workspace schema and from logic apps.
--
-### 'Compliance over time' workbook template added to Azure Monitor Workbooks gallery
-
-In March, we announced the integrated Azure Monitor Workbooks experience in Security Center (see [Azure Monitor Workbooks integrated into Security Center and three templates provided](release-notes-archive.md#azure-monitor-workbooks-integrated-into-security-center-and-three-templates-provided)).
-
-The initial release included three templates to build dynamic and visual reports about your organization's security posture.
-
-We've now added a workbook dedicated to tracking a subscription's compliance with the regulatory or industry standards applied to it.
-
-Learn about using these reports or building your own in [Create rich, interactive reports of Security Center data](custom-dashboards-azure-workbooks.md).
-
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 01/05/2022 Last updated : 01/06/2022 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | February 2022 |
| [Deprecating the recommendation to use service principals to protect your subscriptions](#deprecating-the-recommendation-to-use-service-principals-to-protect-your-subscriptions) | February 2022 |
| [Deprecating the recommendations to install the network traffic data collection agent](#deprecating-the-recommendations-to-install-the-network-traffic-data-collection-agent) | February 2022 |
-| [Enhancements to recommendation to classify sensitive data in SQL databases](#enhancements-to-recommendation-to-classify-sensitive-data-in-sql-databases) | Q1 2022 |
| [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | March 2022 |
| | |
Changes in our roadmap and priorities have removed the need for the network traf
-### Enhancements to recommendation to classify sensitive data in SQL databases
-
-**Estimated date for change:** Q1 2022
-
-The recommendation **Sensitive data in your SQL databases should be classified** in the **Apply data classification** security control will be replaced with a new version that's better aligned with Microsoft's data classification strategy. As a result the recommendation's ID will also change (currently, it's b0df6f56-862d-4730-8597-38c0fd4ebd59).
-- ### Changes to recommendations for managing endpoint protection solutions **Estimated date for change:** March 2022
defender-for-iot How To Import Device Information https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-import-device-information.md
Title: Import device information description: Defender for IoT sensors monitor and analyze mirrored traffic. In these cases, you might want to import data to enrich information on devices already detected. Previously updated : 11/09/2021 Last updated : 01/06/2022 # Import device information to a sensor
-A Microsoft Defender for IoT sensor monitors and analyzes mirrored traffic. In some cases, because of organization-specific network configuration policies, some information might not be transmitted.
+Sensors monitor and analyze mirrored traffic. In some cases, because of organization-specific network configuration policies, some information might not be transmitted.
In these cases, you might want to import data to enrich information on devices that are already detected. Two options are available for importing information to sensors:
In these cases, you might want to import data to enrich information on devices t
This section describes how to import device names, types, groups, or Purdue layers to the device map. You do this from the map.
-Here are the import requirements:
+**Import requirements**
- **Names**: Can be up to 30 characters.
Here are the import requirements:
- **Device Group**: Create a new group of up to 30 characters.
-> [!NOTE]
-> To avoid conflicts, don't import the data that you exported from one sensor to another sensor.
+**To avoid conflicts, don't import the data that you exported from one sensor to another sensor.**
-To import:
+**To import:**
1. On the side menu, select **Devices**.
To import:
This section describes how to import the device IP address, OS, patch level, or authorization status to the device map. You do this from the **Import Settings** dialog box.
-To import the IP address, OS, and patch level:
+**To import the IP address, OS, and patch level:**
-1. Download the [devices_info_2.2.8 and up.csv](https://cyberx-labs.zendesk.com/hc/en-us/articles/360008658272-How-To-Import-Data) file from the [Help Center](https://cyberx-labs.zendesk.com/hc/en-us) and enter the information as follows:
+1. Download the [Devices settings file](https://download.microsoft.com/download/8/2/3/823c55c4-7659-4236-bfda-cc2427be2cee/CSS/devices_info_2.2.8%20and%20up.xlsx) and enter the information as follows:
- **IP Address**: Enter the device IP address.
To import the IP address, OS, and patch level:
3. To upload the required configuration, in the **Device Info** section, select **Add** and upload the CSV file that you prepared.
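For illustration, a prepared device-info CSV might look like the following sketch. The column headers here are assumptions based on the fields described above; the downloaded settings file defines the authoritative layout.

```csv
IP Address,Operating System,Date of Last Update
10.168.1.32,Windows 10,2021-11-09
```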
-To import the authorization status:
+**To import the authorization status:**
-1. Download and save the [authorized_devices.csv](https://cyberx-labs.zendesk.com/hc/en-us/articles/360008658272-How-To-Import-Data) file from the Defender for IoT help center. Verify that you saved the file as a CSV.
+1. Download the [Authorization file](https://download.microsoft.com/download/8/2/3/823c55c4-7659-4236-bfda-cc2427be2cee/CSS/authorized_devices%20-%20example.csv) and save it. Verify that you saved the file as a CSV.
2. Enter the information as:
To import the authorization status:
When the information is imported, you receive alerts about unauthorized devices for all the devices that don't appear on this list.
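Likewise, a sketch of a prepared authorization CSV, with headers assumed from the fields above (the downloadable example file is authoritative):

```csv
IP Address,Name
10.168.1.32,Authorized-PLC-01
```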
-## Import device information to the sensor
-
-The sensor monitors and analyzes mirrored traffic. In some cases, because of organization-specific network configuration policies, some information might not be transmitted.
-
-In these cases, you might want to import data to enrich device information on devices that are already detected. Two options are available for importing information to sensors:
-
-- **Import from the Map**: Update the device name, type, group, or Purdue layer to the map.
-
-- **Import from Import Settings**: Import device OS, IP address, patch level, or authorization status.
-
-### Import from the map
-
-This section describes how to import device names, types, groups, or Purdue layers to the device map. You do this from the map.
-
-Here are the import requirements:
-
-- **Names**: Can be up to 30 characters.
-
-- **Type** or **Purdue Layer**: Use the options that appear in the **Device Properties** dialog box. (Right-click the device and select **View Properties**.)
-
-- **Device Group**: Create a new group of up to 30 characters.
-
-> [!NOTE]
-> To avoid conflicts, don't import the data that you exported from one sensor to another sensor.
-
-To import:
-
-1. On the side menu, select **Devices**.
-
-2. In the upper-right corner of the **Devices** window, select :::image type="icon" source="media/how-to-import-device-information/file-icon.png" border="false":::.
-
- :::image type="content" source="media/how-to-import-device-information/device-window-v2.png" alt-text="The window to pick your device from.":::
-
-3. Select **Export Devices**. An extensive range of information appears in the exported file. Examples include protocols that the device uses and the device authorization status.
-
- :::image type="content" source="media/how-to-import-device-information/sample-exported-file.png" alt-text="The information in the exported file.":::
-
-4. In the CSV file, only change the device name, type, group, and Purdue layer. Then save the file.
-
- Use capitalization standards shown in the exported file. For example, for the Purdue layer, use all first-letter capitalization.
-
-5. From the **Import/Export** drop-down menu in the **device** window, select **Import Devices**.
-
- :::image type="content" source="media/how-to-import-device-information/import-assets-v2.png" alt-text="Import your devices.":::
-
-6. Select **Import Devices** and select the CSV file that you want to import. The import status messages appear on the screen until the **Import Devices** dialog box closes.
-
-### Import from import settings
-
-This section describes how to import the device IP address, OS, patch level, or authorization status to the device map. You do this from the **Import Settings** dialog box.
-
-To import the IP address, OS, and patch level:
-
-1. Download the [devices_info_2.2.8 and up.csv](https://cyberx-labs.zendesk.com/hc/en-us/articles/360008658272-How-To-Import-Data) file from the [Help Center](https://cyberx-labs.zendesk.com/hc/en-us) and enter the information as follows:
-
- - **IP Address**: The device IP address.
-
- - **Operating System**: Select from the drop-down list.
-
- - **Date of Last Update**: Use the YYYY-MM-DD format.
-
- :::image type="content" source="media/how-to-import-device-information/last-update-screen.png" alt-text="The content on the screen.":::
-
-2. On the side menu, select **Import Settings**.
-
- :::image type="content" source="media/how-to-import-device-information/import-settings-screen-v2.png" alt-text="Fill out the import settings screen.":::
-
-3. To upload the required configuration, in the **Device Info** section, select **Add** and upload the CSV file that you've prepared.
-
-To import the authorization status:
-
-1. Download and save the [authorized_devices - examples.csv](https://cyberx-labs.zendesk.com/hc/en-us/articles/360008658272-How-To-Import-Data) file from the Defender for IoT help center. Verify that you saved the file as a CSV.
-
-2. Enter the information as:
-
- - **IP Address**: The device IP address.
-
- - **Name**: The authorized device name. Verify that names are accurate. Names given to the devices in the imported list overwrite names shown in the device map.
-
- :::image type="content" source="media/how-to-import-device-information/device-map-file.png" alt-text="The import list to the device map.":::
-
-3. On the side menu, select **Import Settings**.
-
-4. In the **Authorized Devices** section, select **Add** and upload the CSV file that you saved.
-
-When the information is imported, you receive alerts about unauthorized devices for all the devices that don't appear on this list.
## See also
digital-twins Concepts Data Ingress Egress https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-data-ingress-egress.md
Azure Digital Twins implements **at least once** delivery for data emitted to eg
## Next steps Learn more about endpoints and routing events to external
-* [Routing Azure Digital Twins events](concepts-route-events.md)
+* [Endpoints and event routes](concepts-route-events.md)
See how to set up Azure Digital Twins to ingest data from IoT Hub: * [Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md)
digital-twins Concepts Event Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-event-notifications.md
Here's an example telemetry message body:
## Next steps Learn about delivering events to different destinations, using endpoints and routes:
-* [Event routes](concepts-route-events.md)
+* [Endpoints and event routes](concepts-route-events.md)
digital-twins Concepts Route Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-route-events.md
# Mandatory fields. Title: Event routes
+ Title: Endpoints and event routes
description: Learn how to route events within Azure Digital Twins and to other Azure Services.
digital-twins How To Manage Routes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes.md
When an endpoint can't deliver an event within a certain time period or after tr
You can set up the necessary storage resources using the [Azure portal](https://ms.portal.azure.com/#home) or the [Azure Digital Twins CLI](/cli/azure/dt). However, to create an endpoint with dead-lettering enabled, you'll need to use the [Azure Digital Twins CLI](/cli/azure/dt) or [control plane APIs](concepts-apis-sdks.md#overview-control-plane-apis).
-To learn more about dead-lettering, see [Event routes](concepts-route-events.md#dead-letter-events). For instructions on how to set up an endpoint with dead-lettering, continue through the rest of this section.
+To learn more about dead-lettering, see [Endpoints and event routes](concepts-route-events.md#dead-letter-events). For instructions on how to set up an endpoint with dead-lettering, continue through the rest of this section.
#### Set up storage resources
Here is an example of a dead-letter message for a [twin create notification](con
## Create an event route
-To actually send data from Azure Digital Twins to an endpoint, you'll need to define an **event route**. These routes let developers wire up event flow, throughout the system and to downstream services. A single route can allow multiple notifications and event types to be selected. Read more about event routes in [Routing Azure Digital Twins events](concepts-route-events.md).
+To actually send data from Azure Digital Twins to an endpoint, you'll need to define an **event route**. These routes let developers wire up event flow throughout the system and to downstream services. A single route can allow multiple notifications and event types to be selected. Read more about event routes in [Endpoints and event routes](concepts-route-events.md).
**Prerequisite**: You need to create endpoints as described earlier in this article before you can move on to creating a route. You can proceed to creating an event route once your endpoints are finished setting up.
digital-twins How To Route With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-route-with-managed-identity.md
az dt create --dt-name <name-of-existing-instance> --resource-group <resource-gr
## Assign Azure roles to the identity
-Once a system-assigned identity is created for your Azure Digital Twins instance, you'll need to assign it appropriate roles to authenticate with different types of [endpoints](concepts-route-events.md) for forwarding events to supported destinations. This section describes the role options and how to assign them to the system-assigned identity.
+Once a system-assigned identity is created for your Azure Digital Twins instance, you'll need to assign it appropriate roles to authenticate with different types of [endpoints](concepts-route-events.md) for routing events to supported destinations. This section describes the role options and how to assign them to the system-assigned identity.
>[!NOTE] > This is an important step. Without it, the identity won't be able to access your endpoints and events won't be delivered.
digital-twins How To Set Up Instance Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-portal.md
This version of this article goes through these steps manually, one by one, usin
Here are the additional options you can configure during setup, using the other tabs in the **Create Resource** process. * **Networking**: In this tab, you can enable private endpoints with [Azure Private Link](../private-link/private-link-overview.md) to eliminate public network exposure to your instance. For instructions, see [Enable private access with Private Link (preview)](./how-to-enable-private-link.md?tabs=portal#add-a-private-endpoint-during-instance-creation).
-* **Advanced**: In this tab, you can enable a system-managed identity for your instance that can be used when forwarding events to [endpoints](concepts-route-events.md). For more information about using system-managed identities with Azure Digital Twins, see [Security for Azure Digital Twins solutions](concepts-security.md#managed-identity-for-accessing-other-resources).
+* **Advanced**: In this tab, you can enable a system-managed identity for your instance that can be used when forwarding events along [event routes](concepts-route-events.md). For more information about using system-managed identities with Azure Digital Twins, see [Security for Azure Digital Twins solutions](concepts-security.md#managed-identity-for-accessing-other-resources).
* **Tags**: In this tab, you can add tags to your instance to help you organize it among your Azure resources. For more about Azure resource tags, see [Tag resources, resource groups, and subscriptions for logical organization](../azure-resource-manager/management/tag-resources.md). ### Verify success and collect important values
event-grid System Topics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/system-topics.md
Title: System topics in Azure Event Grid description: Describes system topics in Azure Event Grid. Previously updated : 12/16/2021 Last updated : 01/05/2022 # System topics in Azure Event Grid A system topic in Event Grid represents one or more events published by Azure services such as Azure Storage and Azure Event Hubs. For example, a system topic may represent **all blob events** or only **blob created** and **blob deleted** events published for a **specific storage account**. In this example, when a blob is uploaded to the storage account, the Azure Storage service publishes a **blob created** event to the system topic in Event Grid, which then forwards the event to topic's [subscribers](event-handlers.md) that receive and process the event. > [!NOTE]
-> Only Azure services can publish events to system topics. Therefore, you don't get an endpoint or access keys that you can use to publish events like you do for custom topics or domains.
+> - Only Azure services can publish events to system topics. Therefore, you don't get an endpoint or access keys that you can use to publish events like you do for [custom topics](custom-topics.md) or [event domains](event-domains.md).
## Azure services that support system topics
-Here is the current list of Azure services that support creation of system topics on them.
+Here's the current list of Azure services that support creation of system topics on them.
- [Azure API Management](event-schema-api-management.md)
- [Azure App Configuration](event-schema-app-configuration.md)
In the past, a system topic was implicit and wasn't exposed for simplicity. Syst
- [Set up diagnostic logs for system topics](enable-diagnostic-logs-topic.md#enable-diagnostic-logs-for-event-grid-system-topics) - Set up alerts on publish and delivery failures
+> [!NOTE]
+> Azure Event Grid creates a system topic resource in the same Azure subscription that has the event source. For example, if you create a system topic for a storage account *ContosoStorage* in an Azure subscription *ContosoSubscription*, Event Grid creates the system topic in the *ContosoSubscription*. It's not possible to create a system topic in an Azure subscription that's different from the event source's Azure subscription.
+
## Lifecycle of system topics

You can create a system topic in two ways:

- Create an [event subscription on an Azure resource as an extension resource](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions/create-or-update), which automatically creates a system topic with the name in the format: `<Azure resource name>-<GUID>`. The system topic created in this way is automatically deleted when the last event subscription for the topic is deleted.
- Create a system topic for an Azure resource, and then create an event subscription for that system topic. When you use this method, you can specify a name for the system topic. The system topic isn't deleted automatically when the last event subscription is deleted. You need to manually delete it.
- When you use the Azure portal, you are always using this method. When you create an event subscription using the [**Events** page of an Azure resource](blob-event-quickstart-portal.md#subscribe-to-the-blob-storage), the system topic is created first and then the subscription for the topic is created. You can explicitly create a system topic first by using the [**Event Grid System Topics** page](create-view-manage-system-topics.md#create-a-system-topic) and then create a subscription for that topic.
+ When you use the Azure portal, you're always using this method. When you create an event subscription using the [**Events** page of an Azure resource](blob-event-quickstart-portal.md#subscribe-to-the-blob-storage), the system topic is created first and then the subscription for the topic is created. You can explicitly create a system topic first by using the [**Event Grid System Topics** page](create-view-manage-system-topics.md#create-a-system-topic) and then create a subscription for that topic.
-When you use [CLI](create-view-manage-system-topics-cli.md), [REST](/rest/api/eventgrid/version2021-12-01/event-subscriptions/create-or-update), or [Azure Resource Manager template](create-view-manage-system-topics-arm.md), you can choose either of the above methods. We recommend that you create a system topic first and then create a subscription on the topic, as this is the latest way of creating system topics.
+When you use [CLI](create-view-manage-system-topics-cli.md), [REST](/rest/api/eventgrid/version2021-12-01/event-subscriptions/create-or-update), or [Azure Resource Manager template](create-view-manage-system-topics-arm.md), you can choose either of the above methods. We recommend that you create a system topic first and then create a subscription on the topic, as it's the latest way of creating system topics.
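As a sketch of this recommended flow with the Azure CLI (the resource names and webhook endpoint below are placeholders), you'd create the system topic first and then subscribe to it:

```azurecli
# Create a system topic for an existing storage account (placeholder names).
az eventgrid system-topic create \
  --resource-group ContosoResourceGroup \
  --name ContosoStorageSystemTopic \
  --location westus2 \
  --topic-type Microsoft.Storage.StorageAccounts \
  --source /subscriptions/<subscription-id>/resourceGroups/ContosoResourceGroup/providers/Microsoft.Storage/storageAccounts/contosostorage

# Then create an event subscription on that system topic.
az eventgrid system-topic event-subscription create \
  --resource-group ContosoResourceGroup \
  --system-topic-name ContosoStorageSystemTopic \
  --name ContosoEventSubscription \
  --endpoint <your-webhook-endpoint>
```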
### Failure to create system topics
-The system topic creation fails if you have set up Azure policies in such a way that the Event Grid service can't create it. For example, you may have a policy that allows creation of only certain types of resources (for example: Azure Storage, Azure Event Hubs, etc.) in the subscription.
+The system topic creation fails if you've set up Azure policies in such a way that the Event Grid service can't create it. For example, you may have a policy that allows creation of only certain types of resources (such as Azure Storage or Azure Event Hubs) in the subscription.
In such cases, event flow functionality is preserved. However, metrics and diagnostic functionalities of system topics will be unavailable.
event-hubs Authenticate Shared Access Signature https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/authenticate-shared-access-signature.md
Title: Authenticate access to Azure Event Hubs with shared access signatures description: This article shows you how to authenticate access to Event Hubs resources using shared access signatures. Previously updated : 07/26/2021 Last updated : 01/05/2022 ms.devlang: csharp, java, javascript, php
private static string createToken(string resourceUri, string keyName, string key
} ```
+#### PowerShell
+
+```azurepowershell-interactive
+[Reflection.Assembly]::LoadWithPartialName("System.Web")| out-null
+$URI="myNamespace.servicebus.windows.net/myEventHub"
+$Access_Policy_Name="RootManageSharedAccessKey"
+$Access_Policy_Key="myPrimaryKey"
+#Token expires now+300
+$Expires=([DateTimeOffset]::Now.ToUnixTimeSeconds())+300
+$SignatureString=[System.Web.HttpUtility]::UrlEncode($URI)+ "`n" + [string]$Expires
+$HMAC = New-Object System.Security.Cryptography.HMACSHA256
+$HMAC.key = [Text.Encoding]::ASCII.GetBytes($Access_Policy_Key)
+$Signature = $HMAC.ComputeHash([Text.Encoding]::ASCII.GetBytes($SignatureString))
+$Signature = [Convert]::ToBase64String($Signature)
+$SASToken = "SharedAccessSignature sr=" + [System.Web.HttpUtility]::UrlEncode($URI) + "&sig=" + [System.Web.HttpUtility]::UrlEncode($Signature) + "&se=" + $Expires + "&skn=" + $Access_Policy_Name
+$SASToken
+```
+
+#### BASH
+
+```bash
+get_sas_token() {
+ local EVENTHUB_URI=$1
+ local SHARED_ACCESS_KEY_NAME=$2
+ local SHARED_ACCESS_KEY=$3
+ local EXPIRY=${EXPIRY:=$((60 * 60 * 24))} # Default token expiry is 1 day
+
+ local ENCODED_URI=$(echo -n $EVENTHUB_URI | jq -s -R -r @uri)
+ local TTL=$(($(date +%s) + $EXPIRY))
+ local UTF8_SIGNATURE=$(printf "%s\n%s" $ENCODED_URI $TTL | iconv -t utf8)
+
+ local HASH=$(echo -n "$UTF8_SIGNATURE" | openssl sha256 -hmac $SHARED_ACCESS_KEY -binary | base64)
+ local ENCODED_HASH=$(echo -n $HASH | jq -s -R -r @uri)
+
+ echo -n "SharedAccessSignature sr=$ENCODED_URI&sig=$ENCODED_HASH&se=$TTL&skn=$SHARED_ACCESS_KEY_NAME"
+}
+```
+ ## Authenticating Event Hubs publishers with SAS An event publisher defines a virtual endpoint for an event hub. The publisher can only be used to send messages to an event hub and not receive messages.
All tokens are assigned with SAS keys. Typically, all tokens are signed with the
For example, to define authorization rules scoped down to only sending/publishing to Event Hubs, you need to define a send authorization rule. This can be done at the namespace level or scoped more granularly to a particular entity (an event hub instance or a topic). A client or an application that is scoped with such granular access is called an Event Hubs publisher. To do so, follow these steps:

1. Create a SAS key on the entity you want to publish to, and assign the **send** scope on it. For more information, see [Shared access authorization policies](authorize-access-shared-access-signature.md#shared-access-authorization-policies).
-2. Generate a SAS token with an expiry time for a specific publisher by using the key generated in step1.
-
- ```csharp
- var sasToken = SharedAccessSignatureTokenProvider.GetPublisherSharedAccessSignature(
- new Uri("Service-Bus-URI"),
- "eventub-name",
- "publisher-name",
- "sas-key-name",
- "sas-key",
- TimeSpan.FromMinutes(30));
- ```
+2. Generate a SAS token with an expiry time for a specific publisher by using the key generated in step 1. For the sample code, see [Generating a signature(token) from a policy](#generating-a-signaturetoken-from-a-policy).
3. Provide the token to the publisher client, which can only send to the entity and the publisher that token grants access to. Once the token expires, the client loses its access to send/publish to the entity.
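Using the Bash function shown earlier, a publisher-scoped token targets the publisher's virtual endpoint rather than the event hub itself; the names below are placeholders:

```bash
# Token scoped to a single publisher's virtual endpoint (placeholder names).
get_sas_token "myNamespace.servicebus.windows.net/myEventHub/publishers/device-01" "SendPolicy" "<send-policy-key>"
```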
hdinsight Hdinsight Hadoop Oms Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial.md
To disable, the use the [`az hdinsight monitor disable`](/cli/azure/hdinsight/mo
```azurecli az hdinsight monitor disable --name $cluster --resource-group $resourceGroup ```
-## <a name="oms-with-firewall">Prerequisites for clusters behind a firewall</a>
+## <a name="oms-with-firewall"></a>Prerequisites for clusters behind a firewall
To successfully set up Azure Monitor integration with HDInsight behind a firewall, some customers may need to enable the following endpoints:
HDInsight support cluster auditing with Azure Monitor logs, by importing the fol
## Update the Log Analytics (OMS) Agent used by HDInsight Azure Monitor Integration
-When Azure Monitor integration is enabled on a cluster, the Log Analytics agent, or Operations Management Suite (OMS) Agent, is installed on the cluster and is not updated unless you disable and re-enable Azure Monitor Integration. Complete the following steps if you need to update the OMS Agent on the cluster. If you are behind a firewall you may need to complete the [Prerequisites for clusters behind a firewall](#oms-with-firewall) before completing these steps.
+When Azure Monitor integration is enabled on a cluster, the Log Analytics agent, or Operations Management Suite (OMS) Agent, is installed on the cluster and is not updated unless you disable and re-enable Azure Monitor integration. Complete the following steps if you need to update the OMS Agent on the cluster. If you are behind a firewall, you may need to complete the [Prerequisites for clusters behind a firewall](hdinsight-hadoop-oms-log-analytics-tutorial.md?tabs=previous#oms-with-firewall) before completing these steps.
1. From the [Azure portal](https://portal.azure.com/), select your cluster. The cluster is opened in a new portal page. 1. From the left, under **Monitoring**, select **Azure Monitor**.
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-release-notes.md
The OS versions for this release are:
HDInsight 4.0 image has been updated to mitigate Log4j vulnerability as described in [MicrosoftΓÇÖs Response to CVE-2021-44228 Apache Log4j 2.](https://msrc-blog.microsoft.com/2021/12/11/microsofts-response-to-cve-2021-44228-apache-log4j2/) > [!Note]
-> * Any new HDInsight 4.0 clusters created post 27 December 2021 00:00 UTC, need to be patched/rebooted.
+> * Any HDI 4.0 clusters created post 27 Dec 2021 00:00 UTC are created with an updated version of the image which mitigates the log4j vulnerabilities. Hence, customers need not patch/reboot these clusters.
> * For new HDInsight 4.0 clusters created between 16 Dec 2021 01:15 UTC and 27 Dec 2021 00:00 UTC, for HDInsight 3.6 clusters, and for clusters in pinned subscriptions created after 16 Dec 2021, the patch is auto-applied within the hour in which the cluster is created. However, customers must then reboot their nodes for the patching to complete (except for Kafka Management nodes, which are automatically rebooted).
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/fhir-rest-api-capabilities.md
Previously updated : 01/03/2022 Last updated : 01/05/2022
Azure API for FHIR supports create, conditional create, update, and conditional
## Delete and Conditional Delete
-The FHIR service offers two delete types. There is [Delete](https://www.hl7.org/fhir/http.html#delete), which is also know as Hard + Soft Delete, and [Conditional Delete](https://www.hl7.org/fhir/http.html#3.1.0.7.1).
+Azure API for FHIR offers two delete types. There is [Delete](https://www.hl7.org/fhir/http.html#delete), which is also known as Hard + Soft Delete, and [Conditional Delete](https://www.hl7.org/fhir/http.html#3.1.0.7.1).
### Delete (Hard + Soft Delete)
-Delete defined by the FHIR specification requires that after deleting a resource, subsequent non-version specific reads of a resource returns a 410 HTTP status code. Therefore, the resource is no longer found through searching. Additionally, the FHIR service enables you to fully delete (including all history) the resource. To fully delete the resource, you can pass a parameter settings `hardDelete` to true `(DELETE {{FHIR_URL}}/{resource}/{id}?hardDelete=true)`. If you don't pass this parameter or set `hardDelete` to false, the historic versions of the resource will still be available.
+Delete defined by the FHIR specification requires that after deleting a resource, subsequent non-version-specific reads of the resource return a 410 HTTP status code. Therefore, the resource is no longer found through searching. Additionally, Azure API for FHIR enables you to fully delete (including all history) the resource. To fully delete the resource, you can pass the parameter `hardDelete` set to true `(DELETE {{FHIR_URL}}/{resource}/{id}?hardDelete=true)`. If you don't pass this parameter or set `hardDelete` to false, the historic versions of the resource will still be available.
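For example, a hard-delete request for a hypothetical Patient resource would look like this:

```rest
DELETE {{FHIR_URL}}/Patient/example-patient-id?hardDelete=true
```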
> [!NOTE]
-> If you only want to delete the history, the FHIR service supports a custom operation called `$purge-history`. This operation allows you to delete the history off of a resource.
+> If you only want to delete the history, Azure API for FHIR supports a custom operation called `$purge-history`. This operation allows you to delete the history of a resource.
### Conditional Delete
To delete multiple resources, include `_count=100` parameter. This parameter wil
### Recovery of deleted files
-If you don't use the hard delete parameter, then the record(s) in the FHIR service should still exist. The record(s) can be found by doing a history search on the resource and looking for the last version with data.
+If you don't use the hard delete parameter, then the record(s) in Azure API for FHIR should still exist. The record(s) can be found by doing a history search on the resource and looking for the last version with data.
If the ID of the resource that was deleted is known, use the following URL pattern:
After you've found the record you want to restore, use the `PUT` operation to re
## Patch and Conditional Patch
-Patch is a valuable RESTful operation when you need to update only a portion of the FHIR resource. Using Patch allows you to specify the element(s) that you want to update in the resource without having to update the entire record. FHIR defines three types of ways to Patch resources in FHIR: JSON Patch, XML Patch, and FHIR Path Patch. The FHIR service supports JSON Patch and Conditional JSON Patch (which allows you to Patch a resource based on a search criteria instead of an ID). To walk through some examples of using JSON Patch, refer to the sample [REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PatchRequests.http).
+Patch is a valuable RESTful operation when you need to update only a portion of the FHIR resource. Using Patch allows you to specify the element(s) that you want to update in the resource without having to update the entire record. FHIR defines three ways to patch resources: JSON Patch, XML Patch, and FHIR Path Patch. Azure API for FHIR supports JSON Patch and Conditional JSON Patch (which allows you to patch a resource based on a search criteria instead of an ID). To walk through some examples of using JSON Patch, refer to the sample [REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PatchRequests.http).
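As an illustrative sketch (the resource ID and patched element are hypothetical), a JSON Patch request replaces just the targeted element:

```rest
PATCH {{FHIR_URL}}/Patient/example-patient-id
Content-Type: application/json-patch+json

[
  { "op": "replace", "path": "/birthDate", "value": "1990-01-01" }
]
```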
> [!NOTE] > When using `PATCH` against STU3, and if you are requesting a History bundle, the patched resource's `Bundle.entry.request.method` is mapped to `PUT`. This is because STU3 doesn't contain a definition for the `PATCH` verb in the [HTTPVerb value set](http://hl7.org/fhir/STU3/valueset-http-verb.html).
iot-dps Quick Create Simulated Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/quick-create-simulated-device-symm-key.md
This quickstart demonstrates a solution for a Windows-based workstation. However
::: zone pivot="programming-language-csharp"
-* Install [.NET Core 2.1 SDK](https://dotnet.microsoft.com/download) or later on your Windows-based machine. You can use the following command to check your version.
+* Install [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) or later on your Windows-based machine. You can use the following command to check your version.
```cmd dotnet --info
To update and run the provisioning sample with your device information:
Provisioning Status: PROV_DEVICE_REG_STATUS_CONNECTED Provisioning Status: PROV_DEVICE_REG_STATUS_ASSIGNING
- Registration Information received from service:
- test-docs-hub.azure-devices.net, deviceId: device-007
+ Registration Information received from service:
+ test-docs-hub.azure-devices.net, deviceId: device-007
Press enter key to exit: ```
iot-dps Quick Create Simulated Device Tpm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/quick-create-simulated-device-tpm.md
# Quickstart: Provision a simulated TPM device
-In this quickstart, you'll create a TPM simulated device on your Windows machine. After you've configured your device, you'll then provision it to your IoT hub using the Azure IoT Hub Device Provisioning Service. Sample code will then be used to help enroll the device with a Device Provisioning Service instance
+In this quickstart, you'll create a simulated device on your Windows machine. The simulated device will be configured to use a [Trusted Platform Module (TPM) attestation](concepts-tpm-attestation.md) mechanism for authentication. After you've configured your device, you'll provision it to your IoT hub using the Azure IoT Hub Device Provisioning Service. Sample code will then be used to help enroll the device with a Device Provisioning Service instance.
If you're unfamiliar with the process of provisioning, review the [provisioning](about-iot-dps.md#provisioning-process) overview. Also make sure you've completed the steps in [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md) before continuing.
The following prerequisites are for a Windows development environment. For Linux
::: zone pivot="programming-language-csharp"
-* Install [.NET Core 2.1 SDK](https://dotnet.microsoft.com/download) or later on your Windows-based machine. You can use the following command to check your version.
+* A TPM 2.0 hardware security module on your Windows-based machine.
+
+* Install [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) or later on your Windows-based machine. You can use the following command to check your version.
```bash dotnet --info
In this section, you'll prepare a development environment used to build the [Azu
::: zone-end + ## Build and run the TPM device simulator In this section, you'll build and run the TPM simulator. This simulator listens over a socket on ports 2321 and 2322. Do not close the command window. You'll need to keep this simulator running until the end of this quickstart. + ::: zone pivot="programming-language-ansi-c" 1. Run the following command to build Azure IoT C SDK that includes the TPM device simulator sample code. A Visual Studio solution for the simulated device is generated in the `cmake` directory. This sample provides a TPM [attestation mechanism](concepts-service.md#attestation-mechanism) via Shared Access Signature (SAS) Token authentication.
In this section, you'll build and run the TPM simulator. This simulator listens
::: zone-end -
-1. In the main menu of your Device Provisioning Service, select **Overview**.
-
-2. Copy the **ID Scope** value.
-
- ![Copy provisioning service Scope ID from the portal blade](./media/quick-create-simulated-device-tpm/extract-dps-endpoints-csharp.png)
-
-3. In a command prompt, change directories to the project directory for the TPM device provisioning sample.
-
- ```cmd
- cd .\azure-iot-samples-csharp\provisioning\Samples\device\TpmSample
- ```
-
-4. Type the following command to build and run the TPM device provisioning sample (replace `<IDScope>` with the ID Scope for your provisioning service).
-
- ```cmd
- dotnet run <IDScope>
- ```
-
- >[!NOTE]
- >This command will launch the TPM chip simulator in a separate command prompt. On Windows, you may encounter a Windows Security Alert that asks whether you want to allow `Simulator.exe` to communicate on public networks. For the purposes of this sample, you may cancel the request.
-
-5. The original command window displays the **_Endorsement key_**, the **_Registration ID_**, and a suggested **_Device ID_** needed for device enrollment. Take note of these values. You'll use these value to create an individual enrollment in your Device Provisioning Service instance.
-
- > [!NOTE]
- > Do not confuse the window that contains command output with the window that contains output from the TPM simulator. You may have to select the original command window to bring it to the foreground.
-- ::: zone pivot="programming-language-nodejs" 1. Go to the GitHub root folder.
In this section, you'll build and run the TPM simulator. This simulator listens
::: zone-end <a id="simulatetpm"></a> ## Read cryptographic keys from the TPM device ++ In this section, you'll build and execute a sample that reads the endorsement key and registration ID from the TPM simulator you left running, and is still listening over ports 2321 and 2322. These values will be used for device enrollment with your Device Provisioning Service instance. ::: zone-end
In this section, you'll build and execute a sample that reads the endorsement ke
::: zone-end +
+In this section, you'll build and execute a sample that reads the endorsement key from your TPM 2.0 hardware security module. This value will be used for device enrollment with your Device Provisioning Service instance.
+
+1. In a command prompt, change directories to the project directory for the TPM device provisioning sample.
+
+ ```cmd
+ cd .\azure-iot-samples-csharp\provisioning\Samples\device\TpmSample
+ ```
+
+2. Type the following command to build and run the TPM device provisioning sample. Copy the endorsement key returned from your TPM 2.0 hardware security module to use later when enrolling your device.
+
+ ```cmd
+ dotnet run -- -e
+ ```
++ <a id="portalenrollment"></a> ## Create a device enrollment entry + 1. Sign in to the [Azure portal](https://portal.azure.com). 2. On the left-hand menu or on the portal page, select **All resources**.
In this section, you'll build and execute a sample that reads the endorsement ke
7. Select **Save**. ++
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. On the left-hand menu or on the portal page, select **All resources**.
+
+3. Select your Device Provisioning Service.
+
+4. In the **Settings** menu, select **Manage enrollments**.
+
+5. At the top of the page, select **+ Add individual enrollment**.
+
+6. In the **Add Enrollment** panel, enter the following information:
+
+ * Select **TPM** as the identity attestation *Mechanism*.
+ * Enter the *Endorsement key* you retrieved earlier from your HSM.
+ * Enter a unique *Registration ID* for your device. You will also use this registration ID when registering your device, so make a note of it for later.
+ * Select an IoT hub linked with your provisioning service.
+ * Optionally, you may provide the following information:
+ * Enter a unique *Device ID* (you can use the suggested **test-docs-device** or provide your own). Make sure to avoid sensitive data while naming your device. If you choose not to provide one, the registration ID will be used to identify the device instead.
+ * Update the **Initial device twin state** with the desired initial configuration for the device.
+ * Once complete, press the **Save** button.
+
+ ![Enter device enrollment information in the portal](./media/quick-create-simulated-device-tpm/enter-device-enrollment.png)
+
+7. Select **Save**.
++ ## Register the device In this section, you'll configure sample code to use the [Advanced Message Queuing Protocol (AMQP)](https://wikipedia.org/wiki/Advanced_Message_Queuing_Protocol) to send the device's boot sequence to your Device Provisioning Service instance. This boot sequence causes the device to be registered to an IoT hub linked to the Device Provisioning Service instance.
In this section, you'll configure sample code to use the [Advanced Message Queui
static const char* id_scope = "0ne00002193"; ```
-6. Find the definition for the `main()` function in the same file. Make sure the `hsm_type` variable is set to `SECURE_DEVICE_TYPE_TPM` instead of `SECURE_DEVICE_TYPE_X509` as shown below.
+6. Find the definition for the `main()` function in the same file. Make sure the `hsm_type` variable is set to `SECURE_DEVICE_TYPE_TPM` as shown below.
```c SECURE_DEVICE_TYPE hsm_type;
In this section, you'll configure sample code to use the [Advanced Message Queui
::: zone-end +
+1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
+
+2. Copy the **_ID Scope_** value.
+
+ ![Copy provisioning service Scope ID from the portal blade](./media/quick-create-simulated-device-tpm/extract-dps-endpoints-csharp.png)
+
+3. In a command prompt, change directories to the project directory for the TPM device provisioning sample.
+
+ ```cmd
+ cd .\azure-iot-samples-csharp\provisioning\Samples\device\TpmSample
+ ```
+
+4. Run the following command to register your device. Replace `<IdScope>` with the value for the DPS you just copied and `<RegistrationId>` with the value you used when creating the device enrollment.
+
+ ```cmd
+ dotnet run -- -s <IdScope> -r <RegistrationId>
+ ```
+
+ If the device registration was successful, you'll see the following messages:
+
+ ```cmd/sh
+ Initializing security using the local TPM...
+ Initializing the device provisioning client...
+ Initialized for registration Id <RegistrationId>.
+ Registering with the device provisioning service...
+ Registration status: Assigned.
+ Device <RegistrationId> registered to <HubName>.azure-devices.net.
+ Creating TPM authentication for IoT Hub...
+ Testing the provisioned device with IoT Hub...
+ Sending a telemetry message...
+ Finished.
+ ```
++ ::: zone pivot="programming-language-nodejs" 1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/quick-create-simulated-device-x509.md
The following prerequisites are for a Windows development environment. For Linux
::: zone pivot="programming-language-csharp"
-* Install [.NET Core 3.1 SDK or later](https://dotnet.microsoft.com/download) or later on your Windows-based machine. You can use the following command to check your version.
+* Install [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) or later on your Windows-based machine. You can use the following command to check your version.
```bash dotnet --info
The following prerequisites are for a Windows development environment. For Linux
::: zone pivot="programming-language-nodejs"
-* Install [Node.js v4.0 or above](https://nodejs.org) or later on your machine.
+* Install [Node.js v4.0 or above](https://nodejs.org) on your machine.
* Install [OpenSSL](https://www.openssl.org/) on your machine and ensure that it's added to the environment variables accessible to the command window. This library can either be built and installed from source or downloaded and installed from a [third party](https://wiki.openssl.org/index.php/Binaries) such as [this](https://sourceforge.net/projects/openssl/).
In addition to the tooling in the C SDK, the [Group certificate verification sam
- End Nesting Level 1 - Provider = Microsoft Strong Cryptographic Provider Signature test passed
- CertUtil: -dump command completed successfully.
+ CertUtil: -dump command completed successfully.
``` ::: zone-end
A test certificate file (*python-device.pem*) and private key file (*python-devi
java -jar ./provisioning-x509-cert-generator-{version}-with-deps.jar ```
-3. Enter **N** for _Do you want to input common name_.
+3. Enter **N** for _Do you want to input common name_.
4. Copy the output of `Client Cert` to the clipboard, starting from *--BEGIN CERTIFICATE--* through *--END CERTIFICATE--*.
A test certificate file (*python-device.pem*) and private key file (*python-devi
5. Create a file named *_X509individual.pem_* on your Windows machine.
-6. Open *_X509individual.pem_* in an editor of your choice, and copy the clipboard contents to this file.
+6. Open *_X509individual.pem_* in an editor of your choice, and copy the clipboard contents to this file.
7. Save the file and close your editor.
This article demonstrates an individual enrollment for a single device to be pro
6. In the **Add Enrollment** panel, enter the following information: * Select **X.509** as the identity attestation *Mechanism*.
- * Under the *Primary certificate .pem or .cer file*, choose *Select a file* to select the certificate file *X509individual.pem* created in the previous steps.
+ * Under the *Primary certificate .pem or .cer file*, choose *Select a file* to select the certificate file *X509individual.pem* created in the previous steps.
* Optionally, you may provide the following information: * Select an IoT hub linked with your provisioning service.
- * Enter a unique device ID. Make sure to avoid sensitive data while naming your device.
+ * Enter a unique device ID. Make sure to avoid sensitive data while naming your device.
* Update the **Initial device twin state** with the desired initial configuration for the device.
-
+ :::image type="content" source="./media/quick-create-simulated-device-x509/device-enrollment.png" alt-text="Add device as individual enrollment with X.509 attestation."::: ::: zone-end
-
+ 7. Select **Save**. You'll be returned to **Manage enrollments**. 8. Select **Individual Enrollments**. Your X.509 enrollment entry should appear in the registration table.
In this section, we'll update the sample code to send the device's boot sequence
3. In Visual Studio's *Solution Explorer* window, navigate to the **Provision\_Samples** folder. Expand the sample project named **prov\_dev\_client\_sample**. Expand **Source Files**, and open **prov\_dev\_client\_sample.c**.
-4. Find the `id_scope` constant, and replace the value with your **ID Scope** value that you copied earlier.
+4. Find the `id_scope` constant, and replace the value with your **ID Scope** value that you copied earlier.
```c static const char* id_scope = "0ne00002193";
In this section, we'll update the sample code to send the device's boot sequence
::: zone pivot="programming-language-nodejs"
-1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
+1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
2. Copy the **_ID Scope_** and **Global device endpoint** values.
In this section, we'll update the sample code to send the device's boot sequence
5. Edit the **register\_x509.js** file with the following changes: * Replace `provisioning host` with the **_Global Device Endpoint_** noted in **Step 1** above.
- * Replace `id scope` with the **_ID Scope_** noted in **Step 1** above.
+ * Replace `id scope` with the **_ID Scope_** noted in **Step 1** above.
* Replace `registration id` with the **_Registration ID_** noted in the previous section. * Replace `cert filename` and `key filename` with the files you copied in **Step 2** above.
In this section, we'll update the sample code to send the device's boot sequence
```cmd/sh node register_x509.js
- ```
+ ```
>[!TIP] >The [Azure IoT Hub Node.js Device SDK](https://github.com/Azure/azure-iot-sdk-node) provides an easy way to simulate a device. For more information, see [Device concepts](./concepts-service.md).
The Python provisioning sample, [provision_x509.py](https://github.com/Azure/azu
| Variable name | Description | | :- | :- |
-| `PROVISIONING_HOST` | This value is the global endpoint used for connecting to your DPS resource |
-| `PROVISIONING_IDSCOPE` | This value is the ID Scope for your DPS resource |
-| `DPS_X509_REGISTRATION_ID` | This value is the ID for your device. It must also match the subject name on the device certificate |
-| `X509_CERT_FILE` | Your device certificate filename |
+| `PROVISIONING_HOST` | This value is the global endpoint used for connecting to your DPS resource |
+| `PROVISIONING_IDSCOPE` | This value is the ID Scope for your DPS resource |
+| `DPS_X509_REGISTRATION_ID` | This value is the ID for your device. It must also match the subject name on the device certificate |
+| `X509_CERT_FILE` | Your device certificate filename |
| `X509_KEY_FILE` | The private key filename for your device certificate |
-| `PASS_PHRASE` | The pass phrase you used to encrypt the certificate and private key file (`1234`). |
+| `PASS_PHRASE` | The pass phrase you used to encrypt the certificate and private key file (`1234`). |
1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
The Python provisioning sample, [provision_x509.py](https://github.com/Azure/azu
CN=Python-device-01 Name Hash(sha1): 1dd88de40e9501fb64892b698afe12d027011000 Name Hash(md5): a62c784820daa931b9d3977739b30d12
-
+ NotBefore: 1/29/2021 7:05 PM NotAfter: 1/29/2022 7:05 PM
-
+ Subject: ===> CN=Python-device-01 <=== Name Hash(sha1): 1dd88de40e9501fb64892b698afe12d027011000
The Python provisioning sample, [provision_x509.py](https://github.com/Azure/azu
``` * Use the following format when copying/pasting your certificate and private key:
-
+ ```java private static final String leafPublicPem = "--BEGIN CERTIFICATE--\n" + "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n" +
key-vault Security Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/security-features.md
Azure Private Link Service enables you to access Azure Key Vault and Azure hoste
- Despite known vulnerabilities in TLS protocol, there is no known attack that would allow a malicious agent to extract any information from your key vault when the attacker initiates a connection with a TLS version that has vulnerabilities. The attacker would still need to authenticate and authorize itself, and as long as legitimate clients always connect with recent TLS versions, there is no way that credentials could have been leaked from vulnerabilities at old TLS versions. > [!NOTE]
-> For Azure Key Vault, ensure that the application accessing the Keyvault service should be running on a planform that supports TLS 1.2 or recent version. If the application is dependent on .Net framework, it should be updated as well. You can also make the registry changes mentioned in [this article](/troubleshoot/azure/active-directory/enable-support-tls-environment) to explicitly enable the use of TLS 1.2 at OS level and for .Net framework.
+> For Azure Key Vault, ensure that the application accessing the Key Vault service runs on a platform that supports TLS 1.2 or a more recent version. If the application depends on the .NET Framework, it should be updated as well. You can also make the registry changes mentioned in [this article](/troubleshoot/azure/active-directory/enable-support-tls-environment) to explicitly enable the use of TLS 1.2 at the OS level and for the .NET Framework.
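As a sketch, assuming the standard `SystemDefaultTlsVersions` and `SchUseStrongCrypto` switches covered in the linked article, the registry changes for .NET Framework 4.x apps can be applied like this:

```powershell
# Opt .NET Framework 4.x apps into the OS default TLS versions (sketch; see the linked article for full guidance).
$paths = 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319',
         'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319'
foreach ($path in $paths) {
    Set-ItemProperty -Path $path -Name 'SystemDefaultTlsVersions' -Value 1 -Type DWord
    Set-ItemProperty -Path $path -Name 'SchUseStrongCrypto' -Value 1 -Type DWord
}
```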
## Key Vault authentication options
You should also take regular back ups of your vault on update/delete/create of o
- [Azure Key Vault security baseline](security-baseline.md) - [Azure Key Vault best practices](security-baseline.md) - [Virtual network service endpoints for Azure Key Vault](overview-vnet-service-endpoints.md)-- [Azure RBAC: Built-in roles](../../role-based-access-control/built-in-roles.md)
+- [Azure RBAC: Built-in roles](../../role-based-access-control/built-in-roles.md)
load-balancer Skus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/skus.md
# Azure Load Balancer SKUs
-Azure Load Balancer has two SKUs.
+Azure Load Balancer has three SKUs.
## <a name="skus"></a> SKU comparison Azure Load Balancer has 3 SKUs - Basic, Standard, and Gateway. Each SKU is catered towards a specific scenario and have differences in scale, features, and pricing.
Azure Load Balancer has 3 SKUs - Basic, Standard, and Gateway. Each SKU is cater
To compare and understand the differences between Basic and Standard SKU, see the following table. For more information, see [Azure Standard Load Balancer overview](./load-balancer-overview.md). For information on Gateway SKU - catered for third-party network virtual appliances (NVAs) currently in preview, see [Gateway Load Balancer overview](gateway-overview.md) >[!NOTE]
-> Microsoft recommends Standard load balancer.
-Standalone VMs, availability sets, and virtual machine scale sets can be connected to only one SKU, never both. Load balancer and the public IP address SKU must match when you use them with public IP addresses. Load balancer and public IP SKUs aren't mutable.
+> Microsoft recommends Standard load balancer. See [Upgrade from Basic to Standard Load Balancer](upgrade-basic-standard.md) for a guided instruction on upgrading SKUs along with an upgrade script.
+>
+> Standalone VMs, availability sets, and virtual machine scale sets can be connected to only one SKU, never both. Load balancer and the public IP address SKU must match when you use them with public IP addresses. Load balancer and public IP SKUs aren't mutable.
| | Standard Load Balancer | Basic Load Balancer | | | | |
For more information, see [Load balancer limits](../azure-resource-manager/manag
## Limitations

-- You can [upgrade Load Balancer SKUs](upgrade-basic-standard.md).
- A standalone virtual machine resource, availability set resource, or virtual machine scale set resource can reference one SKU, never both.
- [Move operations](../azure-resource-manager/management/move-resource-group-and-subscription.md):
  - Resource group move operations (within same subscription) **are supported** for Standard Load Balancer and Standard Public IP.
load-testing Tutorial Cicd Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/tutorial-cicd-azure-pipelines.md
You'll learn how to:
To get started, you need a GitHub repository with the sample web application. You'll use this repository to configure an Azure Pipelines workflow to run the load test.
-1. Open a browser and go to the sample application's [source GitHub repository](https://github.com/Azure-Samples/nodejs-appsvc-cosmosdb-bottleneck.git).
-
- The sample application is a Node.js app that consists of an Azure App Service web component and an Azure Cosmos DB database.
-
-1. Select **Fork** to fork the sample application's repository to your GitHub account.
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/fork-github-repo.png" alt-text="Screenshot that shows the button to fork the sample application's GitHub repo.":::
-
-## Configure the Apache JMeter script
The sample application's source repo includes an Apache JMeter script named *SampleApp.jmx*. This script makes three API calls on each test iteration:

* `add`: Carries out a data insert operation on Azure Cosmos DB for the number of visitors on the web app.
* `get`: Carries out a GET operation from Azure Cosmos DB to retrieve the count.
* `lasttimestamp`: Updates the time stamp since the last user went to the website.
-Update the Apache JMeter script with the URL of your sample web app:
-
-1. In your sample application's repository, open *SampleApp.jmx* for editing.
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/edit-jmx.png" alt-text="Screenshot that shows the button for editing the Apache JMeter test script.":::
-
-1. Search for `<stringProp name="HTTPSampler.domain">`.
-
- You'll see three instances of `<stringProp name="HTTPSampler.domain">` in the file.
-
-1. Replace all three instances of the value with the URL of your sample web app:
-
- ```xml
- <stringProp name="HTTPSampler.domain">your-app-name.azurewebsites.net</stringProp>
- ```
+1. Open a browser and go to the sample application's [source GitHub repository](https://github.com/Azure-Samples/nodejs-appsvc-cosmosdb-bottleneck.git).
- You'll deploy the sample application to an Azure App Service web app by using Azure Pipelines in the subsequent steps. For now, replace the placeholder text `your-app-name` in the previous XML snippet with a unique name that you want to provide to the App Service web app. You'll then use this same name to create the web app.
+ The sample application is a Node.js app that consists of an Azure App Service web component and an Azure Cosmos DB database.
- > [!IMPORTANT]
- > Don't include `https` or `http` in the sample application's URL.
+1. Select **Fork** to fork the sample application's repository to your GitHub account.
-1. Commit your changes to the main branch.
+ :::image type="content" source="./media/tutorial-cicd-azure-pipelines/fork-github-repo.png" alt-text="Screenshot that shows the button to fork the sample application's GitHub repo.":::
## Set up Azure Pipelines access permissions for Azure
To access Azure resources, create a service connection in Azure DevOps and use r
## Configure the Azure Pipelines workflow to run a load test
-In this section, you'll set up an Azure Pipelines workflow that triggers the load test.
+In this section, you'll set up an Azure Pipelines workflow that triggers the load test. The sample application repository contains a pipeline definition file. The pipeline first deploys the sample web application to Azure App Service, and then invokes the load test. The pipeline uses an environment variable to pass the URL of the web application to the Apache JMeter script.
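Inside the JMeter script, such an environment variable can be read with JMeter's built-in `__groovy` function; a sketch, assuming the pipeline exports a variable named `webapp` that holds the web app's hostname:

```xml
<!-- Hypothetical sampler domain read from the "webapp" environment variable -->
<stringProp name="HTTPSampler.domain">${__groovy(System.getenv("webapp"))}</stringProp>
```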
First, you'll install the Azure Load Testing extension from the Azure DevOps Marketplace, create a new pipeline, and then connect it to the sample application's forked repository.
First, you'll install the Azure Load Testing extension from the Azure DevOps Mar
|`<Azure subscriptionId>` | Your Azure subscription ID. | |`<Name of your load test resource>` | The name of your Azure Load Testing resource. | |`<Name of your load test resource group>` | The name of the resource group that contains the Azure Load Testing resource. |
-
- > [!IMPORTANT]
- > The name of Azure web app should match the name that you used for the endpoint URL in the *SampleApp.jmx* test script.
:::image type="content" source="./media/tutorial-cicd-azure-pipelines/create-pipeline-review.png" alt-text="Screenshot that shows the Azure Pipelines Review tab when you're creating a pipeline.":::
In this tutorial, you'll reconfigure the sample application to accept only secur
1. Commit the changes to the *config.json* file.
-1. Edit the *SampleApp_Secrets.jmx* file.
-
-1. Search for `<stringProp name="HTTPSampler.domain">`.
-
- You'll see three instances of `<stringProp name="HTTPSampler.domain">` in the file.
-
-1. Replace all three instances of the value with the URL of your sample web app:
-
- ```xml
- <stringProp name="HTTPSampler.domain">{your-app-name}.azurewebsites.net</stringProp>
- ```
-
- You'll deploy the sample application to an Azure App Service web app by using the GitHub Actions workflow. In the previous XML snippet, replace the placeholder text `{your-app-name}` with the unique name of the App Service web app.
-
- > [!IMPORTANT]
- > Don't include `https` or `http` in the sample application's URL.
-
-1. Save and commit the Apache JMeter script.
-
1. Go to the **Pipelines** page, select your pipeline definition, and then select **Edit**.

   :::image type="content" source="./media/tutorial-cicd-azure-pipelines/edit-pipeline.png" alt-text="Screenshot that shows selections for editing a pipeline definition.":::
load-testing Tutorial Cicd Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/tutorial-cicd-github-actions.md
You'll learn how to:
To get started, you need a GitHub repository with the sample web application. You'll use this repository to configure a GitHub Actions workflow to run the load test.
-1. Open a browser and go to the sample application's [source GitHub repository](https://github.com/Azure-Samples/nodejs-appsvc-cosmosdb-bottleneck.git).
-
- The sample application is a Node.js app that consists of an Azure App Service web component and an Azure Cosmos DB database.
-
-1. Select **Fork** to fork the sample application's repository to your GitHub account.
-
- :::image type="content" source="./media/tutorial-cicd-github-actions/fork-github-repo.png" alt-text="Screenshot that shows the button to fork the sample application's GitHub repo.":::
-
-## Configure the Apache JMeter script
-
The sample application's source repo includes an Apache JMeter script named *SampleApp.jmx*. This script makes three API calls on each test iteration:

* `add`: Carries out a data insert operation on Azure Cosmos DB for the number of visitors on the web app.
* `get`: Carries out a GET operation from Azure Cosmos DB to retrieve the count.
* `lasttimestamp`: Updates the time stamp since the last user went to the website.
-Update the Apache JMeter script with the URL of your sample web app:
-
-1. In your sample application's repository, open *SampleApp.jmx* for editing.
-
- :::image type="content" source="./media/tutorial-cicd-github-actions/edit-jmx.png" alt-text="Screenshot that shows the button for editing the Apache JMeter test script.":::
-
-1. Search for `<stringProp name="HTTPSampler.domain">`.
-
- You'll see three instances of `<stringProp name="HTTPSampler.domain">` in the file.
-
-1. Replace all three instances of the value with the URL of your sample web app:
-
- ```xml
- <stringProp name="HTTPSampler.domain">your-app-name.azurewebsites.net</stringProp>
- ```
+1. Open a browser and go to the sample application's [source GitHub repository](https://github.com/Azure-Samples/nodejs-appsvc-cosmosdb-bottleneck.git).
- You'll deploy the sample application to an Azure App Service web app by using the GitHub Actions workflow in the subsequent steps. For now, replace the placeholder text `your-app-name` in the previous XML snippet with a unique name that you want to provide to the App Service web app. You'll then use this same name to create the web app.
+ The sample application is a Node.js app that consists of an Azure App Service web component and an Azure Cosmos DB database.
- > [!IMPORTANT]
- > Don't include `https` or `http` in the sample application's URL.
+1. Select **Fork** to fork the sample application's repository to your GitHub account.
-1. Commit your changes to the main branch.
+ :::image type="content" source="./media/tutorial-cicd-github-actions/fork-github-repo.png" alt-text="Screenshot that shows the button to fork the sample application's GitHub repo.":::
## Set up GitHub access permissions for Azure
To access Azure resources, you'll create an Azure Active Directory service princ
## Configure the GitHub Actions workflow to run a load test
-In this section, you'll set up a GitHub Actions workflow that triggers the load test.
+In this section, you'll set up a GitHub Actions workflow that triggers the load test. The sample application repository contains a workflow file *SampleApp.yaml*. The workflow first deploys the sample web application to Azure App Service, and then invokes the load test. The GitHub action uses an environment variable to pass the URL of the web application to the Apache JMeter script.
-To run a load test by using Azure Load Testing from a CI/CD workflow, you need a YAML configuration file. The sample application's repository contains the *SampleApp.yaml* file that contains the parameters for running the test.
+Update the *SampleApp.yaml* GitHub Actions workflow file to configure the parameters for running the load test.
1. Open the *.github/workflows/workflow.yml* GitHub Actions workflow file in your sample application's repository.
To run a load test by using Azure Load Testing from a CI/CD workflow, you need a
|`<your-azure-web-app>` | The name of the Azure App Service web app. |
|`<your-azure-load-testing-resource-name>` | The name of your Azure Load Testing resource. |
|`<your-azure-load-testing-resource-group-name>` | The name of the resource group that contains the Azure Load Testing resource. |
-
-
- > [!IMPORTANT]
- > The name of Azure web app should match the name that you used for the endpoint URL in the *SampleApp.jmx* test script.
-
+
```yaml
env:
  AZURE_WEBAPP_NAME: "<your-azure-web-app>"
To run a load test by using Azure Load Testing from a CI/CD workflow, you need a
```
1. Commit your changes directly to the main branch.
-
+ :::image type="content" source="./media/tutorial-cicd-github-actions/commit-workflow.png" alt-text="Screenshot that shows selections for committing changes to the GitHub Actions workflow file.":::

The commit will trigger the GitHub Actions workflow in your repository. You can verify that the workflow is running by going to the **Actions** tab.
In this tutorial, you'll reconfigure the sample application to accept only secur
1. Commit the changes to the *config.json* file.
-1. Edit the *SampleApp_Secrets.jmx* file.
-
-1. Search for `<stringProp name="HTTPSampler.domain">`.
-
- You'll see three instances of `<stringProp name="HTTPSampler.domain">` in the file.
-
-1. Replace all three instances of the value with the URL of your sample web app:
-
- ```xml
- <stringProp name="HTTPSampler.domain">{your-app-name}.azurewebsites.net</stringProp>
- ```
-
- You'll deploy the secure sample application to an Azure App Service web app by using the GitHub Actions workflow in subsequent steps. In the previous XML snippet, replace the placeholder text `{your-app-name}` with the unique name of the App Service web app. You'll then use this same name to create the web app.
-
- > [!IMPORTANT]
- > Don't include `https` or `http` in the sample application's URL.
-
-1. Save and commit the Apache JMeter script.
-
1. Add a new secret to your GitHub repository by selecting **Settings** > **Secrets** > **New repository secret**.

1. Enter **MY_SECRET** for **Name**, enter **1797669089** for **Value**, and then select **Add secret**.
load-testing Tutorial Identify Bottlenecks Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/tutorial-identify-bottlenecks-azure-portal.md
Now that you have the application deployed and running, you can run your first l
In this section, you'll create a load test by using a sample Apache JMeter test script.
-### Configure the Apache JMeter script
-
The sample application's source repo includes an Apache JMeter script named *SampleApp.jmx*. This script makes three API calls to the web app on each test iteration:

* `add`: Carries out a data insert operation on Azure Cosmos DB for the number of visitors on the web app.
The sample application's source repo includes an Apache JMeter script named *Sam
> [!NOTE]
> The sample Apache JMeter script requires two plugins: `Custom Thread Groups` and `Throughput Shaping Timer`. To open the script on your local Apache JMeter instance, you need to install both plugins. You can use the [Apache JMeter Plugins Manager](https://jmeter-plugins.org/install/Install/) to do this.
-To load test the sample web app that you deployed previously, you need to update the API URLs in the Apache JMeter script.
-
-1. Open the directory of the cloned sample app in Visual Studio Code:
-
- ```powershell
- cd nodejs-appsvc-cosmosdb-bottleneck
- code .
- ```
-
-1. Open *SampleApp.jmx*.
-
-1. Search for `<stringProp name="HTTPSampler.domain">`.
-
- You'll see three instances of `<stringProp name="HTTPSampler.domain">` in the file.
-
-1. Replace the value with the URL of the newly deployed sample application:
-
- ```xml
- <stringProp name="HTTPSampler.domain">your-app-name.azurewebsites.net</stringProp>
- ```
-
- Update the value in all three places. Don't include the `https://` prefix.
-
-1. Save your changes and close the file.
-
### Create the Azure Load Testing resource

The Load Testing resource is a top-level resource for your load-testing activities. This resource provides a centralized place to view and manage load tests, test results, and related artifacts.
To create a load test in the Load Testing resource for the sample app:
Optionally, you can select and upload additional Apache JMeter configuration files or other files that are referenced in the JMX file. For example, if your test script uses CSV data sets, you can upload the corresponding *.csv* file(s).
+1. On the **Parameters** tab, add a new environment variable. Enter *webapp* for the **Name** and *`<yourappname>.azurewebsites.net`* for the **Value**. Replace the placeholder text `<yourappname>` with the name of the newly deployed sample application. Don't include the `https://` prefix.
+
+ The Apache JMeter test script uses the environment variable to retrieve the web application URL. The script then invokes the three APIs in the web application.
+
+ :::image type="content" source="media/tutorial-identify-bottlenecks-azure-portal/create-new-test-parameters.png" alt-text="Screenshot that shows the parameters tab to add environment variable.":::
+
1. On the **Load** tab, configure the following details. You can leave the default value for this tutorial.

   |Setting |Value |Description |
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-auto-train-forecast.md
Previously updated : 10/21/2021 Last updated : 11/18/2021

# Set up AutoML to train a time-series forecasting model with Python
For a low code experience, see the [Tutorial: Forecast demand with automated mac
Unlike classical time series methods, in automated ML, past time-series values are "pivoted" to become additional dimensions for the regressor together with other predictors. This approach incorporates multiple contextual variables and their relationship to one another during training. Since multiple factors can influence a forecast, this method aligns itself well with real world forecasting scenarios. For example, when forecasting sales, interactions of historical trends, exchange rate, and price all jointly drive the sales outcome.
-
## Prerequisites

For this article you need,
For this article you need,
* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [tutorial](tutorial-auto-train-models.md) or [how-to](how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.

[!INCLUDE [automl-sdk-version](../../includes/machine-learning-automl-sdk-version.md)]
-## Preparing data
-
-The most important difference between a forecasting regression task type and regression task type within automated ML is including a feature in your data that represents a valid time series. A regular time series has a well-defined and consistent frequency and has a value at every sample point in a continuous time span.
-
-Consider the following snapshot of a file `sample.csv`.
-This data set is of daily sales data for a company that has two different stores, A, and B.
-
-Additionally, there are features for
-
- * `week_of_year`: allows the model to detect weekly seasonality.
-* `day_datetime`: represents a clean time series with daily frequency.
-* `sales_quantity`: the target column for running predictions.
-
-```output
-day_datetime,store,sales_quantity,week_of_year
-9/3/2018,A,2000,36
-9/3/2018,B,600,36
-9/4/2018,A,2300,36
-9/4/2018,B,550,36
-9/5/2018,A,2100,36
-9/5/2018,B,650,36
-9/6/2018,A,2400,36
-9/6/2018,B,700,36
-9/7/2018,A,2450,36
-9/7/2018,B,650,36
-```
--
-Read the data into a Pandas dataframe, then use the `to_datetime` function to ensure the time series is a `datetime` type.
-```python
-import pandas as pd
-data = pd.read_csv("sample.csv")
-data["day_datetime"] = pd.to_datetime(data["day_datetime"])
-```
-
-In this case, the data is already sorted ascending by the time field `day_datetime`. However, when setting up an experiment, ensure the desired time column is sorted in ascending order to build a valid time series.
-
-The following code,
-* Assumes the data contains 1,000 records, and makes a deterministic split in the data to create training and test data sets.
-* Identifies the label column as `sales_quantity`.
-* Separates the label field from `test_data` to form the `test_target` set.
+## Training and validation data
-```python
-train_data = data.iloc[:950]
-test_data = data.iloc[-50:]
-
-label = "sales_quantity"
-
-test_labels = test_data.pop(label).values
-```
+The most important difference between a forecasting regression task type and regression task type within automated ML is including a feature in your training data that represents a valid time series. A regular time series has a well-defined and consistent frequency and has a value at every sample point in a continuous time span.
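As a quick sketch of what "valid" means here (reusing the `day_datetime`/`store`/`sales_quantity` schema that appears later in this article; the file name is hypothetical), parse the time column to a true datetime type and sort it ascending:

```python
import pandas as pd

# Hypothetical CSV with day_datetime / store / sales_quantity columns.
data = pd.read_csv("sample.csv")

# A valid time series needs a real datetime column, sorted ascending,
# with a consistent frequency across the whole time span.
data["day_datetime"] = pd.to_datetime(data["day_datetime"])
data = data.sort_values(["store", "day_datetime"])
```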
> [!IMPORTANT]
> When training a model for forecasting future values, ensure all the features used in training can be used when running predictions for your intended horizon. <br> <br>For example, when creating a demand forecast, including a feature for current stock price could massively increase training accuracy. However, if you intend to forecast with a long horizon, you may not be able to accurately predict future stock values corresponding to future time-series points, and model accuracy could suffer.
-<a name="config"></a>
-
-## Training and validation data
-
-You can specify separate train and validation sets directly in the `AutoMLConfig` object. Learn more about the [AutoMLConfig](#configure-experiment).
+You can specify separate [training data and validation data](concept-automated-ml.md#training-validation-and-test-data) directly in the `AutoMLConfig` object. Learn more about the [AutoMLConfig](#configure-experiment).
For time series forecasting, only **Rolling Origin Cross Validation (ROCV)** is used for validation by default. Pass the training and validation data together, and set the number of cross validation folds with the `n_cross_validations` parameter in your `AutoMLConfig`. ROCV divides the series into training and validation data using an origin time point. Sliding the origin in time generates the cross-validation folds. This strategy preserves the time series data integrity and eliminates the risk of data leakage.
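A small illustration of the idea (the numbers are made up; AutoML performs this split internally when you set `n_cross_validations`):

```python
n_samples = 100           # length of one series
horizon = 10              # validation window per fold
n_cross_validations = 3

for fold in range(n_cross_validations):
    # Slide the origin back one horizon per fold: train on everything
    # before the origin, validate on the window that follows it.
    origin = n_samples - horizon * (n_cross_validations - fold)
    print(f"fold {fold}: train [0, {origin}), validate [{origin}, {origin + horizon})")
```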
You can also bring your own validation data, learn more in [Configure data split
```python
automl_config = AutoMLConfig(task='forecasting',
+ training_data= training_data,
n_cross_validations=3,
...
**time_series_settings)
```
Learn more about how AutoML applies cross validation to [prevent over-fitting mo
The [`AutoMLConfig`](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig) object defines the settings and data necessary for an automated machine learning task. Configuration for a forecasting model is similar to the setup of a standard regression model, but certain models, configuration options, and featurization steps exist specifically for time-series data.

### Supported models
-Automated machine learning automatically tries different models and algorithms as part of the model creation and tuning process. As a user, there is no need for you to specify the algorithm. For forecasting experiments, both native time-series and deep learning models are part of the recommendation system. The following table summarizes this subset of models.
+
+Automated machine learning automatically tries different models and algorithms as part of the model creation and tuning process. As a user, there is no need for you to specify the algorithm. For forecasting experiments, both native time-series and deep learning models are part of the recommendation system.
>[!Tip]
-> Traditional regression models are also tested as part of the recommendation system for forecasting experiments. See the [supported model table](how-to-configure-auto-train.md#supported-models) for the full list of models.
+> Traditional regression models are also tested as part of the recommendation system for forecasting experiments. See a complete list of the [supported models](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels) in the SDK reference documentation.
-Models| Description | Benefits
--|-|
-Prophet (Preview)|Prophet works best with time series that have strong seasonal effects and several seasons of historical data. To leverage this model, install it locally using `pip install fbprophet`. | Accurate & fast, robust to outliers, missing data, and dramatic changes in your time series.
-Auto-ARIMA (Preview)|Auto-Regressive Integrated Moving Average (ARIMA) performs best, when the data is stationary. This means that its statistical properties like the mean and variance are constant over the entire set. For example, if you flip a coin, then the probability of you getting heads is 50%, regardless if you flip today, tomorrow, or next year.| Great for univariate series, since the past values are used to predict the future values.
-ForecastTCN (Preview)| ForecastTCN is a neural network model designed to tackle the most demanding forecasting tasks. It captures nonlinear local and global trends in your data and relationships between time series.|Capable of leveraging complex trends in your data and readily scales to the largest of datasets.
### Configuration settings
-Similar to a regression problem, you define standard training parameters like task type, number of iterations, training data, and number of cross-validations. For forecasting tasks, there are additional parameters that must be set that affect the experiment.
-
-The following table summarizes these additional parameters. See the [ForecastingParameter class reference documentation](/python/api/azureml-automl-core/azureml.automl.core.forecasting_parameters.forecastingparameters) for syntax design patterns.
-
-| Parameter&nbsp;name | Description | Required |
-|-|-|-|
-|`time_column_name`|Used to specify the datetime column in the input data used for building the time series and inferring its frequency.|✓|
-|`forecast_horizon`|Defines how many periods forward you would like to forecast. The horizon is in units of the time series frequency. Units are based on the time interval of your training data, for example, monthly, weekly that the forecaster should predict out.|✓|
-|`enable_dnn`|[Enable Forecasting DNNs](#enable-deep-learning).||
-|`time_series_id_column_names`|The column name(s) used to uniquely identify the time series in data that has multiple rows with the same timestamp. If time series identifiers are not defined, the data set is assumed to be one time-series. To learn more about single time-series, see the [energy_demand_notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb).||
-|`freq`| The time series dataset frequency. This parameter represents the period with which events are expected to occur, such as daily, weekly, yearly, etc. The frequency must be a [pandas offset alias](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects). Learn more about [frequency].(#frequency-target-data-aggregation)||
-|`target_lags`|Number of rows to lag the target values based on the frequency of the data. The lag is represented as a list or single integer. Lag should be used when the relationship between the independent variables and dependent variable doesn't match up or correlate by default. ||
-|`feature_lags`| The features to lag will be automatically decided by automated ML when `target_lags` are set and `feature_lags` is set to `auto`. Enabling feature lags may help to improve accuracy. Feature lags are disabled by default. ||
-|`target_rolling_window_size`|*n* historical periods to use to generate forecasted values, <= training set size. If omitted, *n* is the full training set size. Specify this parameter when you only want to consider a certain amount of history when training the model. Learn more about [target rolling window aggregation](#target-rolling-window-aggregation).||
-|`short_series_handling_config`| Enables short time series handling to avoid failing during training due to insufficient data. Short series handling is set to `auto` by default. Learn more about [short series handling](#short-series-handling).||
-|`target_aggregation_function`| The function to be used to aggregate the time series target column to conform to the frequency specified via the `freq` parameter. The `freq` parameter must be set, in order to use the `target_aggregation_function`. Defaults to `None`; for most scenarios using `sum` is sufficient.<br> Learn more about [target column aggregation](#frequency--target-data-aggregation).
+Similar to a regression problem, you define standard training parameters like task type, number of iterations, training data, and number of cross-validations. Forecasting tasks require the `time_column_name` and `forecast_horizon` parameters to configure your experiment. You can also include additional parameters to better configure your run; see the [optional configurations](#optional-configurations) section for more detail on what can be included.
+| Parameter&nbsp;name | Description |
+|-|-|
+|`time_column_name`|Used to specify the datetime column in the input data used for building the time series and inferring its frequency.|
+|`forecast_horizon`|Defines how many periods forward you would like to forecast. The horizon is in units of the time series frequency. Units are based on the time interval of your training data, for example, monthly, weekly that the forecaster should predict out.|
The following code,

* Leverages the [`ForecastingParameters`](/python/api/azureml-automl-core/azureml.automl.core.forecasting_parameters.forecastingparameters) class to define the forecasting parameters for your experiment training
* Sets the `time_column_name` to the `day_datetime` field in the data set.
-* Defines the `time_series_id_column_names` parameter to `"store"`. This ensures that **two separate time-series groups** are created for the data; one for store A and B.
+* Sets the `time_series_id_column_names` parameter to `auto`.
* Sets the `forecast_horizon` to 50 in order to predict for the entire test set.
-* Sets a forecast window to 10 periods with `target_rolling_window_size`
-* Specifies a single lag on the target values for two periods ahead with the `target_lags` parameter.
-* Sets `target_lags` to the recommended "auto" setting, which will automatically detect this value for you.
+
```python
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(time_column_name='day_datetime',
                                               forecast_horizon=50,
- time_series_id_column_names=["store"],
- freq='W',
- target_lags='auto',
- target_rolling_window_size=10)
+ time_series_id_column_names='auto',
+ freq='W')
```
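The parameters object is then passed to `AutoMLConfig`. A minimal sketch, assuming `training_data` and a `label` column name already exist in your session:

```python
from azureml.train.automl import AutoMLConfig

automl_config = AutoMLConfig(task='forecasting',
                             training_data=training_data,  # assumed dataset
                             label_column_name=label,      # assumed label column
                             n_cross_validations=3,
                             forecasting_parameters=forecasting_parameters)
```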
The following formula calculates the amount of historic data that would be
Minimum historic data required: (2x `forecast_horizon`) + #`n_cross_validations` + max(max(`target_lags`), `target_rolling_window_size`)
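For example, plugging illustrative values into the formula (no lags or rolling window configured):

```python
forecast_horizon = 50
n_cross_validations = 3
target_lags = [0]               # no target lags configured
target_rolling_window_size = 0  # no rolling window configured

minimum_rows = (2 * forecast_horizon) + n_cross_validations + max(
    max(target_lags), target_rolling_window_size)
print(minimum_rows)  # (2 * 50) + 3 + 0 = 103 rows of history per series
```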
-An Error exception will be raised for any series in the dataset that does not meet the required amount of historic data for the relevant settings specified.
+An `Error exception` is raised for any series in the dataset that does not meet the required amount of historic data for the relevant settings specified.
### Featurization steps
If you're using the Azure Machine Learning studio for your experiment, see [how
## Optional configurations
-Additional optional configurations are available for forecasting tasks, such as enabling deep learning and specifying a target rolling window aggregation.
+Additional optional configurations are available for forecasting tasks, such as enabling deep learning and specifying a target rolling window aggregation. A complete list of additional parameters is available in the [ForecastingParameters SDK reference documentation](/python/api/azureml-automl-core/azureml.automl.core.forecasting_parameters.forecastingparameters).
### Frequency & target data aggregation
Leverage the frequency, `freq`, parameter to help avoid failures caused by irreg
For highly irregular data or for varying business needs, users can optionally set their desired forecast frequency, `freq`, and specify the `target_aggregation_function` to aggregate the target column of the time series. Leveraging these two settings in your `AutoMLConfig` object can help save some time on data preparation.
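A minimal sketch of the two settings used together (the values are illustrative):

```python
from azureml.automl.core.forecasting_parameters import ForecastingParameters

forecasting_parameters = ForecastingParameters(
    time_column_name='day_datetime',
    forecast_horizon=50,
    freq='W',                           # forecast at weekly frequency
    target_aggregation_function='sum')  # aggregate the target within each week
```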
-When the `target_aggregation_function` parameter is used,
-* The target column values are aggregated based on the specified operation. Typically, `sum` is appropriate for most scenarios.
-
-* Numerical predictor columns in your data are aggregated by sum, mean, minimum value, and maximum value. As a result, automated ML generates new columns suffixed with the aggregation function name and applies the selected aggregate operation.
-
-* For categorical predictor columns, the data is aggregated by mode, the most prominent category in the window.
-
-* Date predictor columns are aggregated by minimum value, maximum value and mode.
- Supported aggregation operations for target column values include:
-|Function | description
+|Function | Description
||
|`sum`| Sum of target values
|`mean`| Mean or average of target values
automl_config = AutoMLConfig(task='forecasting',
> [!Warning]
> When you enable DNN for experiments created with the SDK, [best model explanations](how-to-machine-learning-interpretability-automl.md) are disabled.
-To enable DNN for an AutoML experiment created in the Azure Machine Learning studio, see the [task type settings in the studio how-to](how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment).
+To enable DNN for an AutoML experiment created in the Azure Machine Learning studio, see the [task type settings in the studio UI how-to](how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment).
View the [Beverage Production Forecasting notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-beer-remote/auto-ml-forecasting-beer-remote.ipynb) for a detailed code example using DNNs.

### Target rolling window aggregation
+
Often the best information a forecaster can have is the recent value of the target. Target rolling window aggregations allow you to add a rolling aggregation of data values as features. Generating and using these features as extra contextual data helps with the accuracy of the trained model. For example, say you want to predict energy demand. You might want to add a rolling window feature of three days to account for thermal changes of heated spaces. In this example, create this window by setting `target_rolling_window_size= 3` in the `AutoMLConfig` constructor.
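A sketch of that energy-demand example, with the other arguments assumed from earlier in the article:

```python
forecasting_parameters = ForecastingParameters(
    time_column_name='day_datetime',  # assumed time column
    forecast_horizon=50,
    target_rolling_window_size=3)     # roll the target over three periods
```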
View a Python code example applying the [target rolling window aggregate feature
### Short series handling
-Automated ML considers a time series a **short series** if there are not enough data points to conduct the train and validation phases of model development. The number of data points varies for each experiment, and depends on the max_horizon, the number of cross validation splits, and the length of the model lookback, that is the maximum of history that's needed to construct the time-series features. For the exact calculation, see the [short_series_handling_configuration reference documentation](/python/api/azureml-automl-core/azureml.automl.core.forecasting_parameters.forecastingparameters#short-series-handling-configuration).
+Automated ML considers a time series a **short series** if there are not enough data points to conduct the train and validation phases of model development. The number of data points varies for each experiment, and depends on the max_horizon, the number of cross validation splits, and the length of the model lookback, that is the maximum of history that's needed to construct the time-series features.
Automated ML offers short series handling by default with the `short_series_handling_configuration` parameter in the `ForecastingParameters` object.
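For instance, a sketch that opts into padding rather than the default `auto` behavior:

```python
forecasting_parameters = ForecastingParameters(
    time_column_name='day_datetime',            # assumed time column
    forecast_horizon=50,
    short_series_handling_configuration='pad')  # pad short series instead of dropping them
```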
best_run, fitted_model = local_run.get_output()
## Forecasting with best model
-Use the best model iteration to forecast values for the test data set.
+Use the best model iteration to forecast values for data that wasn't used to train the model.
The [forecast_quantiles()](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy#forecast-quantiles-x-values--typing-any--y-values--typing-union-typing-any--nonetype-none--forecast-destination--typing-union-typing-any--nonetype-none--ignore-data-errors--boolfalse--azureml-data-abstract-dataset-abstractdataset) function lets you specify when predictions should start, unlike the `predict()` method, which is typically used for classification and regression tasks. By default, the forecast_quantiles() method generates a point forecast or a mean/median forecast that doesn't have a cone of uncertainty around it. Learn more in the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
You can also use the `forecast_destination` parameter in the `forecast_quantiles
label_query = test_labels.copy().astype(np.float)
label_query.fill(np.nan)
label_fcst, data_trans = fitted_model.forecast_quantiles(
- test_data, label_query, forecast_destination=pd.Timestamp(2019, 1, 8))
+ test_dataset, label_query, forecast_destination=pd.Timestamp(2019, 1, 8))
```

Often customers want to understand the predictions at a specific quantile of the distribution. For example, when the forecast is used to control inventory like grocery items or virtual machines for a cloud service. In such cases, the control point is usually something like "we want the item to be in stock and not run out 99% of the time". The following demonstrates how to specify which quantiles you'd like to see for your predictions, such as 50th or 95th percentile. If you don't specify a quantile, like in the aforementioned code example, then only the 50th percentile predictions are generated.
Often customers want to understand the predictions at a specific quantile of the
# specify which quantiles you would like
fitted_model.quantiles = [0.05, 0.5, 0.9]
fitted_model.forecast_quantiles(
- test_data, label_query, forecast_destination=pd.Timestamp(2019, 1, 8))
+ test_dataset, label_query, forecast_destination=pd.Timestamp(2019, 1, 8))
```
-
-Calculate root mean squared error (RMSE) between the `actual_labels` actual values, and the forecasted values in `predict_labels`.
-```python
-from sklearn.metrics import mean_squared_error
-from math import sqrt
+You can calculate model metrics like root mean squared error (RMSE) or mean absolute percentage error (MAPE) to help you estimate the model's performance. See the Evaluate section of the [Bike share demand notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) for an example.
-rmse = sqrt(mean_squared_error(actual_labels, predict_labels))
-rmse
-```
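A minimal sketch of those metrics, assuming `actuals` and `predictions` are aligned NumPy arrays from your test set:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

rmse = np.sqrt(mean_squared_error(actuals, predictions))
mape = np.mean(np.abs((actuals - predictions) / actuals)) * 100
print(f"RMSE: {rmse:.2f}  MAPE: {mape:.2f}%")
```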
-
-
-Now that the overall model accuracy has been determined, the most realistic next step is to use the model to forecast unknown future values.
+After the overall model accuracy has been determined, the most realistic next step is to use the model to forecast unknown future values.
-Supply a data set in the same format as the test set `test_data` but with future datetimes, and the resulting prediction set is the forecasted values for each time-series step. Assume the last time-series records in the data set were for 12/31/2018. To forecast demand for the next day (or as many periods as you need to forecast, <= `forecast_horizon`), create a single time series record for each store for 01/01/2019.
+Supply a data set in the same format as the test set `test_dataset` but with future datetimes, and the resulting prediction set is the forecasted values for each time-series step. Assume the last time-series records in the data set were for 12/31/2018. To forecast demand for the next day (or as many periods as you need to forecast, <= `forecast_horizon`), create a single time series record for each store for 01/01/2019.
```output
day_datetime,store,week_of_year
day_datetime,store,week_of_year
01/01/2019,A,1
```
-Repeat the necessary steps to load this future data to a dataframe and then run `best_run.forecast_quantiles(test_data)` to predict future values.
+Repeat the necessary steps to load this future data to a dataframe and then run `best_run.forecast_quantiles(test_dataset)` to predict future values.
> [!NOTE]
> In-sample predictions are not supported for forecasting with automated ML when `target_lags` and/or `target_rolling_window_size` are enabled.
The following diagram shows the workflow for the many models solution.
![Many models concept diagram](./media/how-to-auto-train-forecast/many-models.svg)
-The following code demonstrates the key parameters users need to set up their many models run.
+The following code demonstrates the key parameters users need to set up their many models run. See the [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) for a many models forecasting example.
```python
from azureml.train.automl.runtime._many_models.many_models_parameters import ManyModelsTrainParameters
To further visualize this, the leaf levels of the hierarchy contain all the time
The hierarchical time series solution is built on top of the Many Models Solution and shares a similar configuration setup.
-The following code demonstrates the key parameters to set up your hierarchical time series forecasting runs.
+The following code demonstrates the key parameters to set up your hierarchical time series forecasting runs. See the [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb) for an end-to-end example.
```python
machine-learning How To Configure Auto Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-auto-features.md
Guardrail|Status|Condition&nbsp;for&nbsp;trigger
**Validation split handling** |Done| The validation configuration was set to `'auto'` and the training data contained *fewer than 20,000 rows*. <br> Each iteration of the trained model was validated by using cross-validation. Learn more about [validation data](./how-to-configure-auto-train.md#training-validation-and-test-data). <br><br> The validation configuration was set to `'auto'`, and the training data contained *more than 20,000 rows*. <br> The input data has been split into a training dataset and a validation dataset for validation of the model.
**Class balancing detection** |Passed <br><br><br><br>Alerted <br><br><br>Done | Your inputs were analyzed, and all classes are balanced in your training data. A dataset is considered to be balanced if each class has good representation in the dataset, as measured by number and ratio of samples. <br><br> Imbalanced classes were detected in your inputs. To fix model bias, fix the balancing problem. Learn more about [imbalanced data](./concept-manage-ml-pitfalls.md#identify-models-with-imbalanced-data).<br><br> Imbalanced classes were detected in your inputs and the sweeping logic has determined to apply balancing.
**Memory issues detection** |Passed <br><br><br><br> Done |<br> The selected values (horizon, lag, rolling window) were analyzed, and no potential out-of-memory issues were detected. Learn more about time-series [forecasting configurations](./how-to-auto-train-forecast.md#configuration-settings). <br><br><br>The selected values (horizon, lag, rolling window) were analyzed and will potentially cause your experiment to run out of memory. The lag or rolling-window configurations have been turned off.
-**Frequency detection** |Passed <br><br><br><br> Done |<br> The time series was analyzed, and all data points are aligned with the detected frequency. <br> <br> The time series was analyzed, and data points that don't align with the detected frequency were detected. These data points were removed from the dataset. Learn more about [data preparation for time-series forecasting](./how-to-auto-train-forecast.md#preparing-data).
+**Frequency detection** |Passed <br><br><br><br> Done |<br> The time series was analyzed, and all data points are aligned with the detected frequency. <br> <br> The time series was analyzed, and data points that don't align with the detected frequency were detected. These data points were removed from the dataset.
## Customize featurization
BERT generally runs longer than other featurizers. For better performance, we re
AutoML will distribute BERT training across multiple nodes if they are available (up to a max of eight nodes). This can be done in your `AutoMLConfig` object by setting the `max_concurrent_iterations` parameter to higher than 1.
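A sketch of that setting (surrounding arguments assumed from your own experiment):

```python
automl_config = AutoMLConfig(task='classification',
                             training_data=training_data,    # assumed dataset
                             label_column_name=label,        # assumed label column
                             compute_target=compute_cluster, # assumed multi-node cluster
                             max_concurrent_iterations=4)    # >1 lets BERT training span nodes
```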
-## Supported languages for BERT in autoML
+## Supported languages for BERT in AutoML
-AutoML currently supports around 100 languages and depending on the dataset's language, autoML chooses the appropriate BERT model. For German data, we use the German BERT model. For English, we use the English BERT model. For all other languages, we use the multilingual BERT model.
+AutoML currently supports around 100 languages and depending on the dataset's language, AutoML chooses the appropriate BERT model. For German data, we use the German BERT model. For English, we use the English BERT model. For all other languages, we use the multilingual BERT model.
In the following code, the German BERT model is triggered, since the dataset language is specified to `deu`, the three letter language code for German according to [ISO classification](https://iso639-3.sil.org/code/deu):
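The code block referenced above isn't reproduced in this digest; a hedged sketch of the pattern, setting the dataset language through `FeaturizationConfig` (surrounding arguments assumed), looks like this:

```python
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.train.automl import AutoMLConfig

featurization_config = FeaturizationConfig(dataset_language='deu')  # ISO 639-3 code for German

automl_config = AutoMLConfig(task='classification',
                             training_data=training_data,  # assumed dataset
                             label_column_name=label,      # assumed label column
                             featurization=featurization_config)
```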
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-workspace.md
In the [Azure portal](https://portal.azure.com/), select **Delete** at the top
* **Azure portal**:
  * If you go directly to your workspace from a share link from the SDK or the Azure portal, you can't view the standard **Overview** page that has subscription information in the extension. In this scenario, you also can't switch to another workspace. To view another workspace, go directly to [Azure Machine Learning studio](https://ml.azure.com) and search for the workspace name.
  * All assets (Datasets, Experiments, Computes, and so on) are available only in [Azure Machine Learning studio](https://ml.azure.com). They're *not* available from the Azure portal.
+ * Attempting to export a template for a workspace from the Azure portal may return an error similar to the following text: `Could not get resource of the type <type>. Resources of this type will not be exported.` As a workaround, use one of the templates provided at [https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices) as the basis for your template.
### Workspace diagnostics
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-training-vnet.md
Previously updated : 11/05/2021 Last updated : 01/06/2022
In this article you learn how to secure the following training compute resources
+ To deploy resources into a virtual network or subnet, your user account must have permissions to the following actions in Azure role-based access control (Azure RBAC):
- - "Microsoft.Network/virtualNetworks/*/read" on the virtual network resource. This is not needed for Azure Resource Manager (ARM) template deployments
+ - "Microsoft.Network/virtualNetworks/*/read" on the virtual network resource. This permission is not needed for Azure Resource Manager (ARM) template deployments.
- "Microsoft.Network/virtualNetworks/subnet/join/action" on the subnet resource. For more information on Azure RBAC with networking, see the [Networking built-in roles](../role-based-access-control/built-in-roles.md#networking) ### Azure Machine Learning compute cluster/instance
+* Compute clusters and instances create the following resources. If they are unable to create these resources (for example, if there is a resource lock on the resource group), then creation, scale out, or scale in may fail.
+
+ * IP address.
+ * Network Security Group (NSG).
+ * Load balancer.
+ * The virtual network must be in the same subscription as the Azure Machine Learning workspace. * The subnet used for the compute instance or cluster must have enough unassigned IP addresses.
In this article you learn how to secure the following training compute resources
* A compute instance only requires one IP address.
* To create a compute cluster or instance [without a public IP address](#no-public-ip) (a preview feature), your workspace must use a private endpoint to connect to the VNet. For more information, see [Configure a private endpoint for Azure Machine Learning workspace](how-to-configure-private-link.md).
-* Make sure that there are no security policies or locks that restrict permissions to manage the virtual network. When checking for policies or locks, look at both the subscription and resource group for the virtual network.
-* Check to see whether your security policies or locks on the virtual network's subscription or resource group restrict permissions to manage the virtual network.
* If you plan to secure the virtual network by restricting traffic, see the [Required public internet access](#required-public-internet-access) section.
* The subnet used to deploy compute cluster/instance shouldn't be delegated to any other service. For example, it shouldn't be delegated to ACI.
In this article you learn how to secure the following training compute resources
### Azure Machine Learning compute cluster/instance
-* If put multiple compute instances or clusters in one virtual network, you may need to request a quota increase for one or more of your resources. The Machine Learning compute instance or cluster automatically allocates additional networking resources __in the resource group that contains the virtual network__. For each compute instance or cluster, the service allocates the following resources:
+* If you put multiple compute instances or clusters in one virtual network, you may need to request a quota increase for one or more of your resources. The Machine Learning compute instance or cluster automatically allocates networking resources __in the resource group that contains the virtual network__. For each compute instance or cluster, the service allocates the following resources:
* One network security group (NSG). This NSG contains the following rules, which are specific to compute cluster and compute instance:
In this article you learn how to secure the following training compute resources
> [!TIP]
> If your compute cluster or instance does not use a public IP address (a preview feature), these inbound NSG rules are not required.
- * For compute cluster or instance, it is now possible to remove the public IP address (a preview feature). If you have Azure Policy assignments prohibiting Public IP creation then deployment of the compute cluster or instance will succeed.
+ * For compute cluster or instance, it is now possible to remove the public IP address (a preview feature). If you have Azure Policy assignments prohibiting Public IP creation, then deployment of the compute cluster or instance will succeed.
* One load balancer
When the creation process finishes, you train your model by using the cluster in
### <a name="no-public-ip-amlcompute"></a>No public IP for compute clusters (preview)
-When you enable **No public IP**, your compute cluster doesn't use a public IP for communication with any dependencies. Instead, it communicates solely within the virtual network using Azure Private Link ecosystem as well as service/private endpoints, eliminating the need for a public IP entirely. No public IP removes access and discoverability of compute cluster nodes from the internet thus eliminating a significant threat vector. **No public IP** clusters help comply with no public IP policies many enterprises have.
+When you enable **No public IP**, your compute cluster doesn't use a public IP for communication with any dependencies. Instead, it communicates solely within the virtual network using Azure Private Link ecosystem and service/private endpoints, eliminating the need for a public IP entirely. No public IP removes access and discoverability of compute cluster nodes from the internet thus eliminating a significant threat vector. **No public IP** clusters help comply with no public IP policies many enterprises have.
-A compute cluster with **No public IP** enabled has **no inbound communication requirements** from public internet compared to those for public IP compute cluster. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound from source of **VirtualNetwork** and any port source, to destination of **VirtualNetwork**, and destination port of **29876, 29877**.
+A compute cluster with **No public IP** enabled has **no inbound communication requirements** from public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound from source of **VirtualNetwork** and any port source, to destination of **VirtualNetwork**, and destination port of **29876, 29877**.
**No public IP** clusters are dependent on [Azure Private Link](how-to-configure-private-link.md) for Azure Machine Learning workspace. A compute cluster with **No public IP** also requires you to disable private endpoint network policies and private link service network policies. These requirements come from Azure private link service and private endpoints and are not Azure Machine Learning specific. Follow instruction from [Disable network policies for Private Link service](../private-link/disable-private-link-service-network-policy.md) to set the parameters `disable-private-endpoint-network-policies` and `disable-private-link-service-network-policies` on the virtual network subnet.
You can also create no public IP compute cluster through an ARM template. In the
**Troubleshooting**
-* If you get this error message during creation of cluster "The specified subnet has PrivateLinkServiceNetworkPolicies or PrivateEndpointNetworkEndpoints enabled" please follow the instructions from [Disable network policies for Private Link service](../private-link/disable-private-link-service-network-policy.md) and [Disable network policies for Private Endpoint](../private-link/disable-private-endpoint-network-policy.md).
+* If you get this error message during creation of cluster `The specified subnet has PrivateLinkServiceNetworkPolicies or PrivateEndpointNetworkEndpoints enabled`, follow the instructions from [Disable network policies for Private Link service](../private-link/disable-private-link-service-network-policy.md) and [Disable network policies for Private Endpoint](../private-link/disable-private-endpoint-network-policy.md).
* If job execution fails with connection issues to ACR or Azure Storage, verify that you have added ACR and Azure Storage service endpoints/private endpoints to the subnet, and that ACR/Azure Storage allows access from the subnet.
For steps on how to create a compute instance deployed in a virtual network, see
### <a name="no-public-ip"></a>No public IP for compute instances (preview)
-When you enable **No public IP**, your compute instance doesn't use a public IP for communication with any dependencies. Instead, it communicates solely within the virtual network using Azure Private Link ecosystem as well as service/private endpoints, eliminating the need for a public IP entirely. No public IP removes access and discoverability of compute instance node from the internet thus eliminating a significant threat vector. Compute instances will also do packet filtering to reject any traffic from outside virtual network. **No public IP** instances are dependent on [Azure Private Link](how-to-configure-private-link.md) for Azure Machine Learning workspace.
+When you enable **No public IP**, your compute instance doesn't use a public IP for communication with any dependencies. Instead, it communicates solely within the virtual network using Azure Private Link ecosystem and service/private endpoints, eliminating the need for a public IP entirely. No public IP removes access and discoverability of compute instance node from the internet thus eliminating a significant threat vector. Compute instances will also do packet filtering to reject any traffic from outside virtual network. **No public IP** instances are dependent on [Azure Private Link](how-to-configure-private-link.md) for Azure Machine Learning workspace.
For **outbound connections** to work, you need to set up an egress firewall such as Azure Firewall with user-defined routes. For instance, you can use a firewall set up with [inbound/outbound configuration](how-to-access-azureml-behind-firewall.md) and route traffic there by defining a route table on the subnet in which the compute instance is deployed. The route table entry can set up the next hop of the private IP address of the firewall with the address prefix of 0.0.0.0/0.
-A compute instance with **No public IP** enabled has **no inbound communication requirements** from public internet compared to those for public IP compute instance. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound from source of **VirtualNetwork**, any port source, destination of **VirtualNetwork**, and destination port of **29876, 29877, 44224**.
+A compute instance with **No public IP** enabled has **no inbound communication requirements** from public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound from source of **VirtualNetwork**, any port source, destination of **VirtualNetwork**, and destination port of **29876, 29877, 44224**.
A compute instance with **No public IP** also requires you to disable private endpoint network policies and private link service network policies. These requirements come from Azure private link service and private endpoints and are not Azure Machine Learning specific. Follow instruction from [Disable network policies for Private Link service source IP](../private-link/disable-private-link-service-network-policy.md) to set the parameters `disable-private-endpoint-network-policies` and `disable-private-link-service-network-policies` on the virtual network subnet.
media-services Analyze Video Audio Files Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/analyze-video-audio-files-concept.md
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-Media Services lets you extract insights from your video and audio files using the audio and video analyzer presets. This article describes the analyzer presets used to extract insights. If you want more detailed insights from your videos, use the [Azure Video Analyzer for Media service](/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview.md). To understand when to use Video Analyzer for Media vs. Media Services analyzer presets, check out the [comparison document](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md).
+Media Services lets you extract insights from your video and audio files using the audio and video analyzer presets. This article describes the analyzer presets used to extract insights. If you want more detailed insights from your videos, use the [Azure Video Analyzer for Media service](/azure/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview). To understand when to use Video Analyzer for Media vs. Media Services analyzer presets, check out the [comparison document](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md).
There are two modes for the Audio Analyzer preset, basic and standard. See the description of the differences in the table below.
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/concepts-backup-restore.md
These backup files cannot be exported. The backups can only be used for restore
## Backup frequency

Backups on flexible servers are snapshot-based. The first snapshot backup is scheduled immediately after a server is created. Snapshot backups are taken once daily. Transaction log backups occur every five minutes.
+If a scheduled backup fails, our backup service tries every 20 minutes to take a backup until a successful backup is taken. These backup failures may occur due to heavy transactional production loads on the server instance.
## Backup redundancy options
mysql How To Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/how-to-restore-dropped-server.md
To restore a deleted Azure Database for MySQL Flexible server, you need the foll
- **Operation** = Update MySQL Server Create

## Next steps
-- If you are trying to restore a server within five days, and still receive an error after accurately following the steps discussed earlier, open a support incident for assistance. If you are trying to restore a deleted server after five days, an error is expected since the backup file cannot be found. Do not open a support ticket in this scenario. The support team cannot provide any assistance if the backup is deleted from the system.
+
+- If you are trying to restore a server within five days, and still receive an error after accurately following the steps discussed earlier, open a support incident for assistance. If you are trying to restore a deleted server after five days, an error is expected since the backup file cannot be found. Do not open a support ticket in this scenario. The support team cannot provide any assistance if the backup is deleted from the system.
+- If you are trying to restore a dropped server whose consequent resource group has been deleted/dropped as well, re-create the resource group with the same name before trying to restore the dropped server.
- To prevent accidental deletion of servers, we highly recommend using [Resource Locks](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/preventing-the-disaster-of-accidental-deletion-for-your-mysql/ba-p/825222).
openshift Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/howto-deploy-java-liberty-app.md
Besides image management, the **aad-user** will also be granted administrative p
After creating and connecting to the cluster, install the Open Liberty Operator. The main starting page for the Open Liberty Operator is on [GitHub](https://github.com/OpenLiberty/open-liberty-operator).

1. Sign in to the OpenShift web console from your browser using the `kubeadmin` credentials.
-2. Navigate to **Operators** > **OperatorHub** and search for **Open Liberty Operator**.
-3. Select **Open Liberty Operator** from the search results.
+2. Navigate to **Operators** > **OperatorHub** and search for **Open Liberty**.
+3. Select **Open Liberty** from the search results.
4. Select **Install**.
-5. In the popup **Create Operator Subscription**, check **All namespaces on the cluster (default)** for **Installation Mode**, **beta** for **Update Channel**, and **Automatic** for **Approval Strategy**:
+5. In the page **Install Operator**, check **beta2** for **Update channel**, **All namespaces on the cluster (default)** for **Installation mode**, and **Automatic** for **Update approval**:
![create operator subscription for Open Liberty Operator](./media/howto-deploy-java-liberty-app/install-operator.png)
-6. Select **Subscribe** and wait a minute or two until the Open Liberty Operator is displayed.
-7. Observe the Open Liberty Operator with status of "Succeeded". If you don't, diagnose and resolve the problem before continuing.
+6. Select **Install** and wait a minute or two until the installation completes.
+7. Verify that the Open Liberty Operator is successfully installed and ready for use. If it isn't, diagnose and resolve the problem before continuing.
:::image type="content" source="media/howto-deploy-java-liberty-app/open-liberty-operator-installed.png" alt-text="Installed Operators showing Open Liberty is installed.":::

## Prepare the Liberty application
openshift Tutorial Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/tutorial-create-cluster.md
Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an Open
-o table ```
-ARO pull secret does not change the cost of the RH OpenShift license for ARO.
-
### Verify your permissions

During this tutorial, you will create a resource group, which will contain the virtual network for the cluster. You must have either Contributor and User Access Administrator permissions, or Owner permissions, either directly on the virtual network, or on the resource group or subscription containing it.
az feature register --namespace Microsoft.RedHatOpenShift --name preview
### Get a Red Hat pull secret (optional)
+ > [!NOTE]
+ > The ARO pull secret does not change the cost of the RH OpenShift license for ARO.
+
A Red Hat pull secret enables your cluster to access Red Hat container registries along with additional content. This step is optional but recommended.

1. [Navigate to your Red Hat OpenShift cluster manager portal](https://cloud.redhat.com/openshift/install/azure/aro-provisioned) and log in.
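If you script cluster creation instead of using the portal, the downloaded pull secret is typically passed at create time. A minimal sketch, assuming hypothetical network and cluster names and a local `pull-secret.txt` file:

```azurecli
# Sketch: create an ARO cluster, supplying the Red Hat pull secret file.
az aro create \
  --resource-group myResourceGroup \
  --name myAroCluster \
  --vnet aro-vnet \
  --master-subnet master-subnet \
  --worker-subnet worker-subnet \
  --pull-secret @pull-secret.txt
```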
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-extensions.md
The following extensions are available in Azure Database for PostgreSQL servers
> [!div class="mx-tableFixed"]
> | **Extension**| **Extension version** | **Description** |
> ||||
-> |[address_standardizer](http://postgis.net/docs/Address_Standardizer.html) | 2.5.1 | Used to parse an address into constituent elements. |
-> |[address_standardizer_data_us](http://postgis.net/docs/Address_Standardizer.html) | 2.5.1 | Address Standardizer US dataset example|
+> |[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.5.1 | Used to parse an address into constituent elements. |
+> |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.5.1 | Address Standardizer US dataset example|
> |[btree_gin](https://www.postgresql.org/docs/11/btree-gin.html) | 1.3 | support for indexing common datatypes in GIN|
> |[btree_gist](https://www.postgresql.org/docs/11/btree-gist.html) | 1.5 | support for indexing common datatypes in GiST|
> |[citext](https://www.postgresql.org/docs/11/citext.html) | 1.5 | data type for case-insensitive character strings|
The following extensions are available in Azure Database for PostgreSQL servers
> [!div class="mx-tableFixed"]
> | **Extension**| **Extension version** | **Description** |
> ||||
-> |[address_standardizer](http://postgis.net/docs/Address_Standardizer.html) | 2.5.1 | Used to parse an address into constituent elements. |
-> |[address_standardizer_data_us](http://postgis.net/docs/Address_Standardizer.html) | 2.5.1 | Address Standardizer US dataset example|
+> |[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.5.1 | Used to parse an address into constituent elements. |
+> |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.5.1 | Address Standardizer US dataset example|
> |[btree_gin](https://www.postgresql.org/docs/10/btree-gin.html) | 1.3 | support for indexing common datatypes in GIN|
> |[btree_gist](https://www.postgresql.org/docs/10/btree-gist.html) | 1.5 | support for indexing common datatypes in GiST|
> |[chkpass](https://www.postgresql.org/docs/10/chkpass.html) | 1.0 | data type for auto-encrypted passwords|
The following extensions are available in Azure Database for PostgreSQL servers
> [!div class="mx-tableFixed"]
> | **Extension**| **Extension version** | **Description** |
> ||||
-> |[address_standardizer](http://postgis.net/docs/Address_Standardizer.html) | 2.3.2 | Used to parse an address into constituent elements. |
-> |[address_standardizer_data_us](http://postgis.net/docs/Address_Standardizer.html) | 2.3.2 | Address Standardizer US dataset example|
+> |[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.3.2 | Used to parse an address into constituent elements. |
+> |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.3.2 | Address Standardizer US dataset example|
> |[btree_gin](https://www.postgresql.org/docs/9.6/btree-gin.html) | 1.0 | support for indexing common datatypes in GIN|
> |[btree_gist](https://www.postgresql.org/docs/9.6/btree-gist.html) | 1.2 | support for indexing common datatypes in GiST|
> |[chkpass](https://www.postgresql.org/docs/9.6/chkpass.html) | 1.0 | data type for auto-encrypted passwords|
The following extensions are available in Azure Database for PostgreSQL servers
> [!div class="mx-tableFixed"]
> | **Extension**| **Extension version** | **Description** |
> ||||
-> |[address_standardizer](http://postgis.net/docs/Address_Standardizer.html) | 2.3.0 | Used to parse an address into constituent elements. |
-> |[address_standardizer_data_us](http://postgis.net/docs/Address_Standardizer.html) | 2.3.0 | Address Standardizer US dataset example|
+> |[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.3.0 | Used to parse an address into constituent elements. |
+> |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.3.0 | Address Standardizer US dataset example|
> |[btree_gin](https://www.postgresql.org/docs/9.5/btree-gin.html) | 1.0 | support for indexing common datatypes in GIN|
> |[btree_gist](https://www.postgresql.org/docs/9.5/btree-gist.html) | 1.1 | support for indexing common datatypes in GiST|
> |[chkpass](https://www.postgresql.org/docs/9.5/chkpass.html) | 1.0 | data type for auto-encrypted passwords|
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-extensions.md
The following extensions are available in Azure Database for PostgreSQL - Flexib
> [!div class="mx-tableFixed"]
> | **Extension**| **Extension version** | **Description** |
> ||||
-> |[address_standardizer](http://postgis.net/docs/Address_Standardizer.html) | 3.1.1 | Used to parse an address into constituent elements. |
-> |[address_standardizer_data_us](http://postgis.net/docs/Address_Standardizer.html) | 3.1.1 | Address Standardizer US dataset example|
+> |[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 3.1.1 | Used to parse an address into constituent elements. |
+> |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 3.1.1 | Address Standardizer US dataset example|
> |[amcheck](https://www.postgresql.org/docs/13/amcheck.html) | 1.2 | functions for verifying relation integrity|
> |[bloom](https://www.postgresql.org/docs/13/bloom.html) | 1.0 | bloom access method - signature file based index|
> |[btree_gin](https://www.postgresql.org/docs/13/btree-gin.html) | 1.3 | support for indexing common datatypes in GIN|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> [!div class="mx-tableFixed"]
> | **Extension**| **Extension version** | **Description** |
> ||||
-> |[address_standardizer](http://postgis.net/docs/Address_Standardizer.html) | 3.0.0 | Used to parse an address into constituent elements. |
-> |[address_standardizer_data_us](http://postgis.net/docs/Address_Standardizer.html) | 3.0.0 | Address Standardizer US dataset example|
+> |[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 3.0.0 | Used to parse an address into constituent elements. |
+> |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 3.0.0 | Address Standardizer US dataset example|
> |[amcheck](https://www.postgresql.org/docs/12/amcheck.html) | 1.2 | functions for verifying relation integrity|
> |[bloom](https://www.postgresql.org/docs/12/bloom.html) | 1.0 | bloom access method - signature file based index|
> |[btree_gin](https://www.postgresql.org/docs/12/btree-gin.html) | 1.3 | support for indexing common datatypes in GIN|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> [!div class="mx-tableFixed"]
> | **Extension**| **Extension version** | **Description** |
> ||||
-> |[address_standardizer](http://postgis.net/docs/Address_Standardizer.html) | 2.5.1 | Used to parse an address into constituent elements. |
-> |[address_standardizer_data_us](http://postgis.net/docs/Address_Standardizer.html) | 2.5.1 | Address Standardizer US dataset example|
+> |[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.5.1 | Used to parse an address into constituent elements. |
+> |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.5.1 | Address Standardizer US dataset example|
> |[amcheck](https://www.postgresql.org/docs/11/amcheck.html) | 1.1 | functions for verifying relation integrity|
> |[bloom](https://www.postgresql.org/docs/11/bloom.html) | 1.0 | bloom access method - signature file based index|
> |[btree_gin](https://www.postgresql.org/docs/11/btree-gin.html) | 1.3 | support for indexing common datatypes in GIN|
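Before one of the listed extensions can be created in a database, Flexible Server expects it to be allow-listed through the `azure.extensions` server parameter. A hedged sketch with placeholder names:

```azurecli
# Sketch: allow-list the extensions you plan to use on the flexible server.
az postgres flexible-server parameter set \
  --resource-group myResourceGroup \
  --server-name mydemoserver \
  --name azure.extensions \
  --value POSTGIS,BTREE_GIN
```

After that, `CREATE EXTENSION postgis;` can be run in the target database as usual.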
purview Concept Best Practices Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-glossary.md
When building new term templates in Purview, review the following considerations
- Before importing terms, test the import in a lab environment to ensure that no unexpected results occur, such as duplicate terms.
- The email address for Stewards and Experts should be the primary address of the user from the Azure Active Directory group. Alternate email, user principal name, and non-Azure Active Directory emails are not yet supported.
- Glossary terms provide four statuses: draft, approved, expired, and alert. Draft is not officially implemented, approved means the term is official/standard/approved for production, expired means the term should no longer be used, and alert means the term needs more attention.
-For more information, see [Create, import, and export glossary terms](/how-to-create-import-export-glossary)
+For more information, see [Create, import, and export glossary terms](./how-to-create-import-export-glossary.md)
## Recommendations for exporting glossary terms
role-based-access-control Role Definitions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/role-definitions.md
Previously updated : 09/28/2021 Last updated : 01/06/2022
If you are trying to understand how an Azure role works or if you are creating y
A *role definition* is a collection of permissions. It's sometimes just called a *role*. A role definition lists the actions that can be performed, such as read, write, and delete. It can also list the actions that are excluded from allowed actions or actions related to underlying data.
-The following shows an example of the properties in a role definition when displayed using Azure PowerShell:
+The following shows an example of the properties in a role definition when displayed using [Azure PowerShell](role-definitions-list.md#azure-powershell):
```
Name
NotDataActions []
AssignableScopes []
```
-The following shows an example of the properties in a role definition when displayed using the Azure portal, Azure CLI, or the REST API:
+The following shows an example of the properties in a role definition when displayed using the [Azure portal](role-definitions-list.md#azure-portal), [Azure CLI](role-definitions-list.md#azure-cli), or the [REST API](role-definitions-list.md#rest-api):
```
roleName
The `{action}` portion of an action string specifies the type of actions you can
Here's the [Contributor](built-in-roles.md#contributor) role definition as displayed in Azure PowerShell and Azure CLI. The wildcard (`*`) action under `Actions` indicates that the principal assigned to this role can perform all actions, or in other words, it can manage everything. This includes actions defined in the future, as Azure adds new resource types. The actions under `NotActions` are subtracted from `Actions`. In the case of the [Contributor](built-in-roles.md#contributor) role, `NotActions` removes this role's ability to manage access to resources and also manage Azure Blueprints assignments.
-Contributor role as displayed in Azure PowerShell:
+Contributor role as displayed in [Azure PowerShell](role-definitions-list.md#azure-powershell):
```json
{
Contributor role as displayed in Azure PowerShell:
}
```
-Contributor role as displayed in Azure CLI:
+Contributor role as displayed in [Azure CLI](role-definitions-list.md#azure-cli):
```json
{
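To inspect these properties on a live subscription, the role definition can be listed directly; a small sketch using the Azure CLI:

```azurecli
# Sketch: retrieve the Contributor role definition,
# including its Actions and NotActions lists.
az role definition list --name "Contributor" --output json
```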
search Search Howto Create Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-create-indexers.md
Previously updated : 11/02/2021 Last updated : 01/05/2022

# Creating indexers in Azure Cognitive Search

A search indexer provides an automated workflow for reading content from an external data source, and ingesting that content into a search index on your search service. Indexers support two workflows:
-+ Extracting text and metadata for full text search
-+ Analyzing images and large undifferentiated text for text and structure, adding [AI enrichment](cognitive-search-concept-intro.md) to the pipeline for deeper content processing.
++ Extract text and metadata during indexing for full text search scenarios
++ Apply integrated machine learning and AI models to analyze content that is *not* intrinsically searchable, such as images and large undifferentiated text. This extended workflow is called [AI enrichment](cognitive-search-concept-intro.md) and it's indexer-driven.

Using indexers significantly reduces the quantity and complexity of the code you need to write. This article focuses on the mechanics of creating an indexer as preparation for more advanced work with source-specific indexers and [skillsets](cognitive-search-working-with-skillsets.md).

## Indexer structure
-The following index definitions are typical of what you might create for text-based and AI enrichment scenarios.
+The following indexer definitions are typical of what you might create for text-based and AI enrichment scenarios.
### Indexing for full text search

The original purpose of an indexer was to simplify the complex process of loading an index by providing a mechanism for connecting to and reading text and numeric content from fields in a data source, serializing that content as JSON documents, and handing off those documents to the search engine for indexing. This is still a primary use case, and for this operation, you'll need to create an indexer with the properties defined in the following example.
-```json
+```http
+POST /indexers?api-version=[api-version]
{ "name": (required) String that uniquely identifies the indexer, "dataSourceName": (required) String indicated which existing data source to use,
The **`field mappings`** property is used to explicitly map source-to-destinatio
Because indexers are the mechanism by which a search service makes outbound requests, indexers were extended to support AI enrichments, adding infrastructure and objects to implement this use case.
-All of the above properties and parameters apply to indexers that perform AI enrichment. The following properties are specific to AI enrichment: **`skillSets`**, **`outputFieldMappings`**, **`cache`** (preview and REST only).
+All of the above properties and parameters apply to indexers that perform AI enrichment. The following properties are specific to AI enrichment: **`skillSetName`**, **`outputFieldMappings`**, **`cache`** (preview and REST only).
-```json
+```http
+POST /indexers?api-version=[api-version]
{ "name": (required) String that uniquely identifies the indexer, "dataSourceName": (required) String, name of an existing data source,
When you are ready to create an indexer on a remote search service, you will nee
### [**Azure portal**](#tab/indexer-portal)
-The portal provides two options for creating an indexer: [**Import data wizard**](search-import-data-portal.md) and **New Indexer** that provides fields for specifying an indexer definition. The wizard is unique in that it creates all of the required elements. Other approaches require that you have predefined a data source and index.
+The portal provides two options for creating an indexer: the [**Import data wizard**](search-import-data-portal.md) and **New Indexer**, which provides a visual editor for specifying an indexer definition. The wizard is unique in that it creates all of the required elements. Other approaches require that you have predefined a data source and index.
The following screenshot shows where you can find these features in the portal.
Indexers can detect changes in the underlying data and only process new or updat
How an indexer supports change detection varies by data source:
-+ Azure Blob Storage, Azure Table Storage, and Azure Data Lake Storage Gen2 stamp each blob or row update with a date and time. The various indexers use this information to determine which documents to update in the index. Built-in change detection means that an indexer can recognize new and updated documents, with no additional configuration required on your part.
++ Azure Blob Storage, Azure Table Storage, and Azure Data Lake Storage Gen2 stamp each blob or row update with a date and time. The various indexers use this information to determine which documents to update in the index. Built-in change detection means that an indexer can recognize new and updated documents automatically.
+ Azure SQL and Cosmos DB provide change detection features in their platforms. You can specify the change detection policy in your data source definition.
If you need to clear the high water mark to re-index in full, you can use [Reset
Indexers expect a tabular row set, where each row becomes a full or partial search document in the index. Often, there is a one-to-one correspondence between a row in a database and the resulting search document, where all the fields in the row set fully populate each document. But you can use indexers to generate a subset of a document's fields, and fill in the remaining fields using a different indexer or methodology.
-To flatten relational data into a row set, you should create a SQL view, or build a query that returns parent and child records in the same row. For example, the built-in hotels sample dataset is a SQL database that has 50 records (one for each hotel), linked to room records in a related table. The query that flattens the collective data into a row set embeds all of the room information in JSON documents in each hotel record. The embedded room information is a generated by a query that uses a **FOR JSON AUTO** clause. You can learn more about this technique in [define a query that returns embedded JSON](index-sql-relational-data.md#define-a-query-that-returns-embedded-json). This is just one example; you can find other approaches that will produce the same effect.
+To flatten relational data into a row set, you should create a SQL view, or build a query that returns parent and child records in the same row. For example, the built-in hotels sample dataset is a SQL database that has 50 records (one for each hotel), linked to room records in a related table. The query that flattens the collective data into a row set embeds all of the room information in JSON documents in each hotel record. The embedded room information is generated by a query that uses a **FOR JSON AUTO** clause. You can learn more about this technique in [define a query that returns embedded JSON](index-sql-relational-data.md#define-a-query-that-returns-embedded-json). This is just one example; you can find other approaches that will produce the same result.
In addition to flattened data, it's important to pull in only searchable data. Searchable data is alphanumeric. Cognitive Search cannot search over binary data in any format, although it can extract and infer text descriptions of image files (see [AI enrichment](cognitive-search-concept-intro.md)) to create searchable content. Likewise, using AI enrichment, large text can be analyzed by natural language models to find structure or relevant information, generating new content that you can add to a search document.
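For reference, creating an indexer from a predefined data source and index is a single REST call. A hedged sketch, with placeholder service name, admin key, and object names:

```bash
# Sketch: POST a minimal indexer definition to the search service.
# Service URL, api-key, and object names are placeholders.
curl -X POST "https://my-search-service.search.windows.net/indexers?api-version=2020-06-30" \
  -H "Content-Type: application/json" \
  -H "api-key: <admin-api-key>" \
  -d '{
        "name": "my-indexer",
        "dataSourceName": "my-datasource",
        "targetIndexName": "my-index"
      }'
```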
sentinel Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/iot-solution.md
In the **Instructions** tab of the data connector page, scroll down to the **Cre
To visualize and monitor your Defender for IoT data, use the workbooks deployed to your Microsoft Sentinel workspace as part of the [IoT OT Threat Monitoring with Defender for IoT](#install-the-defender-for-iot-solution) solution.
-The Defender for IoT workbooks provide guided investigations for OT entities based on open incidents, alert notifications, and activities for OT assets. They also providing a hunting experience across the MITRE ATT&CK® framework for ICS, and are designed to enable analysts, security engineers, and MSSPs to gain situational awareness of OT security posture.
+The Defender for IoT workbooks provide guided investigations for OT entities based on open incidents, alert notifications, and activities for OT assets. They also provide a hunting experience across the MITRE ATT&CK® framework for ICS, and are designed to enable analysts, security engineers, and MSSPs to gain situational awareness of OT security posture.
View workbooks in Microsoft Sentinel on the **Threat management > Workbooks > My workbooks** tab. For more information, see [Visualize collected data](get-visibility.md).
service-bus-messaging Service Bus End To End Tracing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-end-to-end-tracing.md
One part of this problem is tracking logical pieces of work. It includes message
When a producer sends a message through a queue, it typically happens in the scope of some other logical operation, initiated by some other client or service. The same operation is continued by the consumer once it receives a message. Both the producer and the consumer (and other services that process the operation) presumably emit telemetry events to trace the operation flow and result. In order to correlate such events and trace an operation end-to-end, each service that reports telemetry has to stamp every event with a trace context. Microsoft Azure Service Bus messaging has defined payload properties that producers and consumers should use to pass such trace context.
-The protocol is based on the [HTTP Correlation protocol](https://github.com/dotnet/runtime/blob/master/src/libraries/System.Diagnostics.DiagnosticSource/src/HttpCorrelationProtocol.md).
+The protocol is based on the [W3C Trace-Context](https://www.w3.org/TR/trace-context/).
# [Azure.Messaging.ServiceBus SDK (Latest)](#tab/net-standard-sdk-2)

| Property Name | Description |
|-|-|
-| Diagnostic-Id | Unique identifier of an external call from producer to the queue. Refer to [Request-Id in HTTP protocol](https://github.com/dotnet/runtime/blob/master/src/libraries/System.Diagnostics.DiagnosticSource/src/HttpCorrelationProtocol.md#request-id) for the rationale, considerations, and format |
+| Diagnostic-Id | Unique identifier of an external call from producer to the queue. Refer to [W3C Trace-Context traceparent header](https://www.w3.org/TR/trace-context/#traceparent-header) for the format |
## Service Bus .NET Client autotracing

The `ServiceBusProcessor` class of [Azure Messaging Service Bus client for .NET](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) provides tracing instrumentation points that can be hooked by tracing systems, or by a piece of client code. The instrumentation allows tracking all calls to the Service Bus messaging service from the client side. If message processing is done by using [`ProcessMessageAsync` of `ServiceBusProcessor`](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) (message handler pattern), the message processing is also instrumented.
If you're running any external code in addition to the Application Insights SDK,
It doesn't mean that there was a delay in receiving the message. In this scenario, the message has already been received since the message is passed in as a parameter to the SDK code. And, the **name** tag in the App Insights logs (**Process**) indicates that the message is now being processed by your external event processing code. This issue isn't Azure-related. Instead, these metrics refer to the efficiency of your external code given that the message has already been received from Service Bus.
+### Tracking with OpenTelemetry
+Service Bus .NET Client library version 7.5.0 and later supports OpenTelemetry in experimental mode. Refer to the [Distributed tracing in .NET SDK](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/core/Azure.Core/samples/Diagnostics.md#opentelemetry-with-azure-monitor-zipkin-and-others) documentation for more details.
+
### Tracking without tracing system

If your tracing system doesn't support automatic tracking of Service Bus calls, you may want to add such support to the tracing system or to your application. This section describes the diagnostics events sent by the Service Bus .NET client.
service-fabric Service Fabric Best Practices Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-best-practices-applications.md
Become familiar with the [general architecture](/azure/architecture/reference-ar
Use an API gateway service that communicates to back-end services that can then be scaled out. The most common API gateway services used are:

- [Azure API Management](./service-fabric-api-management-overview.md), which is [integrated with Service Fabric](./service-fabric-tutorial-deploy-api-management.md).
-- [Azure IoT Hub](../iot-hub/index.yml) or [Azure Event Hubs](../event-hubs/index.yml), using the [ServiceFabricProcessor](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Microsoft.Azure.EventHubs.ServiceFabricProcessor) to read from Event Hub partitions.
- [Træfik reverse proxy](https://techcommunity.microsoft.com/t5/azure-service-fabric/bg-p/Service-Fabric), using the [Azure Service Fabric provider](https://docs.traefik.io/v1.6/configuration/backends/servicefabric/).
- [Azure Application Gateway](../application-gateway/index.yml).
Service Fabric Reliable Actors enables you to easily create stateful, virtual ac
## Application diagnostics

Be thorough about adding [application logging](./service-fabric-diagnostics-event-generation-app.md) in service calls. It will help you diagnose scenarios in which services call each other. For example, when A calls B calls C calls D, the call could fail anywhere. If you don't have enough logging, failures are hard to diagnose. If the services are logging too much because of call volumes, be sure to at least log errors and warnings.
-## IoT and messaging applications
-When you're reading messages from [Azure IoT Hub](../iot-hub/index.yml) or [Azure Event Hubs](../event-hubs/index.yml), use [ServiceFabricProcessor](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Microsoft.Azure.EventHubs/ServiceFabricProcessor). ServiceFabricProcessor integrates with Service Fabric Reliable Services to maintain the state of reading from the event hub partitions and pushes new messages to your services via the `IEventProcessor::ProcessEventsAsync()` method.
-
## Design guidance on Azure

* Visit the [Azure architecture center](/azure/architecture/microservices/) for design guidance on [building microservices on Azure](/azure/architecture/microservices/).
site-recovery Azure To Azure How To Enable Zone To Zone Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md
This article describes how to replicate, failover, and failback Azure virtual ma
> >- Support for Zone to Zone disaster recovery is currently limited to the following regions: Southeast Asia, East Asia, Japan East, Korea Central, Australia East, India Central, UK South, West Europe, North Europe, Norway East, France Central, Sweden Central (Managed Access), Canada Central, Central US, South Central US, East US, East US 2, West US 2, Brazil South and West US 3. >- Site Recovery does not move or store customer data out of the region in which it is deployed when the customer is using Zone to Zone Disaster Recovery. Customers may select a Recovery Services Vault from a different region if they so choose. The Recovery Services Vault contains metadata but no actual customer data.
+>- Zone to Zone disaster recovery is not supported for VMs that have ZRS managed disks.
Site Recovery service contributes to your business continuity and disaster recovery strategy by keeping your business apps up and running, during planned and unplanned outages. It is the recommended Disaster Recovery option to keep your applications up and running if there are regional outages.
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-support-matrix.md
DRBD | Disks that are part of a DRBD setup are not supported. |
LRS | Supported |
GRS | Supported |
RA-GRS | Supported |
-ZRS | Not supported |
+ZRS | Supported | ZRS managed disks are supported. If the source VM has one or more ZRS managed disks, Site Recovery ensures the target VM also has the same configuration of disks. If the source managed disks are of a different type, they cannot be converted to ZRS managed disks at the target, and vice versa.
Cool and Hot Storage | Not supported | Virtual machine disks are not supported on cool and hot storage
Azure Storage firewalls for virtual networks | Supported | If you restrict virtual network access to storage accounts, enable [Allow trusted Microsoft services](../storage/common/storage-network-security.md#exceptions).
General purpose V2 storage accounts (Both Hot and Cool tier) | Supported | Transaction costs increase substantially compared to General purpose V1 storage accounts
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-whats-new.md
This public preview covers a complete overhaul of the current architecture for p
- [Learn](./vmware-azure-architecture-preview.md) about the new architecture and the changes introduced.
- Check the pre-requisites and setup the ASR replication appliance by following [these steps](./deploy-vmware-azure-replication-appliance-preview.md).
- [Enable replication](./vmware-azure-set-up-replication-tutorial-preview.md) for your VMware machines.
-- Check out the [automatic upgrade](./upgrade-mobility-service-preview.md) and [switch](./switch-replication-appliance-preview.md) capability for ASR replication appliance.
-
-
-### Update rollup 56
-
-[Update rollup 56](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) provides the following updates:
-
-**Update** | **Details**
- |
-**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
-**Issue fixes/improvements** | A number of fixes and improvement as detailed in the rollup KB article.
-
-**Azure Site Recovery Service** | Made improvements so that enabling replication and re-protect operations are faster by 46%.
-**Azure Site Recovery Portal** | Replication can now be enabled between any two Azure regions around the world. You are no longer limited to enabling replication within your continent.
+- Check out the [automatic upgrade](./upgrade-mobility-service-preview.md) and [switch](./switch-replication-appliance-preview.md) capability for ASR replication appliance.
## Updates (July 2021)
spring-cloud Quickstart Setup Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/quickstart-setup-log-analytics.md
Use the following steps to set up your Log Analytics workspace.
1. Set up the diagnostic settings. For more information on log categories and contents, see [Create diagnostic settings to send Azure Monitor platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md).

   ```azurecli
- az monitor diagnositc-settings create \
+ az monitor diagnostic-settings create \
--name "<new-name-for-settings>" \ --resource "<service-instance-id>" \ --workspace "<workspace-id>" \
storage Storage Analytics Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-analytics-logging.md
Previously updated : 01/29/2021 Last updated : 01/04/2022
For information about listing blobs programmatically, see [Enumerating Blob Reso
- `EndTime=2011-07-31T18:22:09Z`
- `LogVersion=1.0`
+### Log entries
+
+The following sections show an example log entry for each supported Azure Storage service.
+
+#### Example log entry for Blob Storage
+
+`2.0;2022-01-03T20:34:54.4617505Z;PutBlob;SASSuccess;201;7;7;sas;;logsamples;blob;https://logsamples.blob.core.windows.net/container1/1.txt?se=2022-02-02T20:34:54Z&sig=XXXXX&sp=rwl&sr=c&sv=2020-04-08&timeout=901;"/logsamples/container1/1.txt";xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx;0;71.197.193.44:53371;2019-12-12;654;13;337;0;13;"xxxxxxxxxxxxxxxxxxxxx==";"xxxxxxxxxxxxxxxxxxxxx==";""0x8D9CEF88004E296"";Monday, 03-Jan-22 20:34:54 GMT;;"Microsoft Azure Storage Explorer, 1.20.1, win32, azcopy-node, 2.0.0, win32, AzCopy/10.11.0 Azure-Storage/0.13 (go1.15; Windows_NT)";;"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx";;;;;;;;`
+
+#### Example log entry for Blob Storage (Data Lake Storage Gen2 enabled)
+
+`2.0;2022-01-04T22:50:56.0000775Z;RenamePathFile;Success;201;49;49;authenticated;logsamples;logsamples;blob;"https://logsamples.dfs.core.windows.net/my-container/myfileorig.png?mode=legacy";"/logsamples/my-container/myfilerenamed.png";xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx;0;73.157.16.8;2020-04-08;591;0;224;0;0;;;;Friday, 11-Jun-21 17:58:15 GMT;;"Microsoft Azure Storage Explorer, 1.19.1, win32 azsdk-js-storagedatalake/12.3.1 (NODE-VERSION v12.16.3; Windows_NT 10.0.22000)";;"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx";;;;;;;;`
+
+#### Example log entry for Queue Storage
+
+`2.0;2022-01-03T20:35:04.6097590Z;PeekMessages;Success;200;5;5;authenticated;logsamples;logsamples;queue;https://logsamples.queue.core.windows.net/queue1/messages?numofmessages=32&peekonly=true&timeout=30;"/logsamples/queue1";xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx;0;71.197.193.44:53385;2020-04-08;536;0;232;62;0;;;;;;"Microsoft Azure Storage Explorer, 1.20.1, win32 azsdk-js-storagequeue/12.3.1 (NODE-VERSION v12.16.3; Windows_NT 10.0.22000)";;"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx";;;;;;;;`
+
+#### Example log entry for Table Storage
+
+`1.0;2022-01-03T20:35:13.0719766Z;CreateTable;Success;204;30;30;authenticated;logsamples;logsamples;table;https://logsamples.table.core.windows.net/Tables;"/logsamples/Table1";xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx;0;71.197.193.44:53389;2018-03-28;601;22;339;0;22;;;;;;"Microsoft Azure Storage Explorer, 1.20.1, win32, Azure-Storage/2.10.3 (NODE-VERSION v12.16.3; Windows_NT 10.0.22000)";;"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"`
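These entries land in the `$logs` container only once logging is enabled for each service. A hedged CLI sketch, with placeholder account values:

```azurecli
# Sketch: enable classic Storage Analytics logging for blob, queue, and table
# services, capturing read, write, and delete requests for 90 days.
az storage logging update \
  --services bqt \
  --log rwd \
  --retention 90 \
  --account-name mystorageaccount \
  --account-key <account-key>
```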
+
## Next steps

- [Enable and manage Azure Storage Analytics logs (classic)](manage-storage-analytics-logs.md)
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/file-sync/file-sync-planning.md
For Azure File Sync and DFS-R to work side by side:
1. Azure File Sync cloud tiering must be disabled on volumes with DFS-R replicated folders.
2. Server endpoints should not be configured on DFS-R read-only replication folders.
+3. Only a single server endpoint can overlap with a DFS-R location. Multiple server endpoints overlapping with other active DFS-R locations may lead to conflicts.
For more information, see [DFS Replication overview](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj127250(v=ws.11)).
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/solution-integration/validated-partners/analytics/partner-overview.md
This article highlights Microsoft partner companies that are integrated with Azu
![Striim company logo](./media/striim-logo.png) |**Striim**<br>Striim enables continuous data movement and in-stream transformations from a wide variety of sources into multiple Azure solutions including Azure Synapse Analytics, Cosmos DB, Azure cloud databases. The Striim solution enables Azure Data Lake Storage customers to quickly build streaming data pipelines. Customers can choose their desired data latency (real-time, micro-batch, or batch) and enrich the data with more context. These pipelines can then support any application or big data analytics solution, including Azure SQL Data Warehouse and Azure Databricks. |[Partner page](https://www.striim.com/partners/striim-for-microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/striim.azurestorageintegration?tab=overview)|
![Talend company logo](./media/talend-logo.png) |**Talend**<br>Talend Data Fabric is a platform that brings together multiple integration and governance capabilities. Using a single unified platform, Talend delivers complete, clean, and uncompromised data in real time. The Talend Trust Score helps assess the reliability of any data set. |[Partner page](https://www.talend.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/talend.talendclouddi)|
![Unravel](./media/unravel-logo.png) |**Unravel Data**<br>Unravel Data provides observability and automatic management through a single pane of glass. AI-powered recommendations proactively improve reliability, speed, and resource allocations of your data pipelines and jobs. Unravel connects easily with Azure Databricks, HDInsight, Azure Data Lake Storage, and more through the Azure Marketplace or Unravel SaaS service. Unravel Data also helps migrate to Azure by providing an assessment of your environment. This assessment uncovers usage details, dependency maps, cost, and effort needed for a fast move with less risk.|[Partner page](https://www.unraveldata.com/azure-databricks/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/unravel-data.unravel4databrickssubscriptionasaservice?tab=Overview)
-|![Wandisco company logo](./medi) is tightly integrated with Azure. Besides having an Azure portal deployment experience, it also uses role-based access control, Azure Active Directory, Azure Policy enforcement, and Activity log integration. With Azure Billing integration, you don't need to add a vendor contract or get more vendor approvals.<br><br>Accelerate the replication of Hadoop data between multiple sources and targets for any data architecture. With LiveData Cloud Services, your data will be available for Azure Databricks, Synapse Analytics, and HDInsight as soon as it lands, with guaranteed 100% data consistency. |[Partner page](https://www.wandisco.com/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/wandisco.ldm?tab=Overview)|
+|![Wandisco company logo](./medi) is tightly integrated with Azure. Besides having an Azure portal deployment experience, it also uses role-based access control, Azure Active Directory, Azure Policy enforcement, and Activity log integration. With Azure Billing integration, you don't need to add a vendor contract or get more vendor approvals.<br><br>Accelerate the replication of Hadoop data between multiple sources and targets for any data architecture. With LiveData Cloud Services, your data will be available for Azure Databricks, Synapse Analytics, and HDInsight as soon as it lands, with guaranteed 100% data consistency. |[Partner page](https://www.wandisco.com/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/wandisco.ldma?tab=Overview)|
Are you a storage partner but your solution is not listed yet? Send us your info [here](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR3i8TQB_XnRAsV3-7XmQFpFUQjY4QlJYUzFHQ0ZBVDNYWERaUlNRVU5IMyQlQCN0PWcu). ## Next steps
storsimple Storsimple 8000 Manage Volumes U2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storsimple/storsimple-8000-manage-volumes-u2.md
description: Explains how to add, modify, monitor, and delete StorSimple volumes
Previously updated : 08/11/2021 Last updated : 01/05/2022
Modify a volume when you need to expand it or change the hosts that access the v
3. In the list of disks, select the volume that you updated, right-click, and then select **Extend Volume**. The Extend Volume wizard starts. Click **Next**.
4. Complete the wizard, accepting the default values. After the wizard is finished, the volume should show the increased size.
- > [!NOTE]
- > If you expand a locally pinned volume and then expand another locally pinned volume immediately afterwards, the volume expansion jobs run sequentially. The first volume expansion job must finish before the next volume expansion job can begin.
+> [!NOTE]
+> - Expansion of a volume typically takes about 30 minutes.
+> - If you expand a locally pinned volume and then expand another locally pinned volume immediately afterwards, the volume expansion jobs run sequentially. The first volume expansion job must finish before the next volume expansion job can begin.
## Change the volume type
time-series-insights Time Series Insights Authentication And Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/time-series-insights-authentication-and-authorization.md
Required request headers are described below.
| Required request header | Description |
| | |
-| Authorization | To authenticate with Azure Time Series Insights, a valid OAuth 2.0 Bearer token must be passed in the [Authorization header](/rest/api/apimanagement/2020-12-01/authorization-server/create-or-update). |
+| Authorization | To authenticate with Azure Time Series Insights, a valid OAuth 2.0 Bearer token must be passed in the [Authorization header](/rest/api/apimanagement/current-preview/authorization-server/create-or-update). |
> [!TIP]
> Read the hosted Azure Time Series Insights [client SDK sample visualization](https://tsiclientsample.azurewebsites.net/) to learn how to authenticate with the Azure Time Series Insights APIs programmatically using the [JavaScript Client SDK](https://github.com/microsoft/tsiclient/blob/master/docs/API.md) along with charts and graphs.
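As a quick illustration of the bearer-token flow, a token scoped to the Time Series Insights resource can be acquired and attached to the Authorization header. A hedged sketch, with a placeholder environment FQDN:

```bash
# Sketch: acquire an OAuth 2.0 token for Time Series Insights,
# then pass it as a Bearer token on an API call.
token=$(az account get-access-token \
  --resource "https://api.timeseries.azure.com/" \
  --query accessToken --output tsv)

curl -H "Authorization: Bearer $token" \
  "https://<environment-fqdn>/availability?api-version=2020-07-31"
```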
virtual-machine-scale-sets Tutorial Use Disks Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/tutorial-use-disks-cli.md
Information on the disk size, storage tier, and LUN (Logical Unit Number) is sho
"lun": 0, "managedDisk": { "additionalProperties": {},
- "storageAccountType": "Standard_LRS"
+ "storageAccountType": "StandardSSD_LRS"
}, "name": null },
Information on the disk size, storage tier, and LUN (Logical Unit Number) is sho
"lun": 1, "managedDisk": { "additionalProperties": {},
- "storageAccountType": "Standard_LRS"
+ "storageAccountType": "StandardSSD_LRS"
}, "name": null },
Information on the disk size, storage tier, and LUN (Logical Unit Number) is sho
"lun": 2, "managedDisk": { "additionalProperties": {},
- "storageAccountType": "Standard_LRS"
+ "storageAccountType": "StandardSSD_LRS"
}, "name": null }
virtual-machine-scale-sets Virtual Machine Scale Sets Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-deploy-app.md
The Custom Script Extension downloads and executes scripts on Azure VMs. This ex
## Install an app to a Windows VM with PowerShell DSC
-[PowerShell Desired State Configuration (DSC)](/powershell/dsc/overview/overview) is a management platform to define the configuration of target machines. DSC configurations define what to install on a machine and how to configure the host. A Local Configuration Manager (LCM) engine runs on each target node that processes requested actions based on pushed configurations.
+[PowerShell Desired State Configuration (DSC)](/powershell/dsc/overview) is a management platform to define the configuration of target machines. DSC configurations define what to install on a machine and how to configure the host. A Local Configuration Manager (LCM) engine runs on each target node that processes requested actions based on pushed configurations.
The PowerShell DSC extension lets you customize VM instances in a scale set with PowerShell. The following example:
virtual-machine-scale-sets Virtual Machine Scale Sets Dsc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-dsc.md
Learn how the [DSC extension securely handles credentials](../virtual-machines/e
For more information on the Azure DSC extension handler, see [Introduction to the Azure Desired State Configuration extension handler](../virtual-machines/extensions/dsc-overview.md?toc=/azure/virtual-machines/windows/toc.json).
-For more information about PowerShell DSC, [visit the PowerShell documentation center](/powershell/dsc/overview/overview).
+For more information about PowerShell DSC, [visit the PowerShell documentation center](/powershell/dsc/overview).
virtual-machines Disks Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-pools.md
description: Learn about Azure disk pools (preview).
Previously updated : 11/02/2021 Last updated : 01/04/2022
Disk pools are currently available in the following regions:
## Billing
-When you deploy a disk pool, there are two main areas that will incur billing costs:
-- The disks added to the disk pool
-- The Azure resources deployed in the managed resource group that accompany the disk pool. These resources are:
- - Virtual machines.
- - Managed disks.
- - One network interface.
- - One storage account for diagnostic logs and metrics.
-
-You will be billed for the resources inside this managed resource group and the individual disks that are the actual data storage. For example, if you have a disk pool with one P30 disk added, you will be billed for the P30 disk and all resources deployed in the managed resource group. Other than these resources and your disks, there are no extra service charges for a disk pool. For details on the managed resource group, see the [How it works](#how-it-works) section.
-
-See the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) for regional pricing on VMs and disks to evaluate the cost of a disk pool for you. Azure resources consumed by the disk pool can be accounted for in Azure Reservations, if you have them.
+When you deploy a disk pool, there are two areas that will incur billing costs: the disk pool service fee itself, and the price of each individual disk added to the pool. For example, if you have a disk pool with one P30 disk added, you will be billed for the P30 disk and the disk pool. Other than the disk pool and your disks, there are no extra service charges, and you will not be billed for the resources deployed in the managed resource group: MSP_(resource-group-name)_(diskpool-name)_(region-name).
+See the [Azure managed disk pricing page](https://azure.microsoft.com/pricing/details/managed-disks/) for regional pricing on disk pools and disks to evaluate the cost of a disk pool for your scenario.
## Next steps
virtual-machines Dsc Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/dsc-credentials.md
This process is different than [using secure configurations without the extensio
- Get an [introduction to Azure DSC extension handler](dsc-overview.md).
- Examine the [Azure Resource Manager template for the DSC extension](dsc-template.md).
-- For more information about PowerShell DSC, go to the [PowerShell documentation center](/powershell/dsc/overview/overview).
+- For more information about PowerShell DSC, go to the [PowerShell documentation center](/powershell/dsc/overview/).
- For more functionality that you can manage by using PowerShell DSC, and for more DSC resources, browse the [PowerShell gallery](https://www.powershellgallery.com/packages?q=DscResource&x=0&y=0).
virtual-machines Dsc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/dsc-overview.md
Logs for the extension are stored in the following location: `C:\WindowsAzure\Lo
## Next steps

-- For more information about PowerShell DSC, go to the [PowerShell documentation center](/powershell/dsc/overview/overview).
+- For more information about PowerShell DSC, go to the [PowerShell documentation center](/powershell/dsc/overview).
- Examine the [Resource Manager template for the DSC extension](dsc-template.md). - For more functionality that you can manage by using PowerShell DSC, and for more DSC resources, browse the [PowerShell gallery](https://www.powershellgallery.com/packages?q=DscResource&x=0&y=0). - For details about passing sensitive parameters into configurations, see [Manage credentials securely with the DSC extension handler](dsc-credentials.md).
virtual-machines Dsc Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/dsc-template.md
or settings.configuration.module is specified"
- Learn about [using virtual machine scale sets with the Azure DSC extension](../../virtual-machine-scale-sets/virtual-machine-scale-sets-dsc.md).
- Find more details about [DSC's secure credential management](dsc-credentials.md).
- Get an [introduction to the Azure DSC extension handler](dsc-overview.md).
-- For more information about PowerShell DSC, go to the [PowerShell documentation center](/powershell/dsc/overview/overview).
+- For more information about PowerShell DSC, go to the [PowerShell documentation center](/powershell/dsc/overview).
virtual-machines Spot Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/spot-vms.md
The following [offer types](https://azure.microsoft.com/support/legal/offer-deta
- Enterprise Agreement
- Pay-as-you-go offer code (003P)
- Sponsored (0036P and 0136P)
-- For Cloud Service Provider (CSP), contact your partner
+- For Cloud Service Provider (CSP), see the [Partner Center](/partner-center/azure-plan-get-started) or contact your partner directly.
## Pricing
virtual-machines User Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/user-data.md
The VM.Properties in these requests should contain your desired UserData field,
"osDisk": { "caching": "ReadWrite", "managedDisk": {
- "storageAccountType": "Standard_LRS"
+ "storageAccountType": "StandardSSD_LRS"
}, "name": "vmOSdisk", "createOption": "FromImage"
virtual-machines Vm Generalized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/vm-generalized-image-version.md
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
"osDisk": { "caching": "ReadWrite", "managedDisk": {
- "storageAccountType": "Standard_LRS"
+ "storageAccountType": "StandardSSD_LRS"
}, "createOption": "FromImage" }
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
"osDisk": { "caching": "ReadWrite", "managedDisk": {
- "storageAccountType": "Standard_LRS"
+ "storageAccountType": "StandardSSD_LRS"
}, "createOption": "FromImage" }
virtual-machines Tutorial Custom Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/tutorial-custom-images.md
New-AzGalleryImageVersion `
-Location $resourceGroup.Location ` -TargetRegion $targetRegions ` -Source $sourceVM.Id.ToString() `
- -PublishingProfileEndOfLifeDate '2020-12-01'
+ -PublishingProfileEndOfLifeDate '2030-12-01'
```

It can take a while to replicate the image to all of the target regions.
virtual-network Virtual Network Tap Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-network-tap-overview.md
The accounts you use to apply TAP configuration on network interfaces must be as
- [Darktrace](https://www.darktrace.com/en/azure/)
- [ExtraHop Reveal(x)](https://www.extrahop.com/partners/tech-partners/microsoft/)
- [Fidelis Cybersecurity](https://www.fidelissecurity.com/technology-partners/microsoft-azure )
-- [Flowmon](https://www.flowmon.com/blog/azure-vtap)
+- [Flowmon](https://www.flowmon.com/en/blog/azure-vtap)
- [NetFort LANGuardian](https://www.netfort.com/languardian/solutions/visibility-in-azure-network-tap/)
- [Netscout vSTREAM]( https://www.netscout.com/marketplace-azure)
- [Noname Security](https://nonamesecurity.com/)
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/virtual-wan-faq.md
Yes. For a list of Managed Service Provider (MSP) solutions enabled via Azure Ma
### How does Virtual WAN Hub routing differ from Azure Route Server in a VNet?
-Azure Route Server provides a Border Gateway Protocol (BGP) peering service that can be used by NVAs (Network Virtual Appliance) to learn routes from the route server in a DIY hub VNet. Virtual WAN routing provides multiple capabilities including VNet-to-VNet transit routing, custom routing, custom route association and propagation, and a zero-touch fully meshed hub service along with connectivity services of ExpressRoute, Site VPN, Remote User/Large Scale P2S VPN, and Secure hub (Azure Firewall) capabilities. When you establish a BGP peering between your NVA and Azure Route Server, you can advertise IP addresses from your NVA to your virtual network. For all advanced routing capabilities such as transit routing, custom routing, etc., you can use Virtual WAN routing.
+Both the Azure Virtual WAN hub and Azure Route Server provide Border Gateway Protocol (BGP) peering capabilities that can be utilized by NVAs (Network Virtual Appliances) to advertise IP addresses from the NVA to the user's Azure virtual networks. The deployment options differ: Azure Route Server is typically deployed in a self-managed customer hub VNet, whereas Azure Virtual WAN provides a zero-touch fully meshed hub service to which customers connect their various spoke endpoints (Azure VNets, on-premises branches with Site-to-site VPN or SD-WAN, remote users with Point-to-site/Remote User VPN, and private connections with ExpressRoute). Virtual WAN offers BGP peering for NVAs deployed in spoke VNets, along with other capabilities such as transit connectivity for VNet-to-VNet, transit connectivity between VPN and ExpressRoute, custom/advanced routing, custom route association and propagation, routing intent/policies for no-hassle inter-region security, and Secure Hub/Azure Firewall. For more details about Virtual WAN BGP peering, see [How to peer BGP with a virtual hub](scenario-bgp-peering-hub.md).
### If I am using a third-party security provider (Zscaler, iBoss or Checkpoint) to secure my internet traffic, why don't I see the VPN site associated to the third-party security provider in the Azure portal?