Updates from: 04/16/2022 01:05:10
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Eid Me https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-eid-me.md
To configure your tenant application as a Relying Party in eID-Me, the following
| Name | Azure AD B2C/your desired application name |
| Domain | name.onmicrosoft.com |
| Redirect URIs | https://jwt.ms |
-| Redirect URLs | https://your-B2C-tenant-name.b2clogin.com/your-B2C-tenant-name.onmicrosoft.com/oauth2/authresp<br>For Example: `https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp`<br>If you use a custom domain, enter https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp.<br> Replace your-domain-name with your custom domain, and your-tenant-name with the name of your tenant. |
+| Redirect URLs | `https://your-B2C-tenant-name.b2clogin.com/your-B2C-tenant-name.onmicrosoft.com/oauth2/authresp`<br>For Example: `https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp`<br>If you use a custom domain, enter https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp.<br> Replace your-domain-name with your custom domain, and your-tenant-name with the name of your tenant. |
| URL for application home page | Will be displayed to the end user |
| URL for application privacy policy | Will be displayed to the end user |
There are additional identity claims that eID-Me supports and can be added.
1. Open the `TrustFrameworksExtension.xml`
-2. Find the `BuildingBlocks` element. This is where additional identity claims that eID-Me supports can be added. Full lists of supported eID-Me identity claims with descriptions are mentioned at [http://www.oid-info.com/get/1.3.6.1.4.1.50715](http://www.oid-info.com/get/1.3.6.1.4.1.50715) with the OIDC identifiers used here [https://eid-me.bluink.ca/.well-known/openid-configuration](https://eid-me.bluink.ca/.well-known/openid-configuration).
+2. Find the `BuildingBlocks` element. This is where additional identity claims that eID-Me supports can be added. Full lists of supported eID-Me identity claims with descriptions are mentioned at `http://www.oid-info.com/get/1.3.6.1.4.1.50715` with the OIDC identifiers used here [https://eid-me.bluink.ca/.well-known/openid-configuration](https://eid-me.bluink.ca/.well-known/openid-configuration).
```xml
<BuildingBlocks>
```
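For illustration, an added claim type under the `BuildingBlocks` element might look like the following sketch. The claim `Id`, display name, and help text here are hypothetical; take the actual identifiers from the eID-Me OIDC discovery document linked above.

```xml
<BuildingBlocks>
  <ClaimsSchema>
    <!-- Hypothetical example claim; replace the Id and texts with values
         from the eID-Me OIDC discovery document. -->
    <ClaimType Id="birthdate">
      <DisplayName>Date of birth</DisplayName>
      <DataType>string</DataType>
      <UserHelpText>Date of birth as verified by eID-Me.</UserHelpText>
    </ClaimType>
  </ClaimsSchema>
</BuildingBlocks>
```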
active-directory On Premises Migrate Microsoft Identity Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-migrate-microsoft-identity-manager.md
You can import into the Azure Active Directory (Azure AD) ECMA Connector Host a
>[!IMPORTANT]
>Currently, only the generic SQL and LDAP connectors are supported for use with the Azure AD ECMA Connector Host.
-## Create and export a connector configuration in MIM Sync
-If you already have MIM Sync with your ECMA connector configured, skip to step 10.
+## Create a connector configuration in MIM Sync
+This section is included for illustrative purposes, in case you wish to set up MIM Sync with a connector. If you already have MIM Sync with your ECMA connector configured, skip to the next section.
1. Prepare a Windows Server 2016 server, which is distinct from the server that will be used for running the Azure AD ECMA Connector Host. This host server should either have a SQL Server 2016 database colocated or have network connectivity to a SQL Server 2016 database. One way to set up this server is by deploying an Azure virtual machine with the image **SQL Server 2016 SP1 Standard on Windows Server 2016**. This server doesn't need internet connectivity other than remote desktop access for setup purposes.
1. Create an account for use during the MIM Sync installation. It can be a local account on that Windows Server instance. To create a local account, open **Control Panel** > **User Accounts**, and add the user account **mimsync**.
1. Add the account created in the previous step to the local Administrators group.
1. Give the account created earlier the ability to run a service. Start **Local Security Policy** and select **Local Policies** > **User Rights Assignment** > **Log on as a service**. Add the account mentioned earlier.
- 1. Install MIM Sync on this host. If you don't have MIM Sync binaries, you can install an evaluation by downloading the zip file from the [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=48244), mounting the ISO image, and copying the folder **Synchronization Service** to the Windows Server host. Then run the setup program contained in that folder. Evaluation software is time limited and will expire. It isn't intended for production use.
+ 1. Install MIM Sync on this host.
1. After the installation of MIM Sync is complete, sign out and sign back in.
- 1. Install your connector on the same server as MIM Sync. For illustration purposes, this test lab guide will illustrate using one of the Microsoft-supplied connectors for download from the [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=51495).
+ 1. Install your connector on the same server as MIM Sync. For illustration purposes, use either of the Microsoft-supplied SQL or LDAP connectors for download from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=51495).
1. Start the Synchronization Service UI. Select **Management Agents**. Select **Create**, and specify the connector management agent. Be sure to select a connector management agent that's ECMA based.
1. Give the connector a name, and configure the parameters needed to import and export data to the connector. Be sure to configure that the connector can import and export single-valued string attributes of a user or person object type.
+## Export a connector configuration from MIM Sync
+ 1. On the MIM Sync server computer, start the Synchronization Service UI, if it isn't already running. Select **Management Agents**.
+ 1. Select the connector, and select **Export Management Agent**. Save the XML file, and the DLL and related software for your connector, to the Windows server that will be holding the ECMA Connector Host. At this point, the MIM Sync server is no longer needed.
- 1. Sign in to the Windows server as the account that the Azure AD ECMA Connector Host will run as.
+## Import a connector configuration
+
+ 1. Install the ECMA Connector host and provisioning agent on a Windows Server, using the [provisioning users into SQL based applications](on-premises-sql-connector-configure.md#download-install-and-configure-the-azure-ad-connect-provisioning-agent-package) or [provisioning users into LDAP directories](on-premises-ldap-connector-configure.md#download-install-and-configure-the-azure-ad-connect-provisioning-agent-package) articles.
+ 1. Sign in to the Windows server as the account that the Azure AD ECMA Connector Host runs as.
1. Change to the directory C:\Program Files\Microsoft ECMA2host\Service\ECMA. Ensure there are one or more DLLs already present in that directory. Those DLLs correspond to Microsoft-delivered connectors.
1. Copy the MA DLL for your connector, and any of its prerequisite DLLs, to that same ECMA subdirectory of the Service directory.
1. Change to the directory C:\Program Files\Microsoft ECMA2Host\Wizard. Run the program Microsoft.ECMA2Host.ConfigWizard.exe to set up the ECMA Connector Host configuration.
1. A new window appears with a list of connectors. By default, no connectors will be present. Select **New connector**.
- 1. Specify the management agent XML file that was exported from MIM Sync earlier. Continue with the configuration and schema-mapping instructions from the section "Configure a connector."
+ 1. Specify the management agent XML file that was exported from MIM Sync earlier. Continue with the configuration and schema-mapping instructions from the section "Create a connector" in either the [provisioning users into SQL based applications](on-premises-sql-connector-configure.md#create-a-generic-sql-connector) or [provisioning users into LDAP directories](on-premises-ldap-connector-configure.md#configure-a-generic-ldap-connector) articles.
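The DLL copy in the steps above can be scripted; a minimal sketch, assuming a hypothetical connector DLL name and location:

```powershell
# Copy the management agent DLL (the name here is hypothetical) and any
# prerequisite DLLs to the ECMA subdirectory used by the ECMA Connector Host.
Copy-Item -Path 'C:\Temp\MyConnectorMA.dll' `
          -Destination 'C:\Program Files\Microsoft ECMA2Host\Service\ECMA'
```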
## Next steps
-- [App provisioning](user-provisioning.md)
-- [Generic SQL connector](on-premises-sql-connector-configure.md)
+- Learn more about [App provisioning](user-provisioning.md)
+- [Configuring Azure AD to provision users into SQL based applications](on-premises-sql-connector-configure.md) with the Generic SQL connector
+- [Configuring Azure AD to provision users into LDAP directories](on-premises-ldap-connector-configure.md) with the Generic LDAP connector
active-directory How To Migrate Mfa Server To Azure Mfa User Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md
Previously updated : 06/22/2021 Last updated : 04/07/2022
If you are already using Conditional Access to determine when users are prompted
As users are migrated to cloud authentication, they will start using Azure AD MFA as defined by your existing Conditional Access policies. They won't be redirected to AD FS and MFA Server anymore.
-If your federated domain(s) have SupportsMFA set to false, you are likely enforcing MFA on AD FS using claims rules.
+If your federated domain(s) have the [federatedIdpMfaBehavior](/graph/api/resources/federatedIdpMfaBehavior?view=graph-rest-beta) set to `enforceMfaByFederatedIdp` or **SupportsMfa** flag set to `$True` (the **federatedIdpMfaBehavior** overrides **SupportsMfa** when both are set), you are likely enforcing MFA on AD FS using claims rules.
In this case, you will need to analyze your claims rules on the Azure AD relying party trust and create Conditional Access policies that support the same security goals. If you need to configure Conditional Access policies, you need to do so before enabling staged rollout.
active-directory How To Migrate Mfa Server To Azure Mfa With Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa-with-federation.md
description: Step-by-step guidance to move from Azure MFA Server on-premises to
Previously updated : 06/22/2021 Last updated : 04/07/2022
Once you've configured the servers, you can add Azure AD MFA as an additional au
![Screenshot showing the Edit authentication methods screen with Azure MFA and Azure Multi-Factor Authentication Server selected](./media/how-to-migrate-mfa-server-to-azure-mfa-user-authentication/edit-authentication-methods.png)
-## Prepare Azure AD and implement
+## Prepare Azure AD and implement migration
-### Ensure SupportsMFA is set to True
+This section covers final steps before migrating user phone numbers.
-For federated domains, MFA may be enforced by Azure AD Conditional Access or by the on-premises federation provider. Each federated domain in Azure AD has a SupportsMFA flag. When the SupportsMFA flag is set to True, Azure AD redirects users to MFA on AD FS or another federation providers. For example, if a user is accessing an application for which a Conditional Access policy that requires MFA has been configured, the user will be redirected to AD FS. Adding Azure AD MFA as an authentication method in AD FS, enables Azure AD MFA to be invoked once your configurations are complete.
+### Set federatedIdpMfaBehavior to enforceMfaByFederatedIdp
-If the SupportsMFA flag is set to False, you're likely not using Azure MFA; you're probably using claims rules on AD FS relying parties to invoke MFA.
+For federated domains, MFA may be enforced by Azure AD Conditional Access or by the on-premises federation provider. Each federated domain has a Microsoft Graph PowerShell security setting named **federatedIdpMfaBehavior**. You can set **federatedIdpMfaBehavior** to `enforceMfaByFederatedIdp` so Azure AD accepts MFA that's performed by the federated identity provider. If the federated identity provider didn't perform MFA, Azure AD redirects the request to the federated identity provider to perform MFA. For more information, see [federatedIdpMfaBehavior](/graph/api/resources/federatedIdpMfaBehavior?view=graph-rest-beta&preserve-view=true).
-You can check the status of your SupportsMFA flag with the following [Windows PowerShell cmdlet](/powershell/module/msonline/get-msoldomainfederationsettings):
+>[!NOTE]
+> The **federatedIdpMfaBehavior** setting is an evolved version of the **SupportsMfa** property of the [Set-MsolDomainFederationSettings MSOnline v1 PowerShell cmdlet](/powershell/module/msonline/set-msoldomainfederationsettings).
+
+For domains that have already set the **SupportsMfa** property, these rules determine how **federatedIdpMfaBehavior** and **SupportsMfa** work together:
+
+- Switching between **federatedIdpMfaBehavior** and **SupportsMfa** is not supported.
+- Once **federatedIdpMfaBehavior** property is set, Azure AD ignores the **SupportsMfa** setting.
+- If the **federatedIdpMfaBehavior** property is never set, Azure AD will continue to honor the **SupportsMfa** setting.
+- If neither **federatedIdpMfaBehavior** nor **SupportsMfa** is set, Azure AD will default to `acceptIfMfaDoneByFederatedIdp` behavior.
+
+You can check the status of **federatedIdpMfaBehavior** by using [Get-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/get-mgdomainfederationconfiguration?view=graph-powershell-beta&preserve-view=true).
```powershell
-Get-MsolDomainFederationSettings -DomainName yourdomain.com
+Get-MgDomainFederationConfiguration -DomainId yourdomain.com
```
-If the SupportsMFA flag is set to false or is blank for your federated domain, set it to true using the following Windows PowerShell cmdlet:
+You can also check the status of your **SupportsMfa** flag with [Get-MsolDomainFederationSettings](/powershell/module/msonline/get-msoldomainfederationsettings):
```powershell
-Set-MsolDomainFederationSettings -DomainName contoso.com -SupportsMFA $true
+Get-MsolDomainFederationSettings -DomainName yourdomain.com
+```
+
+The following example shows how to set **federatedIdpMfaBehavior** to `enforceMfaByFederatedIdp` by using Graph PowerShell.
+
+#### Request
+<!-- {
+ "blockType": "request",
+ "name": "update_internaldomainfederation"
+}
+-->
+``` http
+PATCH https://graph.microsoft.com/beta/domains/contoso.com/federationConfiguration/6601d14b-d113-8f64-fda2-9b5ddda18ecc
+Content-Type: application/json
+{
+ "federatedIdpMfaBehavior": "enforceMfaByFederatedIdp"
+}
+```
+#### Response
+>**Note:** The response object shown here might be shortened for readability.
+<!-- {
+ "blockType": "response",
+ "truncated": true,
+ "@odata.type": "microsoft.graph.internalDomainFederation"
+}
+-->
+``` http
+HTTP/1.1 200 OK
+Content-Type: application/json
+{
+ "@odata.type": "#microsoft.graph.internalDomainFederation",
+ "id": "6601d14b-d113-8f64-fda2-9b5ddda18ecc",
+ "issuerUri": "http://contoso.com/adfs/services/trust",
+ "metadataExchangeUri": "https://sts.contoso.com/adfs/services/trust/mex",
+ "signingCertificate": "MIIE3jCCAsagAwIBAgIQQcyDaZz3MI",
+ "passiveSignInUri": "https://sts.contoso.com/adfs/ls",
+ "preferredAuthenticationProtocol": "wsFed",
+ "activeSignInUri": "https://sts.contoso.com/adfs/services/trust/2005/usernamemixed",
+ "signOutUri": "https://sts.contoso.com/adfs/ls",
+ "promptLoginBehavior": "nativeSupport",
+ "isSignedAuthenticationRequestRequired": true,
+ "nextSigningCertificate": "MIIE3jCCAsagAwIBAgIQQcyDaZz3MI",
+ "signingCertificateUpdateStatus": {
+ "certificateUpdateResult": "Success",
+ "lastRunDateTime": "2021-08-25T07:44:46.2616778Z"
+ },
+ "federatedIdpMfaBehavior": "enforceMfaByFederatedIdp"
+}
```
-This configuration allows the decision to use MFA Server or Azure MFA to be made on AD FS.
### Configure Conditional Access policies if needed

If you use Conditional Access to determine when users are prompted for MFA, you shouldn't need to change your policies.
-If your federated domain(s) have SupportsMFA set to false, analyze your claims rules on the Azure AD relying party trust and create Conditional Access policies that support the same security goals.
+If your federated domain(s) have SupportsMfa set to false, analyze your claims rules on the Azure AD relying party trust and create Conditional Access policies that support the same security goals.
After creating conditional access policies to enforce the same controls as AD FS, you can back up and remove your claim rules customizations on the Azure AD Relying Party.
Detailed Azure MFA registration information can be found on the Registration tab
![Image of Authentication methods activity screen showing user registrations to MFA](./media/how-to-migrate-mfa-server-to-azure-mfa-with-federation/authentication-methods.png)
-
+ ## Clean up steps
active-directory Howto Mfa Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-adfs.md
Previously updated : 04/29/2021 Last updated : 04/15/2022
If your organization is federated with Azure Active Directory, use Azure AD Multi-Factor Authentication or Active Directory Federation Services (AD FS) to secure resources that are accessed by Azure AD. Use the following procedures to secure Azure Active Directory resources with either Azure AD Multi-Factor Authentication or Active Directory Federation Services.

>[!NOTE]
->To secure your Azure AD resource, it is recommended to require MFA through a [Conditional Access policy](../conditional-access/howto-conditional-access-policy-all-users-mfa.md), set the domain setting SupportsMfa to $True and [emit the multipleauthn claim](#secure-azure-ad-resources-using-ad-fs) when a user performs two-step verification successfully.
+>Set the domain setting [federatedIdpMfaBehavior](/graph/api/resources/federatedIdpMfaBehavior?view=graph-rest-beta&preserve-view=true) to `enforceMfaByFederatedIdp` (recommended) or **SupportsMFA** to `$True`. The **federatedIdpMfaBehavior** setting overrides **SupportsMFA** when both are set.
## Secure Azure AD resources using AD FS
To secure your cloud resource, set up a claims rule so that Active Directory Fed
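As a sketch of such a rule (the full procedure is in the article body, truncated in this digest), a pass-through rule on the Azure AD relying party trust that emits the multipleauthn claim when AD FS performed two-step verification could look like the following; verify it against your own claims pipeline before use:

```
c:[Type == "http://schemas.microsoft.com/claims/authnmethodsreferences",
   Value == "http://schemas.microsoft.com/claims/multipleauthn"]
 => issue(claim = c);
```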
## Trusted IPs for federated users
-Trusted IPs allow administrators to by-pass two-step verification for specific IP addresses, or for federated users that have requests originating from within their own intranet. The following sections describe how to configure Azure AD Multi-Factor Authentication Trusted IPs with federated users and by-pass two-step verification when a request originates from within a federated users intranet. This is achieved by configuring AD FS to use a pass-through or filter an incoming claim template with the Inside Corporate Network claim type.
+Trusted IPs allow administrators to bypass two-step verification for specific IP addresses, or for federated users whose requests originate from within their own intranet. The following sections describe how to configure the bypass using Trusted IPs. This is achieved by configuring AD FS to use a pass-through or filter an incoming claim template with the Inside Corporate Network claim type.
This example uses Microsoft 365 for our Relying Party Trusts.
active-directory Howto Mfa Nps Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension.md
For customers that use the Azure Government or Azure China 21Vianet clouds, the
| Registry key | Value |
|--|--|
| AZURE_MFA_HOSTNAME | strongauthenticationservice.auth.microsoft.us |
+ | AZURE_MFA_RESOURCE_HOSTNAME | adnotifications.windowsazure.us |
| STS_URL | https://login.microsoftonline.us/ |

1. For Azure China 21Vianet customers, set the following key values:
For customers that use the Azure Government or Azure China 21Vianet clouds, the
| Registry key | Value |
|--|--|
| AZURE_MFA_HOSTNAME | strongauthenticationservice.auth.microsoft.cn |
+ | AZURE_MFA_RESOURCE_HOSTNAME | adnotifications.windowsazure.cn |
| STS_URL | https://login.chinacloudapi.cn/ |

1. Repeat the previous two steps to set the registry key values for each NPS server.
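The registry edits above can also be scripted; a minimal PowerShell sketch, assuming the NPS extension keeps its settings under `HKLM:\SOFTWARE\Microsoft\AzureMfa` (verify the path on your NPS server before running):

```powershell
# Example using the Azure China 21Vianet values from the table above;
# substitute the Azure Government values as needed. The registry path
# is an assumption - confirm it on your NPS server.
$path = 'HKLM:\SOFTWARE\Microsoft\AzureMfa'
Set-ItemProperty -Path $path -Name 'AZURE_MFA_HOSTNAME' -Value 'strongauthenticationservice.auth.microsoft.cn'
Set-ItemProperty -Path $path -Name 'AZURE_MFA_RESOURCE_HOSTNAME' -Value 'adnotifications.windowsazure.cn'
Set-ItemProperty -Path $path -Name 'STS_URL' -Value 'https://login.chinacloudapi.cn/'
Restart-Service -Name IAS   # NPS runs as the IAS service; restart so the values take effect
```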
active-directory Howto Password Ban Bad On Premises Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-ban-bad-on-premises-deploy.md
The current setting can be queried using the `Get-AzureADPasswordProtectionProxy
The `Get-AzureADPasswordProtectionProxy` cmdlet may be used to query the software version of all currently installed Azure AD Password Protection proxy servers in a forest.
+> [!NOTE]
+> The proxy service will only automatically upgrade to a newer version when critical security patches are needed.
+
### Manual upgrade process
A manual upgrade is accomplished by running the latest version of the `AzureADPasswordProtectionProxySetup.exe` software installer. The latest version of the software is available on the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=57071).
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
By selecting **Other clients**, you can specify a condition that affects apps th
## Device state (preview)
-> [!CAUTION]
-> **This preview feature has been deprecated.** Customers should use **Filter for devices** condition in Conditional Access to satisfy scenarios, previously achieved using device state (preview) condition.
+**This preview feature is being deprecated.** Customers should use the **Filter for devices** condition in Conditional Access to satisfy scenarios previously achieved using the device state (preview) condition.
+ The device state condition was used to exclude devices that are hybrid Azure AD joined and/or devices marked as compliant with a Microsoft Intune compliance policy from an organization's Conditional Access policies.
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
Use the What-If tool to simulate a login from the user to the target application
To make sure that your policy works as expected, the recommended best practice is to test it before rolling it out into production. Ideally, use a test tenant to verify whether your new policy works as intended. For more information, see the article [Plan a Conditional Access deployment](plan-conditional-access.md).

## Known issues
-- If you configure sign-in frequency for mobile devices, authentication after each sign-in frequency internal would be slow (can take 30 seconds on average). Also, it could happen across various apps at the same time.
+- If you configure sign-in frequency for mobile devices, authentication after each sign-in frequency interval could be slow (it can take 30 seconds on average). Also, it could happen across various apps at the same time.
- In iOS devices, if an app configures certificates as the first authentication factor and the app has both Sign-in frequency and [Intune mobile application management](/mem/intune/apps/app-lifecycle) policies applied, the end-users will be blocked from signing in to the app when the policy is triggered. ## Next steps
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/overview.md
Previously updated : 02/08/2022 Last updated : 04/15/2022
Customers with [Microsoft 365 Business Premium licenses](/office365/servicedescr
Risk-based policies require access to [Identity Protection](../identity-protection/overview-identity-protection.md), which is an Azure AD P2 feature.
+Other products and features that may interact with Conditional Access policies require appropriate licensing for those products and features.
+
## Next steps
- [Building a Conditional Access policy piece by piece](concept-conditional-access-policies.md)
active-directory Active Directory V2 Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-v2-protocols.md
Three types of bearer tokens are used by the Microsoft identity platform as *sec
* [ID tokens](id-tokens.md) - ID tokens are issued by the authorization server to the client application. Clients use ID tokens when signing in users and to get basic information about them.
-* **Refresh tokens** - The client uses a refresh token, or *RT*, to request new access and ID tokens from the authorization server. Your code should treat refresh tokens and their string content as opaque because they're intended for use only by authorization server.
+* [Refresh tokens](refresh-tokens.md) - The client uses a refresh token, or *RT*, to request new access and ID tokens from the authorization server. Your code should treat refresh tokens and their string content as opaque because they're intended for use only by the authorization server.
## App registration
active-directory V2 Oauth2 Client Creds Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-client-creds-grant-flow.md
You can use the OAuth 2.0 client credentials grant specified in [RFC 6749](https
This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also take a look at the [sample apps that use MSAL](sample-v2-code.md).
-The OAuth 2.0 client credentials grant flow permits a web service (confidential client) to use its own credentials, instead of impersonating a user, to authenticate when calling another web service. For a higher level of assurance, the Microsoft identity platform also allows the calling service to authenticate using a [certificate](#second-case-access-token-request-with-a-certificate) or federated credential instead of a shared secret. Because the applications own credentials are being used, these credentials must be kept safe - _never_ publish that credential in your source code, embed it in web pages, or use it in a widely distributed native application.
+The OAuth 2.0 client credentials grant flow permits a web service (confidential client) to use its own credentials, instead of impersonating a user, to authenticate when calling another web service. For a higher level of assurance, the Microsoft identity platform also allows the calling service to authenticate using a [certificate](#second-case-access-token-request-with-a-certificate) or federated credential instead of a shared secret. Because the application's own credentials are being used, these credentials must be kept safe - _never_ publish that credential in your source code, embed it in web pages, or use it in a widely distributed native application.
In the client credentials flow, permissions are granted directly to the application itself by an administrator. When the app presents a token to a resource, the resource enforces that the app itself has authorization to perform an action since there is no user involved in the authentication. This article covers both the steps needed to [authorize an application to call an API](#application-permissions), as well as [how to get the tokens needed to call that API](#get-a-token).
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-management-azure-portal.md
There are two ways to delete a device:
- The toolbar, after you drill down for a specific device.

> [!IMPORTANT]
-> - You must be a Cloud Device Administrator, Intune Administrator, or Global Administrator in Azure AD to delete a device.
+> - You must be a Cloud Device Administrator, Intune Administrator, Windows 365 Administrator, or Global Administrator in Azure AD to delete a device.
> - Printers and Windows Autopilot devices can't be deleted in Azure AD.
> - Deleting a device:
>    - Prevents it from accessing your Azure AD resources.
active-directory Active Directory Get Started Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-get-started-premium.md
Before you sign up for Active Directory Premium 1 or Premium 2, you must first d
Signing up using your Azure subscription with previously purchased and activated Azure AD licenses, automatically activates the licenses in the same directory. If that's not the case, you must still activate your license plan and your Azure AD access. For more information about activating your license plan, see [Activate your new license plan](#activate-your-new-license-plan). For more information about activating your Azure AD access, see [Activate your Azure AD access](#activate-your-azure-ad-access). ## Sign up using your existing Azure or Microsoft 365 subscription
-As an Azure or Microsoft 365 subscriber, you can purchase the Azure Active Directory Premium editions online. For detailed steps, see How to Purchase Azure Active Directory Premium - New Customers.
+As an Azure or Microsoft 365 subscriber, you can purchase the Azure Active Directory Premium editions online. For detailed steps, see [Buy or remove licenses](https://docs.microsoft.com/microsoft-365/commerce/licenses/buy-licenses?view=o365-worldwide).
## Sign up using your Enterprise Mobility + Security licensing plan

Enterprise Mobility + Security is a suite, comprised of Azure AD Premium, Azure Information Protection, and Microsoft Intune. If you already have an EMS license, you can get started with Azure AD, using one of these licensing options:
active-directory Active Directory Whatis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-whatis.md
# What is Azure Active Directory?
-Azure Active Directory (Azure AD) is a cloud-based identity and access management service. This service helps your employees access external resources, such as Microsoft 365, the Azure portal, and thousands of other SaaS applications. Azure AD also helps them access internal resources. These are resources like apps on your corporate network and intranet, along with any cloud apps developed by your own organization. For more information about creating a tenant for your organization, see [Quickstart: Create a new tenant in Azure Active Directory](active-directory-access-create-new-tenant.md).
+Azure Active Directory (Azure AD) is a cloud-based identity and access management service. This service helps your employees access external resources, such as Microsoft 365, the Azure portal, and thousands of other SaaS applications. Azure Active Directory also helps them access internal resources like apps on your corporate intranet network, along with any cloud apps developed for your own organization. For more information about creating a tenant for your organization, see [Quickstart: Create a new tenant in Azure Active Directory](active-directory-access-create-new-tenant.md).
-To learn the difference between Azure AD and Active Directory Domain Services, see [Compare Active Directory to Azure Active Directory](active-directory-compare-azure-ad-to-ad.md). You can also use the various [Microsoft Cloud for Enterprise Architects Series](/microsoft-365/solutions/cloud-architecture-models) posters to better understand the core identity services in Azure, Azure AD, and Microsoft 365.
+To learn the differences between Active Directory and Azure Active Directory, see [Compare Active Directory to Azure Active Directory](active-directory-compare-azure-ad-to-ad.md). You can also refer to the [Microsoft Cloud for Enterprise Architects Series](/microsoft-365/solutions/cloud-architecture-models) posters to better understand the core identity services in Azure, like Azure AD and Microsoft 365.
## Who uses Azure AD?
To better understand Azure AD and its documentation, we recommend reviewing the
- [Associate an Azure subscription to your Azure Active Directory](active-directory-how-subscriptions-associated-directory.md) -- [Azure Active Directory Premium P2 feature deployment checklist](active-directory-deployment-checklist-p2.md)
+- [Azure Active Directory Premium P2 feature deployment checklist](active-directory-deployment-checklist-p2.md)
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
For more information about users flows, see [User flow versions in Azure Active
In July 2020 we have added following 55 new applications in our App gallery with Federation support:
-[Clap Your Hands](http://www.rmit.com.ar/), [Appreiz](https://microsoftteams.appreiz.com/), [Inextor Vault](https://inexto.com/inexto-suite/inextor), [Beekast](https://my.beekast.com/), [Templafy OpenID Connect](https://app.templafy.com/), [PeterConnects receptionist](https://msteams.peterconnects.com/), [AlohaCloud](https://appfusions.alohacloud.com/auth), Control Tower, [Cocoom](https://start.cocoom.com/), [COINS Construction Cloud](https://sso.coinsconstructioncloud.com/#login/), [Medxnote MT](https://task.teamsmain.medx.im/authorization), [Reflekt](https://reflekt.konsolute.com/login), [Rever](https://app.reverscore.net/access), [MyCompanyArchive](https://login.mycompanyarchive.com/), [GReminders](https://app.greminders.com/o365-oauth), [Titanfile](../saas-apps/titanfile-tutorial.md), [Wootric](../saas-apps/wootric-tutorial.md), [SolarWinds Orion](https://support.solarwinds.com/SuccessCenter/s/orion-platform?language=en_US), [OpenText Directory Services](../saas-apps/opentext-directory-services-tutorial.md), [Datasite](../saas-apps/datasite-tutorial.md), [BlogIn](../saas-apps/blogin-tutorial.md), [IntSights](../saas-apps/intsights-tutorial.md), [kpifire](../saas-apps/kpifire-tutorial.md), [Textline](../saas-apps/textline-tutorial.md), [Cloud Academy - SSO](../saas-apps/cloud-academy-sso-tutorial.md), [Community Spark](../saas-apps/community-spark-tutorial.md), [Chatwork](../saas-apps/chatwork-tutorial.md), [CloudSign](../saas-apps/cloudsign-tutorial.md), [C3M Cloud Control](../saas-apps/c3m-cloud-control-tutorial.md), [SmartHR](https://smarthr.jp/), [NumlyEngageΓäó](../saas-apps/numlyengage-tutorial.md), [Michigan Data Hub Single Sign-On](../saas-apps/michigan-data-hub-single-sign-on-tutorial.md), [Egress](../saas-apps/egress-tutorial.md), [SendSafely](../saas-apps/sendsafely-tutorial.md), [Eletive](https://app.eletive.com/), [Right-Hand Cybersecurity ADI](https://right-hand.ai/), [Fyde Enterprise Authentication](https://enterprise.fyde.com/), 
[Verme](../saas-apps/verme-tutorial.md), [Lenses.io](../saas-apps/lensesio-tutorial.md), [Momenta](../saas-apps/momenta-tutorial.md), [Uprise](https://app.uprise.co/sign-in), [Q](https://q.moduleq.com/login), [CloudCords](../saas-apps/cloudcords-tutorial.md), [TellMe Bot](https://tellme365liteweb.azurewebsites.net/), [Inspire](https://app.inspiresoftware.com/), [Maverics Identity Orchestrator SAML Connector](https://www.strata.io/identity-fabric/), [Smartschool (School Management System)](https://smartschoolz.com/login), [Zepto - Intelligent timekeeping](https://user.zepto-ai.com/signin), [Studi.ly](https://studi.ly/), [Trackplan](http://www.trackplanfm.com/), [Skedda](../saas-apps/skedda-tutorial.md), [WhosOnLocation](../saas-apps/whos-on-location-tutorial.md), [Coggle](../saas-apps/coggle-tutorial.md), [Kemp LoadMaster](https://kemptechnologies.com/cloud-load-balancer/), [BrowserStack Single Sign-on](../saas-apps/browserstack-single-sign-on-tutorial.md)
+[Appreiz](https://microsoftteams.appreiz.com/), [Inextor Vault](https://inexto.com/inexto-suite/inextor), [Beekast](https://my.beekast.com/), [Templafy OpenID Connect](https://app.templafy.com/), [PeterConnects receptionist](https://msteams.peterconnects.com/), [AlohaCloud](https://appfusions.alohacloud.com/auth), Control Tower, [Cocoom](https://start.cocoom.com/), [COINS Construction Cloud](https://sso.coinsconstructioncloud.com/#login/), [Medxnote MT](https://task.teamsmain.medx.im/authorization), [Reflekt](https://reflekt.konsolute.com/login), [Rever](https://app.reverscore.net/access), [MyCompanyArchive](https://login.mycompanyarchive.com/), [GReminders](https://app.greminders.com/o365-oauth), [Titanfile](../saas-apps/titanfile-tutorial.md), [Wootric](../saas-apps/wootric-tutorial.md), [SolarWinds Orion](https://support.solarwinds.com/SuccessCenter/s/orion-platform?language=en_US), [OpenText Directory Services](../saas-apps/opentext-directory-services-tutorial.md), [Datasite](../saas-apps/datasite-tutorial.md), [BlogIn](../saas-apps/blogin-tutorial.md), [IntSights](../saas-apps/intsights-tutorial.md), [kpifire](../saas-apps/kpifire-tutorial.md), [Textline](../saas-apps/textline-tutorial.md), [Cloud Academy - SSO](../saas-apps/cloud-academy-sso-tutorial.md), [Community Spark](../saas-apps/community-spark-tutorial.md), [Chatwork](../saas-apps/chatwork-tutorial.md), [CloudSign](../saas-apps/cloudsign-tutorial.md), [C3M Cloud Control](../saas-apps/c3m-cloud-control-tutorial.md), [SmartHR](https://smarthr.jp/), [NumlyEngage™](../saas-apps/numlyengage-tutorial.md), [Michigan Data Hub Single Sign-On](../saas-apps/michigan-data-hub-single-sign-on-tutorial.md), [Egress](../saas-apps/egress-tutorial.md), [SendSafely](../saas-apps/sendsafely-tutorial.md), [Eletive](https://app.eletive.com/), [Right-Hand Cybersecurity ADI](https://right-hand.ai/), [Fyde Enterprise Authentication](https://enterprise.fyde.com/), [Verme](../saas-apps/verme-tutorial.md),
[Lenses.io](../saas-apps/lensesio-tutorial.md), [Momenta](../saas-apps/momenta-tutorial.md), [Uprise](https://app.uprise.co/sign-in), [Q](https://q.moduleq.com/login), [CloudCords](../saas-apps/cloudcords-tutorial.md), [TellMe Bot](https://tellme365liteweb.azurewebsites.net/), [Inspire](https://app.inspiresoftware.com/), [Maverics Identity Orchestrator SAML Connector](https://www.strata.io/identity-fabric/), [Smartschool (School Management System)](https://smartschoolz.com/login), [Zepto - Intelligent timekeeping](https://user.zepto-ai.com/signin), [Studi.ly](https://studi.ly/), [Trackplan](http://www.trackplanfm.com/), [Skedda](../saas-apps/skedda-tutorial.md), [WhosOnLocation](../saas-apps/whos-on-location-tutorial.md), [Coggle](../saas-apps/coggle-tutorial.md), [Kemp LoadMaster](https://kemptechnologies.com/cloud-load-balancer/), [BrowserStack Single Sign-on](../saas-apps/browserstack-single-sign-on-tutorial.md)
You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial
For more information, see the [Risk detection API reference documentation](/grap
In June 2019, we've added these 22 new apps with Federation support to the app gallery:
-[Azure AD SAML Toolkit](../saas-apps/saml-toolkit-tutorial.md), [Otsuka Shokai (大塚商会)](../saas-apps/otsuka-shokai-tutorial.md), [ANAQUA](../saas-apps/anaqua-tutorial.md), [Azure VPN Client](https://portal.azure.com/), [ExpenseIn](../saas-apps/expensein-tutorial.md), [Helper Helper](../saas-apps/helper-helper-tutorial.md), [Costpoint](../saas-apps/costpoint-tutorial.md), [GlobalOne](../saas-apps/globalone-tutorial.md), [Mercedes-Benz In-Car Office](https://me.secure.mercedes-benz.com/), [Skore](https://app.justskore.it/), [Oracle Cloud Infrastructure Console](../saas-apps/oracle-cloud-tutorial.md), [CyberArk SAML Authentication](../saas-apps/cyberark-saml-authentication-tutorial.md), [Scrible Edu](https://www.scrible.com/sign-in/#/create-account), [PandaDoc](../saas-apps/pandadoc-tutorial.md), [Perceptyx](https://apexdata.azurewebsites.net/docs.microsoft.com/azure/active-directory/saas-apps/perceptyx-tutorial), [Proptimise OS](https://www.proptimise.com/), [Vtiger CRM (SAML)](../saas-apps/vtiger-crm-saml-tutorial.md), Oracle Access Manager for Oracle Retail Merchandising, Oracle Access Manager for Oracle E-Business Suite, Oracle IDCS for E-Business Suite, Oracle IDCS for PeopleSoft, Oracle IDCS for JD Edwards
+[Azure AD SAML Toolkit](../saas-apps/saml-toolkit-tutorial.md), [Otsuka Shokai (大塚商会)](../saas-apps/otsuka-shokai-tutorial.md), [ANAQUA](../saas-apps/anaqua-tutorial.md), [Azure VPN Client](https://portal.azure.com/), [ExpenseIn](../saas-apps/expensein-tutorial.md), [Helper Helper](../saas-apps/helper-helper-tutorial.md), [Costpoint](../saas-apps/costpoint-tutorial.md), [GlobalOne](../saas-apps/globalone-tutorial.md), [Mercedes-Benz In-Car Office](https://me.secure.mercedes-benz.com/), [Skore](https://app.justskore.it/), [Oracle Cloud Infrastructure Console](../saas-apps/oracle-cloud-tutorial.md), [CyberArk SAML Authentication](../saas-apps/cyberark-saml-authentication-tutorial.md), [Scrible Edu](https://www.scrible.com/sign-in/#/create-account), [PandaDoc](../saas-apps/pandadoc-tutorial.md), [Perceptyx](https://apexdata.azurewebsites.net/docs.microsoft.com/azure/active-directory/saas-apps/perceptyx-tutorial), Proptimise OS, [Vtiger CRM (SAML)](../saas-apps/vtiger-crm-saml-tutorial.md), Oracle Access Manager for Oracle Retail Merchandising, Oracle Access Manager for Oracle E-Business Suite, Oracle IDCS for E-Business Suite, Oracle IDCS for PeopleSoft, Oracle IDCS for JD Edwards
For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Use multi-stage reviews to create Azure AD access reviews in sequential stages,
In February 2022 we added the following 20 new applications in our App gallery with Federation support:
-[Embark](../saas-apps/embark-tutorial.md), [FENCE-Mobile RemoteManager SSO](../saas-apps/fence-mobile-remotemanager-sso-tutorial.md), [カオナビ](../saas-apps/kao-navi-tutorial.md), [Adobe Identity Management (OIDC)](../saas-apps/adobe-identity-management-tutorial.md), [AppRemo](../saas-apps/appremo-tutorial.md), [Live Center](https://livecenter.norkon.net/Login), [Offishall](https://app.offishall.io/), [MoveWORK Flow](https://www.movework-flow.fm/login), [Cirros SL](https://www.cirros.net/cirros-sl/), [ePMX Procurement Software](https://azure.epmxweb.com/admin/index.php?), [Vanta O365](https://app.vanta.com/connections), [Hubble](../saas-apps/hubble-tutorial.md), [Medigold Gateway](https://gateway.medigoldcore.com), [クラウドログ](../saas-apps/crowd-log-tutorial.md),[Amazing People Schools](../saas-apps/amazing-people-schools-tutorial.md), [Salus](https://salus.com/login), [XplicitTrust Network Access](https://console.xplicittrust.com/#/dashboard), [Spike Email - Mail & Team Chat](https://spikenow.com/web/), [AltheaSuite](https://planmanager.altheasuite.com/), [Balsamiq Wireframes](../saas-apps/balsamiq-wireframes-tutorial.md).
+[Embark](../saas-apps/embark-tutorial.md), [FENCE-Mobile RemoteManager SSO](../saas-apps/fence-mobile-remotemanager-sso-tutorial.md), [カオナビ](../saas-apps/kao-navi-tutorial.md), [Adobe Identity Management (OIDC)](../saas-apps/adobe-identity-management-tutorial.md), [AppRemo](../saas-apps/appremo-tutorial.md), [Live Center](https://livecenter.norkon.net/Login), [Offishall](https://app.offishall.io/), [MoveWORK Flow](https://www.movework-flow.fm/login), [Cirros SL](https://www.cirros.net/), [ePMX Procurement Software](https://azure.epmxweb.com/admin/index.php?), [Vanta O365](https://app.vanta.com/connections), [Hubble](../saas-apps/hubble-tutorial.md), [Medigold Gateway](https://gateway.medigoldcore.com), [クラウドログ](../saas-apps/crowd-log-tutorial.md),[Amazing People Schools](../saas-apps/amazing-people-schools-tutorial.md), [Salus](https://salus.com/login), [XplicitTrust Network Access](https://console.xplicittrust.com/#/dashboard), [Spike Email - Mail & Team Chat](https://spikenow.com/web/), [AltheaSuite](https://planmanager.altheasuite.com/), [Balsamiq Wireframes](../saas-apps/balsamiq-wireframes-tutorial.md).
+You can also find the documentation of all the applications here: [https://aka.ms/AppsTutorial](../saas-apps/tutorial-list.md).
active-directory How To Connect Azure Ad Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-azure-ad-trust.md
na Previously updated : 01/05/2022 Last updated : 03/24/2022
You can restore the issuance transform rules using the suggested steps below
## Best practice for securing and monitoring the AD FS trust with Azure AD When you federate your AD FS with Azure AD, it is critical that the federation configuration (trust relationship configured between AD FS and Azure AD) is monitored closely, and any unusual or suspicious activity is captured. To do so, we recommend setting up alerts and getting notified whenever any changes are made to the federation configuration. To learn how to setup alerts, see [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md). -
+If you use cloud Azure MFA for multifactor authentication with federated users, we highly recommend enabling the additional security protection. This protection prevents bypassing of cloud Azure MFA when federated with Azure AD. When enabled for a federated domain in your Azure AD tenant, it ensures that a bad actor cannot bypass Azure MFA by imitating that multifactor authentication has already been performed by the identity provider. The protection can be enabled via the new security setting `federatedIdpMfaBehavior`. For more information, see [Best practices for securing Active Directory Federation Services](https://docs.microsoft.com/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#enable-protection-to-prevent-by-passing-of-cloud-azure-mfa-when-federated-with-azure-ad).
## Next steps * [Manage and customize Active Directory Federation Services using Azure AD Connect](how-to-connect-fed-management.md)
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
Previously updated : 07/08/2021 Last updated : 04/15/2022
Install [Azure Active Directory Connect](https://www.microsoft.com/download/deta
### Document current federation settings
-To find your current federation settings, run the [Get-MsolDomainFederationSettings](/windows-server/identity/ad-fs/operations/ad-fs-prompt-login) cmdlet.
+To find your current federation settings, run [Get-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/get-mgdomainfederationconfiguration?view=graph-powershell-beta&preserve-view=true).
-Verify any settings that might have been customized for your federation design and deployment documentation. Specifically, look for customizations in **PreferredAuthenticationProtocol**, **SupportsMfa**, and **PromptLoginBehavior**.
+```powershell
+Get-MgDomainFederationConfiguration -DomainId yourdomain.com
+```
+
+Verify any settings that might have been customized for your federation design and deployment documentation. Specifically, look for customizations in **PreferredAuthenticationProtocol**, [federatedIdpMfaBehavior](/graph/api/resources/federatedIdpMfaBehavior?view=graph-rest-beta&preserve-view=true), **SupportsMfa** (if **federatedIdpMfaBehavior** is not set), and **PromptLoginBehavior**.
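+
+As a quick check, the customized values can be read directly from the cmdlet output. This is a sketch: `contoso.com` is a placeholder domain, and the property names follow the Microsoft Graph `internalDomainFederation` resource.
+
+```powershell
+# Read the federation configuration and inspect the customizable properties
+$fed = Get-MgDomainFederationConfiguration -DomainId contoso.com
+$fed.PreferredAuthenticationProtocol
+$fed.FederatedIdpMfaBehavior
+$fed.PromptLoginBehavior
+```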
### Back up federation settings
When technology projects fail, it's typically because of mismatched expectatio
### Plan communications
-After migrating to cloud authentication, the user sign in experience for accessing Microsoft 365 and other resources that are authenticated through Azure AD changes. Users who are outside the network see only the Azure AD sign in page.
+After migrating to cloud authentication, the user sign-in experience for accessing Microsoft 365 and other resources that are authenticated through Azure AD changes. Users who are outside the network see only the Azure AD sign-in page.
Proactively communicate with your users how their experience will change, when it will change, and how to gain support if they experience issues.
Here are key migration considerations.
The onload.js file cannot be duplicated in Azure AD. If your AD FS instance is heavily customized and relies on specific customization settings in the onload.js file, verify if Azure AD can meet your current customization requirements and plan accordingly. Communicate these upcoming changes to your users.
-#### Sign in experience
+#### Sign-in experience
-You cannot customize Azure AD sign in experience. No matter how your users signed-in earlier, you need a fully qualified domain name such as User Principal Name (UPN) or email to sign into Azure AD.
+You cannot customize the Azure AD sign-in experience. No matter how your users signed in earlier, you need a fully qualified domain name, such as a User Principal Name (UPN) or email, to sign in to Azure AD.
#### Organization branding
-You can [customize the Azure AD sign in page](../fundamentals/customize-branding.md). Some visual changes from AD FS on sign in pages should be expected after the conversion.
+You can [customize the Azure AD sign-in page](../fundamentals/customize-branding.md). Some visual changes from AD FS on sign-in pages should be expected after the conversion.
>[!NOTE] >Organization branding is not available in free Azure AD licenses unless you have a Microsoft 365 license.
Consider replacing AD FS access control policies with the equivalent Azure AD [C
### Plan support for MFA
-Each federated domain in Azure AD has a SupportsMFA flag.
+For federated domains, MFA may be enforced by Azure AD Conditional Access or by the on-premises federation provider. You can enable protection to prevent bypassing of Azure MFA by configuring the security setting **federatedIdpMfaBehavior**. Enabling the protection for a federated domain in your Azure AD tenant ensures that Azure MFA is always performed when a federated user accesses an application that is governed by a Conditional Access policy requiring MFA. This includes performing Azure MFA even when the federated identity provider has issued federated token claims that on-premises MFA has been performed. Enforcing Azure MFA every time assures that a bad actor cannot bypass it by imitating that MFA has already been performed by the identity provider, and is highly recommended unless you perform MFA for your federated users by using a third-party MFA provider.
+
+The following table explains the behavior for each option. For more information, see [federatedIdpMfaBehavior](/graph/api/resources/federatedIdpMfaBehavior?view=graph-rest-beta&preserve-view=true).
+
+| Value | Description |
+| :--- | :--- |
+| acceptIfMfaDoneByFederatedIdp | Azure AD accepts MFA that's performed by the federated identity provider. If the federated identity provider didn't perform MFA, Azure AD performs the MFA. |
+| enforceMfaByFederatedIdp | Azure AD accepts MFA that's performed by the federated identity provider. If the federated identity provider didn't perform MFA, it redirects the request to the federated identity provider to perform MFA. |
+| rejectMfaByFederatedIdp | Azure AD always performs MFA and rejects MFA that's performed by the federated identity provider. |
+
+>[!NOTE]
+> The **federatedIdpMfaBehavior** setting is an evolved version of the **SupportsMfa** property of the [Set-MsolDomainFederationSettings MSOnline v1 PowerShell cmdlet](/powershell/module/msonline/set-msoldomainfederationsettings).
+
+For domains that have already set the **SupportsMfa** property, these rules determine how **federatedIdpMfaBehavior** and **SupportsMfa** work together:
+
+- Switching between **federatedIdpMfaBehavior** and **SupportsMfa** is not supported.
+- Once **federatedIdpMfaBehavior** property is set, Azure AD ignores the **SupportsMfa** setting.
+- If the **federatedIdpMfaBehavior** property is never set, Azure AD will continue to honor the **SupportsMfa** setting.
+- If neither **federatedIdpMfaBehavior** nor **SupportsMfa** is set, Azure AD will default to `acceptIfMfaDoneByFederatedIdp` behavior.
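+
+Setting the protection can be sketched with the Microsoft Graph PowerShell SDK. This is an illustrative sketch, not the article's prescribed procedure: `yourdomain.com` is a placeholder, and the `-FederatedIdpMfaBehavior` parameter is assumed to be available in the beta version of `Update-MgDomainFederationConfiguration`.
+
+```powershell
+# Look up the federation configuration ID for the domain (placeholder domain)
+$fed = Get-MgDomainFederationConfiguration -DomainId yourdomain.com
+
+# Always perform Azure MFA and reject MFA claims from the federated IdP
+Update-MgDomainFederationConfiguration -DomainId yourdomain.com `
+    -InternalDomainFederationId $fed.Id `
+    -FederatedIdpMfaBehavior "rejectMfaByFederatedIdp"
+```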
-**If the SupportsMFA flag is set to True**, Azure AD redirects users to perform MFA on AD FS or other federation providers. For example, if a user is accessing an application for which a Conditional Access policy that requires MFA has been configured, the user will be redirected to AD FS. Adding Azure AD MFA as an authentication method in AD FS, enables Azure AD MFA to be invoked once your configurations are complete.
+You can check the status of protection by running [Get-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/get-mgdomainfederationconfiguration?view=graph-powershell-beta&preserve-view=true):
+
+```powershell
+Get-MgDomainFederationConfiguration -DomainId yourdomain.com
+```
-**If the SupportsMFA flag is set to False**, you're likely not using Azure MFA; you're probably using claims rules on AD FS relying parties to trigger MFA.
+You can also check the status of the **SupportsMfa** flag with [Get-MsolDomainFederationSettings](/powershell/module/msonline/get-msoldomainfederationsettings):
-You can check the status of your **SupportsMFA** flag with the following Windows PowerShell cmdlet:
```powershell
- Get-MsolDomainFederationSettings -DomainName yourdomain.com
- ```
+Get-MsolDomainFederationSettings -DomainName yourdomain.com
+```
>[!NOTE] >Microsoft MFA Server is nearing the end of support life, and if you're using it you must move to Azure AD MFA.
For more information, see **[Migrate from Microsoft MFA Server to Azure Multi-fa
## Plan for implementation
-This section includes pre-work before you switch your sign in method and convert the domains.
+This section includes pre-work before you switch your sign-in method and convert the domains.
### Create necessary groups for staged rollout
The version of SSO that you use is dependent on your device OS and join state.
### Pre-work for PHS and PTA
-Depending on the choice of sign in method, complete the [pre-work for PHS](how-to-connect-staged-rollout.md#pre-work-for-password-hash-sync) or [for PTA](how-to-connect-staged-rollout.md#pre-work-for-pass-through-authentication).
+Depending on the choice of sign-in method, complete the [pre-work for PHS](how-to-connect-staged-rollout.md#pre-work-for-password-hash-sync) or [for PTA](how-to-connect-staged-rollout.md#pre-work-for-pass-through-authentication).
## Implement your solution
-Finally, you switch the sign in method to PHS or PTA, as planned and convert the domains from federation to cloud authentication.
+Finally, you switch the sign-in method to PHS or PTA, as planned and convert the domains from federation to cloud authentication.
### Using staged rollout
Sign in to the [Azure AD portal](https://aad.portal.azure.com/), select **Azure
#### Option A
-**Switch from federation to the new sign in method by using Azure AD Connect**
+**Switch from federation to the new sign-in method by using Azure AD Connect**
1. On your Azure AD Connect server, open **Azure AD Connect** and select **Configure**.
Sign in to the [Azure AD portal](https://aad.portal.azure.com/), select **Azure
Domain Administrator account credentials are required to enable seamless SSO. The process completes the following actions, which require these elevated permissions: - A computer account named AZUREADSSO (which represents Azure AD) is created in your on-premises Active Directory instance. - The computer account's Kerberos decryption key is securely shared with Azure AD.
- - Two Kerberos service principal names (SPNs) are created to represent two URLs that are used during Azure AD sign in.
+ - Two Kerberos service principal names (SPNs) are created to represent two URLs that are used during Azure AD sign-in.
The domain administrator credentials are not stored in Azure AD Connect or Azure AD and get discarded when the process successfully finishes. They are used to turn ON this feature.
Sign in to the [Azure AD portal](https://aad.portal.azure.com/), select **Azure
![Ready to configure page](media/deploy-cloud-user-authentication/ready-to-configure.png) > [!IMPORTANT]
- > At this point, all your federated domains will change to managed authentication. Your selected User sign in method is the new method of authentication.
+ > At this point, all your federated domains will change to managed authentication. Your selected User sign-in method is the new method of authentication.
1. In the Azure AD portal, select **Azure Active Directory**, and then select **Azure AD Connect**.
For most customers, two or three authentication agents are sufficient to provide
#### Option B
-**Switch from federation to the new sign in method by using Azure AD Connect and PowerShell**
+**Switch from federation to the new sign-in method by using Azure AD Connect and PowerShell**
*Available if you didn't initially configure your federated domains by using Azure AD Connect or if you're using third-party federation services.*
On your Azure AD Connect server, follow the steps 1- 5 in [Option A](#option-a).
Complete the following tasks to verify the sign-in method and to finish the conversion process.
-### Test the new sign in method
+### Test the new sign-in method
-When your tenant used federated identity, users were redirected from the Azure AD sign in page to your AD FS environment. Now that the tenant is configured to use the new sign in method instead of federated authentication, users aren't redirected to AD FS.
+When your tenant used federated identity, users were redirected from the Azure AD sign-in page to your AD FS environment. Now that the tenant is configured to use the new sign-in method instead of federated authentication, users aren't redirected to AD FS.
**Instead, users sign in directly on the Azure AD sign-in page.**
-Follow the steps in this link - [Validate sign in with PHS/ PTA and seamless SSO](how-to-connect-staged-rollout.md#validation) (where required)
+Follow the steps in [Validate sign-in with PHS/PTA and seamless SSO](how-to-connect-staged-rollout.md#validation), where required.
### Remove a user from staged rollout
active-directory View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/view-assignments.md
Previously updated : 02/04/2022 Last updated : 04/15/2022
This article describes how to list roles you have assigned in Azure Active Direc
## Prerequisites -- AzureADPreview module when using PowerShell
+- AzureAD module when using PowerShell
- Admin consent when using Graph explorer for Microsoft Graph API For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
It's easy to list your own permissions as well. Select **Your Role** on the **Ro
### Download role assignments
-To download all assignments for a specific role, on the **Roles and administrators** page, select a role, and then select **Download role assignments**. A CSV file that lists assignments at all scopes for that role is downloaded.
+To download all active role assignments across all roles, including built-in and custom roles, follow these steps (currently in Preview).
-![download all assignments for a role](./media/view-assignments/download-role-assignments.png)
+1. On the **Roles and administrators** page, select **All roles**.
+
+1. Select **Download assignments**.
+
+ A CSV file that lists assignments at all scopes for all roles is downloaded.
+
+ :::image type="content" source="./media/view-assignments/download-role-assignments-all.png" alt-text="Screenshot showing download all role assignments." lightbox="./media/view-assignments/download-role-assignments-all.png":::
+
+To download all assignments for a specific role, follow these steps.
+
+1. On the **Roles and administrators** page, select a role.
+
+1. Select **Download assignments**.
+
+ A CSV file that lists assignments at all scopes for that role is downloaded.
+
+ :::image type="content" source="./media/view-assignments/download-role-assignments.png" alt-text="Screenshot showing download all assignments for a specific role." lightbox="./media/view-assignments/download-role-assignments.png":::
### List role assignments with single-application scope
This section describes how to list role assignments with single-application scop
This section describes viewing assignments of a role with organization-wide scope. This article uses the [Azure Active Directory PowerShell Version 2](/powershell/module/azuread/#directory_roles) module. To view single-application scope assignments using PowerShell, you can use the cmdlets in [Assign custom roles with PowerShell](custom-assign-powershell.md).
-Example of listing the role assignments.
+Use the [Get-AzureADMSRoleDefinition](/powershell/module/azuread/get-azureadmsroledefinition) and [Get-AzureADMSRoleAssignment](/powershell/module/azuread/get-azureadmsroleassignment) commands to list role assignments.
+
+The following example shows how to list the role assignments for the [Groups Administrator](permissions-reference.md#groups-administrator) role.
-``` PowerShell
+```powershell
# Fetch list of all directory roles with template ID
Get-AzureADMSRoleDefinition

# Fetch a specific directory role by ID
-$role = Get-AzureADMSRoleDefinition -Id "5b3fe201-fa8b-4144-b6f1-875829ff7543"
+$role = Get-AzureADMSRoleDefinition -Id "fdd7a751-b60b-444a-984c-02652fe8fa1c"
# Fetch membership for a role
Get-AzureADMSRoleAssignment -Filter "roleDefinitionId eq '$($role.Id)'"
```
+```Example
+RoleDefinitionId PrincipalId DirectoryScopeId
+----------------                     -----------                          ----------------
+fdd7a751-b60b-444a-984c-02652fe8fa1c 04f632c3-8065-4466-9e30-e71ec81b3c36 /administrativeUnits/3883b136-67f0-412c-9b...
+```
+
+The following example shows how to list all active role assignments across all roles, including built-in and custom roles (currently in Preview).
+
+```powershell
+$roles = Get-AzureADMSRoleDefinition
+foreach ($role in $roles)
+{
+ Get-AzureADMSRoleAssignment -Filter "roleDefinitionId eq '$($role.Id)'"
+}
+```
+
+```Example
+RoleDefinitionId PrincipalId DirectoryScopeId Id
+----------------                     -----------                          ----------------  --
+e8611ab8-c189-46e8-94e1-60213ab1f814 9f9fb383-3148-46a7-9cec-5bf93f8a879c / uB2o6InB6EaU4WAhOrH4FHwni...
+e8611ab8-c189-46e8-94e1-60213ab1f814 027c8aba-2e94-49a8-974b-401e5838b2a0 / uB2o6InB6EaU4WAhOrH4FEqdn...
+fdd7a751-b60b-444a-984c-02652fe8fa1c 04f632c3-8065-4466-9e30-e71ec81b3c36 /administrati... UafX_Qu2SkSYTAJlL-j6HL5Dr...
+...
+```
+ ## Microsoft Graph API
-This section describes how to list role assignments with organization-wide scope. To list single-application scope role assignments using Graph API, you can use the operations in [Assign custom roles with Graph API](custom-assign-graph.md).
+This section describes how to list role assignments with organization-wide scope. To list single-application scope role assignments using Graph API, you can use the operations in [Assign custom roles with Graph API](custom-assign-graph.md).
-Use the [List unifiedRoleAssignments](/graph/api/rbacapplication-list-roleassignments) API to get the role assignment for a specified role definition.
+Use the [List unifiedRoleAssignments](/graph/api/rbacapplication-list-roleassignments) API to get the role assignments for a specific role definition. The following example shows how to list the role assignments for a specific role definition with the ID `3671d40a-1aac-426c-a0c1-a3821ebd8218`.
```http
GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments?$filter=roleDefinitionId eq '<template-id-of-role-definition>'
```
active-directory F5 Big Ip Oracle Jd Edwards Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/f5-big-ip-oracle-jd-edwards-easy-button.md
If you don't see a BIG-IP error page, then the issue is probably more related
2. The **View Variables** link in this location may also help root cause SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes from Azure AD or another source
-See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
+See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
active-directory Servicenow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-tutorial.md
Previously updated : 07/21/2021 Last updated : 04/06/2022
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * A ServiceNow single sign-on (SSO) enabled subscription.
-* For ServiceNow, an instance or tenant of ServiceNow supports Calgary, Kingston, London, Madrid, New York, Orlando and Paris versions or later.
+* For ServiceNow, an instance or tenant of ServiceNow that supports the Calgary, Kingston, London, Madrid, New York, Orlando, Paris, or San Diego version or later.
* For ServiceNow Express, an instance of ServiceNow Express, Helsinki version or later. * The ServiceNow tenant must have the [Multiple Provider Single Sign On Plugin](https://old.wiki/index.php/Multiple_Provider_Single_Sign-On#gsc.tab=0) enabled. * For automatic configuration, enable the multi-provider plugin for ServiceNow.
-* To install the ServiceNow Classic (Mobile) application, go to the appropriate store, and search for the ServiceNow Classic application. Then download it.
+* To install the ServiceNow Agent (Mobile) application, go to the appropriate store, and search for the ServiceNow Agent application. Then download it.
> [!NOTE] > This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
In this tutorial, you configure and test Azure AD SSO in a test environment.
* ServiceNow supports [Automated user provisioning](servicenow-provisioning-tutorial.md).
-* You can configure the ServiceNow Classic (Mobile) application with Azure AD for enabling SSO. It supports both Android and iOS users. In this tutorial, you configure and test Azure AD SSO in a test environment.
+* You can configure the ServiceNow Agent (Mobile) application with Azure AD for enabling SSO. It supports both Android and iOS users. In this tutorial, you configure and test Azure AD SSO in a test environment.
## Add ServiceNow from the gallery
To configure and test Azure AD SSO with ServiceNow, perform the following steps:
1. [Create a ServiceNow test user](#create-servicenow-test-user) to have a counterpart of B.Simon in ServiceNow, linked to the Azure AD representation of the user. 1. [Configure ServiceNow Express SSO](#configure-servicenow-express-sso) to configure the single sign-on settings on the application side. 3. [Test SSO](#test-sso) to verify whether the configuration works.
-4. [Test SSO for ServiceNow Classic (Mobile)](#test-sso-for-servicenow-classic-mobile) to verify whether the configuration works.
+4. [Test SSO for ServiceNow Agent (Mobile)](#test-sso-for-servicenow-agent-mobile) to verify whether the configuration works.
## Configure Azure AD SSO
The objective of this section is to create a user called B.Simon in ServiceNow.
When you select the ServiceNow tile in the Access Panel, you should be automatically signed in to the ServiceNow for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-## Test SSO for ServiceNow Classic (Mobile)
+## Test SSO for ServiceNow Agent (Mobile)
-1. Open your **ServiceNow Classic (Mobile)** application, and perform the following steps:
+1. Open your **ServiceNow Agent (Mobile)** application, and perform the following steps:
- a. Select the plus sign in the lower-right corner.
+ b. Enter your ServiceNow instance address and nickname, and then select **Save and Login**.
- ![Screenshot of ServiceNow Classic application, with plus sign highlighted](./media/servicenow-tutorial/test-03.png)
-
- b. Enter your ServiceNow instance name, and select **Continue**.
-
- ![Screenshot of Add Instance page, with Continue highlighted](./media/servicenow-tutorial/test-04.png)
+ ![Screenshot of Add Instance page, with Continue highlighted](./media/servicenow-tutorial/mobile-instance.png)
c. On the **Log in** page, perform the following steps:
- ![Screenshot of Log in page, with Use external login highlighted](./media/servicenow-tutorial/test-01.png)
+ ![Screenshot of Log in page, with Use external login highlighted](./media/servicenow-tutorial/mobile-login.png)
* Enter **Username**, like B.simon@contoso.com.
- * Select **USE EXTERNAL LOGIN**. You're redirected to the Azure AD page for sign-in.
+ * Select **Use external login**. You're redirected to the Azure AD page for sign-in.
* Enter your credentials. If third-party authentication or another security feature is enabled, respond accordingly. The application **Home page** appears.
- ![Screenshot of the application home page](./media/servicenow-tutorial/test-02.png)
+ ![Screenshot of the application home page](./media/servicenow-tutorial/mobile-landing-page.png)
## Next Steps
active-directory Fedramp Identification And Authentication Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/fedramp-identification-and-authentication-controls.md
Previously updated : 4/26/2021 Last updated : 4/07/2022
Each row in the following table provides prescriptive guidance to help you devel
| IA-02(5)| **When multiple users have access to a shared or group account password, require each user to first authenticate by using an individual authenticator.**<p>Use an individual account per user. If a shared account is required, Azure AD permits binding of multiple authenticators to an account so that each user has an individual authenticator. <p>Resources<br><li>[How it works: Azure AD multifactor authentication](../authentication/concept-mfa-howitworks.md)<br> <li>[Manage authentication methods for Azure AD multifactor authentication](../authentication/howto-mfa-userdevicesettings.md) | | IA-02(8)| **Implement replay-resistant authentication mechanisms for network access to privileged accounts.**<p>Configure conditional access policies to require multifactor authentication for all users. All Azure AD authentication methods at authentication assurance level 2 and 3 use either nonce or challenges and are resistant to replay attacks.<p>References<br> <li>[Conditional access: Require multifactor authentication for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)<br> <li>[Achieving NIST authenticator assurance levels with the Microsoft identity platform](nist-overview.md) | | IA-02(11)| **Implement Azure AD multifactor authentication to access customer-deployed resources remotely so that one of the factors is provided by a device separate from the system gaining access where the device meets FIPS-140-2, NIAP certification, or NSA approval.**<p>See guidance for IA-02(1-4). 
Azure AD authentication methods to consider at AAL3 that meet the separate device requirements are:<p> <li>FIDO2 security keys<br> <li>Windows Hello for Business with hardware TPM (TPM is recognized as a valid "something you have" factor by NIST 800-63B Section 5.1.7.1.)<br> <li>Smart card<p>References<br><li>[Achieving NIST authenticator assurance levels with the Microsoft identity platform](nist-overview.md)<br> <li>[NIST 800-63B Section 5.1.7.1](https://pages.nist.gov/800-63-3/sp800-63b.html) |
-| IA-02(12)| **Accept and verify personal identity verification (PIV) credentials. This control isn't applicable if the customer doesn't deploy PIV credentials.**<p>Configure federated authentication by using Active Directory Federation Services (AD FS) to accept PIV (certificate authentication) as both primary and multifactor authentication methods and issue the multifactor authentication (MultipleAuthN) claim when PIV is used. Configure the federated domain in Azure AD with SupportsMFA to direct multifactor authentication requests originating at Azure AD to AD FS. Alternatively, you can use PIV for sign-in on Windows devices and later use integrated Windows authentication along with seamless single sign-on. Windows Server and client verify certificates by default when used for authentication. <p>Resources<br><li>[What is federation with Azure AD?](../hybrid/whatis-fed.md)<br> <li>[Configure AD FS support for user certificate authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication)<br> <li>[Configure authentication policies](/windows-server/identity/ad-fs/operations/configure-authentication-policies)<br> <li>[Secure resources with Azure AD multifactor authentication and AD FS](../authentication/howto-mfa-adfs.md)<br><li>[Set-MsolDomainFederationSettings](/powershell/module/msonline/set-msoldomainfederationsettings)<br> <li>[Azure AD Connect: Seamless single sign-on](../hybrid/how-to-connect-sso.md) |
+| IA-02(12)| **Accept and verify personal identity verification (PIV) credentials. This control isn't applicable if the customer doesn't deploy PIV credentials.**<p>Configure federated authentication by using Active Directory Federation Services (AD FS) to accept PIV (certificate authentication) as both primary and multifactor authentication methods and issue the multifactor authentication (MultipleAuthN) claim when PIV is used. Configure the federated domain in Azure AD with setting [federatedIdpMfaBehavior](/graph/api/resources/federatedIdpMfaBehavior?view=graph-rest-beta&preserve-view=true) to `enforceMfaByFederatedIdp` (recommended) or SupportsMfa to `$True` to direct multifactor authentication requests originating at Azure AD to AD FS. Alternatively, you can use PIV for sign-in on Windows devices and later use integrated Windows authentication along with seamless single sign-on. Windows Server and client verify certificates by default when used for authentication. <p>Resources<br><li>[What is federation with Azure AD?](../hybrid/whatis-fed.md)<br> <li>[Configure AD FS support for user certificate authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication)<br> <li>[Configure authentication policies](/windows-server/identity/ad-fs/operations/configure-authentication-policies)<br> <li>[Secure resources with Azure AD multifactor authentication and AD FS](../authentication/howto-mfa-adfs.md)<br><li>[Set-MsolDomainFederationSettings](/powershell/module/msonline/set-msoldomainfederationsettings)<br> <li>[Azure AD Connect: Seamless single sign-on](../hybrid/how-to-connect-sso.md) |
| IA-03| **Implement device identification and authentication prior to establishing a connection.**<p>Configure Azure AD to identify and authenticate Azure AD Registered, Azure AD Joined, and Azure AD Hybrid joined devices.<p> Resources<br><li>[What is a device identity?](../devices/overview.md)<br> <li>[Plan an Azure AD devices deployment](../devices/plan-device-deployment.md)<br><li>[Require managed devices for cloud app access with conditional access](../conditional-access/require-managed-devices.md) | | IA-04<br>IA-04(4)| **Disable account identifiers after 35 days of inactivity and prevent their reuse for two years. Manage individual identifiers by uniquely identifying each individual (for example, contractors and foreign nationals).**<p>Assign and manage individual account identifiers and status in Azure AD in accordance with existing organizational policies defined in AC-02. Follow AC-02(3) to automatically disable user and device accounts after 35 days of inactivity. Ensure that organizational policy maintains all accounts that remain in the disabled state for at least two years. After this time, you can remove them. <p>Determine inactivity<br> <li>[Manage inactive user accounts in Azure AD](../reports-monitoring/howto-manage-inactive-user-accounts.md)<br> <li>[Manage stale devices in Azure AD](../devices/manage-stale-devices.md)<br> <li>[See AC-02 guidance](fedramp-access-controls.md) | | IA-05| **Configure and manage information system authenticators.**<p>Azure AD supports various authentication methods. You can use your existing organizational policies for management. See guidance for authenticator selection in IA-02(1-4). Enable users in combined registration for SSPR and Azure AD multifactor authentication and require users to register a minimum of two acceptable multifactor authentication methods to facilitate self-remediation. You can revoke user-configured authenticators at any time with the authentication methods API. 
<p>Authenticator strength/protecting authenticator content<br> <li>[Achieving NIST authenticator assurance levels with the Microsoft identity platform](nist-overview.md)<p>Authentication methods and combined registration<br> <li>[What authentication and verification methods are available in Azure Active Directory?](../authentication/concept-authentication-methods.md)<br> <li>[Combined registration for SSPR and Azure AD multifactor authentication](../authentication/concept-registration-mfa-sspr-combined.md)<p>Authenticator revokes<br> <li>[Azure AD authentication methods API overview](/graph/api/resources/authenticationmethods-overview) |
app-service App Service Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-best-practices.md
When Azure resources composing a solution such as a web app and a database are l
Colocation in the same region is best for Azure resources composing a solution such as a web app and a database or storage account used to hold content or data. When creating resources, make sure they are in the same Azure region unless you have specific business or design reason for them not to be. You can move an App Service app to the same region as your database by using the [App Service cloning feature](app-service-web-app-cloning.md) currently available for Premium App Service Plan apps.
+## <a name ="certificatepinning"></a>Certificate Pinning
+Applications should never pin to, or take a hard dependency on, the default \*.azurewebsites.net TLS certificate, because App Service is a platform as a service (PaaS) and can rotate that certificate at any time. Certificate pinning is a practice where an application allows only a specific list of acceptable certificate authorities (CAs), public keys, thumbprints, or other parts of the certificate hierarchy. When the service rotates the default wildcard TLS certificate, applications that are hardcoded to a specific set of certificate attributes break and lose connectivity. The frequency with which the \*.azurewebsites.net TLS certificate is rotated isn't guaranteed and can also change at any time.
+
+Applications that rely on certificate pinning should also not take a hard dependency on an App Service managed certificate, which can likewise be rotated at any time. The best practice is to provide a custom TLS certificate for any application that relies on certificate pinning.
+
+If an application must rely on certificate pinning behavior, add a custom domain to the web app and provide a custom TLS certificate for that domain, which can then be pinned.
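The failure mode described above can be sketched in a few lines. This is purely illustrative (the certificate bytes are placeholders): a pinned client compares a stored thumbprint against the certificate the server presents, so any rotation invalidates the pin.

```python
import hashlib

def thumbprint(cert_der: bytes) -> str:
    """SHA-256 thumbprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

# A pinning client stores the thumbprint of the certificate it saw at build time.
original_cert = b"...DER bytes of the certificate seen at build time..."
pinned = thumbprint(original_cert)

# After the platform rotates the certificate, the bytes (and thumbprint) change,
# so the pinned comparison fails and the client rejects the connection.
rotated_cert = b"...DER bytes of the replacement certificate..."
assert thumbprint(original_cert) == pinned   # still matches before rotation
assert thumbprint(rotated_cert) != pinned    # breaks after rotation
```

With a custom domain and a certificate you control, rotation happens on your schedule, so the pin can be updated before the certificate changes.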
+ ## <a name="memoryresources"></a>When apps consume more memory than expected When you notice an app consumes more memory than expected as indicated via monitoring or service recommendations, consider the [App Service Auto-Healing feature](https://azure.microsoft.com/blog/auto-healing-windows-azure-web-sites). One of the options for the Auto-Healing feature is taking custom actions based on a memory threshold. Actions span the spectrum from email notifications to investigation via memory dump to on-the-spot mitigation by recycling the worker process. Auto-healing can be configured via web.config and via a friendly user interface as described at in this blog post for the [App Service Support Site Extension](https://azure.microsoft.com/blog/additional-updates-to-support-site-extension-for-azure-app-service-web-apps).
When backup failures happen, review most recent results to understand which type
## <a name="nodejs"></a>When new Node.js apps are deployed to Azure App Service Azure App Service default configuration for Node.js apps is intended to best suit the needs of most common apps. If configuration for your Node.js app would benefit from personalized tuning to improve performance or optimize resource usage for CPU/memory/network resources, see [Best practices and troubleshooting guide for Node applications on Azure App Service](app-service-web-nodejs-best-practices-and-troubleshoot-guide.md). This article describes the iisnode settings you may need to configure for your Node.js app, describes the various scenarios or issues that your app may be facing, and shows how to address these issues.
+## <a name=""></a>When Internet of Things (IoT) devices are connected to apps on App Service
+There are a few ways to improve your environment when Internet of Things (IoT) devices are connected to apps on App Service. One common practice with IoT devices is certificate pinning. To avoid unforeseen downtime due to changes in the service's managed certificates, never pin certificates to the default \*.azurewebsites.net certificate or to an App Service managed certificate. If your system must rely on certificate pinning behavior, add a custom domain to a web app and provide a custom TLS certificate for the domain, which can then be pinned. For more information, see the [certificate pinning](#certificatepinning) section of this article.
+
+To increase resiliency in your environment, don't rely on a single endpoint for all your devices. Host your web apps in at least two different regions to avoid a single point of failure, and be ready to fail over traffic. On App Service, you can add the same custom domain to different web apps as long as those web apps are hosted in different regions. If you need to pin certificates, you can then pin the custom TLS certificate that you provided. Another option is to put a load balancer, such as Azure Front Door or Traffic Manager, in front of the web apps to ensure high availability. For more information, see [Quickstart: Create a Front Door for a highly available global web application](../frontdoor/quickstart-create-front-door.md) or [Controlling Azure App Service traffic with Azure Traffic Manager](./web-sites-traffic-manager.md).
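The failover idea above can be sketched as a simple ordered-retry loop. Everything here is hypothetical (the endpoint names, the `fetch` callable, and the error type stand in for your device's real HTTP client):

```python
def fetch_with_failover(endpoints, fetch):
    """Try each regional endpoint in order; return the first successful
    response. `fetch` is whatever HTTP call the device client makes."""
    last_error = None
    for endpoint in endpoints:
        try:
            return fetch(endpoint)
        except ConnectionError as exc:
            last_error = exc  # region unavailable; try the next one
    raise last_error

# Hypothetical regional deployments sharing the same custom domain certificate.
endpoints = ["https://app-westus.contoso.com", "https://app-eastus.contoso.com"]

def fake_fetch(url):
    # Simulate a regional outage in West US.
    if "westus" in url:
        raise ConnectionError("region down")
    return f"200 OK from {url}"

print(fetch_with_failover(endpoints, fake_fetch))
```

In production you'd typically let Front Door or Traffic Manager do this routing server-side instead of hardcoding the list in every device.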
## Next Steps For more information on best practices, visit [App Service Diagnostics](./overview-diagnostics.md) to find out actionable best practices specific to your resource.
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
Learn how to map a custom DNS name to your app:
Learn how App Service runs a Python app: > [!div class="nextstepaction"]
-> [Configure Python app](configure-language-python.md)
+> [Configure Python app](configure-language-python.md)
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
For this scenario, use NSGs on the Application Gateway subnet. Put the following
1. Allow incoming traffic from a source IP or IP range with the destination as the entire Application Gateway subnet address range and destination port as your inbound access port, for example, port 80 for HTTP access. 2. Allow incoming requests from source as **GatewayManager** service tag and destination as **Any** and destination ports as 65503-65534 for the Application Gateway v1 SKU, and ports 65200-65535 for v2 SKU for [back-end health status communication](./application-gateway-diagnostics.md). This port range is required for Azure infrastructure communication. These ports are protected (locked down) by Azure certificates. Without appropriate certificates in place, external entities can't initiate changes on those endpoints. 3. Allow incoming Azure Load Balancer probes (*AzureLoadBalancer* tag) on the [network security group](../virtual-network/network-security-groups-overview.md).
-4. Allow inbound virtual network traffic (*VirtualNetwork* tag) on the [network security group](../virtual-network/network-security-groups-overview.md).
+4. Allow expected inbound traffic to match your listener configuration (for example, if you have listeners configured for port 80, allow inbound traffic on port 80).
5. Block all other incoming traffic by using a deny-all rule. 6. Allow outbound traffic to the Internet for all destinations.
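The steps above rely on NSG rules being evaluated in priority order, with the first match deciding the outcome. The following is a deliberately simplified sketch of that evaluation model (it ignores direction, protocol, and address ranges; the rule set mirrors the numbered steps and is illustrative only):

```python
def evaluate(rules, port, source):
    """Evaluate NSG-style rules in priority order; the first match decides.
    Lower priority numbers are evaluated first."""
    for priority, src, dst_port, action in sorted(rules):
        if src in (source, "Any") and dst_port in (port, "Any"):
            return action
    return "Deny"  # implicit deny if nothing matches

# Hypothetical rule set mirroring the steps above.
rules = [
    (100, "Internet", 80, "Allow"),           # listener traffic (step 4)
    (110, "GatewayManager", 65200, "Allow"),  # v2 infrastructure range (step 2)
    (4096, "Any", "Any", "Deny"),             # explicit deny-all (step 5)
]

assert evaluate(rules, 80, "Internet") == "Allow"   # matches the listener rule
assert evaluate(rules, 443, "Internet") == "Deny"   # falls through to deny-all
```

The deny-all rule sits at the highest priority number so that every allow rule is checked before traffic is blocked.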
azure-app-configuration Concept Config File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-config-file.md
+
+ Title: Azure App Configuration support for configuration files
+description: Tooling support for using configuration files with Azure App Configuration
++++ Last updated : 04/01/2022++
+# Azure App Configuration support for configuration files
+
+Files are one of the most common ways to store configuration data. To help you start quickly, App Configuration has tools to assist you in [importing your configuration files](./howto-import-export-data.md), so you don't have to type in your data manually. This operation is a one-time data migration if you plan to manage your data in App Configuration after importing it. In other cases, for example, where you adopt [configuration as code](./howto-best-practices.md#configuration-as-code), you may continue managing your configuration data in files and import them recurrently as part of your CI/CD process. You may find that one of these two scenarios applies to you:
+
+- You keep the configuration file in the format you had before. This format is helpful if you want to use the file as the fallback configuration for your application or as the local configuration during development. When you import the configuration file, specify how you want the data transformed to App Configuration key-values. This option is the [**default file content profile**](#file-content-profile-default) in App Configuration importing tools such as the Azure portal, Azure CLI, the Azure Pipeline Push task, and GitHub Actions.
+- You keep the configuration file in a format that contains all App Configuration key-value properties. When you import the file, you don't need to specify any transformation rules because all properties of a key-value are already in the file. This option is called the [**KVSet file content profile**](#file-content-profile-kvset) in App Configuration importing tools. It's helpful if you want to manage all your App Configuration data, including regular key-values, Key Vault references, and feature flags, in one file and import them in one shot.
+
+The rest of this document will discuss both file content profiles in detail and use Azure CLI as an example. The same concept applies to other App Configuration importing tools too.
+
+## File content profile: default
+
+The default file content profile in App Configuration tools refers to the conventional configuration file schema widely adopted by existing programming frameworks or systems. App Configuration supports the JSON, YAML, and properties file formats.
+
+The following example is a configuration file named `appsettings.json` containing one configuration setting and one feature flag.
+
+```json
+{
+ "Logging": {
+ "LogLevel": {
+ "Default": "Warning"
+ }
+ },
+ "FeatureManagement": {
+ "Beta": false
+ }
+}
+```
+
+Run the following CLI command to import it to App Configuration with the `dev` label and use the colon (`:`) as the separator to flatten the key name. You can optionally add parameter "**--profile appconfig/default**". It's skipped in the example as it's the default value.
+
+```azurecli-interactive
+az appconfig kv import --label dev --separator : --name <your store name> --source file --path appsettings.json --format json
+```
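As a rough sketch of what the separator option does during import, the nested JSON above is flattened into key-values by joining object levels with the separator. This illustration covers only the flattening step; the real tool additionally converts `FeatureManagement` entries into feature flags with their own content type, as the table below shows.

```python
import json

def flatten(node, separator=":", prefix=""):
    """Flatten a nested configuration object into flat key-values,
    joining object levels with the separator."""
    if not isinstance(node, dict):
        return {prefix: node}
    flat = {}
    for name, value in node.items():
        key = f"{prefix}{separator}{name}" if prefix else name
        flat.update(flatten(value, separator, key))
    return flat

appsettings = json.loads("""
{
  "Logging": { "LogLevel": { "Default": "Warning" } },
  "FeatureManagement": { "Beta": false }
}
""")

print(flatten(appsettings))
# {'Logging:LogLevel:Default': 'Warning', 'FeatureManagement:Beta': False}
```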
+
+Key Vault references require a particular content type during importing, so you keep them in a separate file. The following example is a file named `keyvault-refs.json`.
+
+```json
+{
+ "Database": {
+ "ConnectionString": "{\"uri\":\"https://<your-vault-name>.vault.azure.net/secrets/db-secret\"}"
+ }
+}
+```
+
+Run the following CLI command to import it with the `test` label, the colon (`:`) separator, and the Key Vault reference content type.
+
+```azurecli-interactive
+az appconfig kv import --label test --separator : --content-type application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8 --name <your store name> --source file --path keyvault-refs.json --format json
+```
+
+The following table shows all the imported data in your App Configuration store.
+
+| Key | Value | Label | Content type |
+|||||
+| .appconfig.featureflag/Beta | {"id":"Beta","description":"","enabled":false,"conditions":{"client_filters":[]}} | dev | application/vnd.microsoft.appconfig.ff+json;charset=utf-8 |
+| Logging:LogLevel:Default | Warning | dev | |
+| Database:ConnectionString | "{\"uri\":\"https://\<your-vault-name\>.vault.azure.net/secrets/db-secret\"}" | test | application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8 |
+
+## File content profile: KVSet
+
+The KVSet file content profile in App Configuration tools refers to a file schema that contains all properties of an App Configuration key-value, including key, value, label, content type, and tags. The file is in JSON format. See [KVSet file schema](https://aka.ms/latest-kvset-schema) for the schema specification.
+
+The following example is a file based upon the KVSet file content profile, named `appconfigdata.json`, containing a feature flag, a Key Vault reference, and a regular key-value.
+
+```json
+{
+ "items": [
+ {
+ "key": ".appconfig.featureflag/Beta",
+ "value": "{\"id\":\"Beta\",\"description\":\"Beta feature\",\"enabled\":true,\"conditions\":{\"client_filters\":[]}}",
+ "label": "dev",
+ "content_type": "application/vnd.microsoft.appconfig.ff+json;charset=utf-8",
+ "tags": {}
+ },
+ {
+ "key": "Database:ConnectionString",
+ "value": "{\"uri\":\"https://<your-vault-name>.vault.azure.net/secrets/db-secret\"}",
+ "label": "test",
+ "content_type": "application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8",
+ "tags": {}
+ },
+ {
+ "key": "Logging:LogLevel:Default",
+ "value": "Debug",
+ "label": "dev",
+ "content_type": null,
+ "tags": {}
+ }
+ ]
+}
+```
+
+> [!TIP]
+> If you followed the example in the previous section and have the data in your App Configuration store, you can export it to a file using the CLI command:
+> ```azurecli-interactive
+> az appconfig kv export --profile appconfig/kvset --label * --name <your store name> --destination file --path appconfigdata.json --format json
+> ```
+> After the file is exported, update the `Beta` feature flag `enabled` property to `true` and change the `Logging:LogLevel:Default` to `Debug`.
+
+Run the following CLI command with the parameter "**--profile appconfig/kvset**" to import the file to your App Configuration store. You don't need to specify any data transformation rules such as separator, label, or content type like you did in the default file content profile section because all information is already in the file.
+
+```azurecli-interactive
+az appconfig kv import --profile appconfig/kvset --name <your store name> --source file --path appconfigdata.json --format json
+```
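Because a KVSet file carries every property per item, a script can consume it directly without any transformation rules. The following hedged sketch (field names taken from the example file above; the validation is illustrative, not the official schema check) loads a KVSet file and lists its items:

```python
import json

REQUIRED = {"key", "value"}

def load_kvset(text):
    """Parse a KVSet-profile document and return its items, checking
    that each item carries at least a key and a value."""
    items = json.loads(text)["items"]
    for item in items:
        missing = REQUIRED - item.keys()
        if missing:
            raise ValueError(f"item missing fields: {missing}")
    return items

kvset = """
{ "items": [
  { "key": "Logging:LogLevel:Default", "value": "Debug",
    "label": "dev", "content_type": null, "tags": {} }
] }
"""

for item in load_kvset(kvset):
    print(item["key"], "=", item["value"], "(label:", item.get("label"), ")")
```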
+
+> [!NOTE]
+> The KVSet file content profile is currently supported in Azure CLI only and requires CLI version 2.30.0 or later.
+
+The following table shows all the imported data in your App Configuration store.
+
+| Key | Value | Label | Content type |
+|||||
+| .appconfig.featureflag/Beta | {"id":"Beta","description":"Beta feature","enabled":**true**,"conditions":{"client_filters":[]}} | dev | application/vnd.microsoft.appconfig.ff+json;charset=utf-8 |
+| Logging:LogLevel:Default | **Debug** | dev | |
+| Database:ConnectionString | "{\"uri\":\"https://\<your-vault-name\>.vault.azure.net/secrets/db-secret\"}" | test | application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8 |
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configuration as code](./howto-best-practices.md#configuration-as-code)
+
+> [!div class="nextstepaction"]
+> [Import and export configuration data](./howto-import-export-data.md)
azure-app-configuration Howto Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-best-practices.md
To address these concerns, we recommend that you use a proxy service between you
## Configuration as Code
-Configuration as code is a practice of managing configuration files under your source control system, for example, a git repository. It gives you benefits like traceability and approval process for any configuration changes. If you adopt configuration as code, App Configuration has tools to assist you in deploying your configuration data. This way, your applications can access the latest data from your App Configuration store(s).
+Configuration as code is a practice of managing configuration files under your source control system, for example, a git repository. It gives you benefits like traceability and an approval process for any configuration changes. If you adopt configuration as code, App Configuration has tools to assist you in [managing your configuration data in files](./concept-config-file.md) and deploying them as part of your build, release, or CI/CD process. This way, your applications can access the latest data from your App Configuration store(s).
- For GitHub, you can enable the [App Configuration Sync GitHub Action](concept-github-action.md) for your repository. Changes to configuration files are synchronized to App Configuration automatically whenever a pull request is merged. - For Azure DevOps, you can include the [Azure App Configuration Push](push-kv-devops-pipeline.md), an Azure pipeline task, in your build or release pipelines for data synchronization.
azure-app-configuration Howto Import Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-import-export-data.md
Title: Import or export data with Azure App Configuration description: Learn how to import or export configuration data to or from Azure App Configuration. Exchange data between your App Configuration store and code project. -+ Previously updated : 02/25/2020- Last updated : 04/06/2022+ # Import or export configuration data
-Azure App Configuration supports data import and export operations. Use these operations to work with configuration data in bulk and exchange data between your App Configuration store and code project. For example, you can set up one App Configuration store for testing and another for production. You can copy application settings between them so that you don't have to enter data twice.
+Azure App Configuration supports data import and export operations. Use these operations to work with configuration data in bulk and exchange data between your App Configuration store and code project. For example, you can set up one App Configuration store for testing and another one for production. You can copy application settings between them so that you don't have to enter data twice.
-This article provides a guide for importing and exporting data with App Configuration. If you'd like to set up an ongoing sync with your GitHub repo, take a look at our [GitHub Action](./concept-github-action.md).
+This article provides a guide for importing and exporting data with App Configuration. If you'd like to set up an ongoing sync with your GitHub repo, take a look at [GitHub Actions](./concept-github-action.md) and [Azure Pipeline tasks](./pull-key-value-devops-pipeline.md).
+
+You can import or export data using either the [Azure portal](https://portal.azure.com) or the [Azure CLI](./scripts/cli-import.md).
## Import data
-Import brings configuration data into an App Configuration store from an existing source. Use the import function to migrate data into an App Configuration store or aggregate data from multiple sources. App Configuration supports importing from a JSON, YAML, or properties file.
+Import brings configuration data into an App Configuration store from an existing source. Use the import function to migrate data into an App Configuration store or aggregate data from multiple sources. App Configuration supports importing from another App Configuration store, an App Service resource, or a configuration file in JSON, YAML, or .properties format.
+
+### [Portal](#tab/azure-portal)
+
+From the Azure portal, follow these steps:
+
+1. Browse to your App Configuration store, and select **Import/export** from the **Operations** menu.
-Import data by using either the [Azure portal](https://portal.azure.com) or the [Azure CLI](./scripts/cli-import.md). From the Azure portal, follow these steps:
+ :::image type="content" source="./media/import-file.png" alt-text="Screenshot of the Azure portal, importing a file.":::
-1. Browse to your App Configuration store, and select **Import/Export** from the **Operations** menu.
+1. On the **Import** tab, select **Configuration file** under **Source service**. Other options are **App Configuration** and **App Services**.
-1. On the **Import** tab, select **Source service** > **Configuration File**.
+1. Fill out the form with the following parameters:
-1. Select **For language** and select your desired input type.
+ | Parameter | Description | Example |
+ |--|--|--|
+ | For language | Choose the language of the file you're importing: .NET, Java (Spring), or Other. | .NET |
+ | File type | Select the type of file you're importing: YAML, properties, or JSON. | JSON |
1. Select the **Folder** icon, and browse to the file to import.
- ![Import file](./media/import-file.png)
+1. Fill out the next part of the form:
+
+ | Parameter | Description | Example |
+ |--|--|-|
+ | Separator | The separator is the character parsed in your imported configuration file to separate the key-values that will be added to your configuration store. Select one of the following options: `.`, `,`, `:`, `;`, `/`, `-`. | : |
+ | Prefix | Optional. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. | TestApp:Settings:Backgroundcolor |
+ | Label | Optional. Select an existing label or enter a new label that will be assigned to your imported key-values. | prod |
+ | Content type | Optional. Indicate if the file you're importing is a Key Vault reference or a JSON file. For more information about Key Vault references, go to [Use Key Vault references in an ASP.NET Core app](/azure/azure-app-configuration/use-key-vault-references-dotnet-core). | JSON (application/json) |
+
+1. Select **Apply** to proceed with the import.
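To illustrate what the separator does, here's a minimal, hypothetical sketch (not App Configuration code) of how a nested JSON configuration file flattens into store keys when a separator is chosen:

```python
import json

def flatten(obj, sep=":", prefix=""):
    """Flatten nested JSON into flat key-value pairs, joining levels with a separator."""
    items = {}
    for key, value in obj.items():
        full_key = f"{prefix}{sep}{key}" if prefix else key
        if isinstance(value, dict):
            # Recurse into nested objects, carrying the accumulated key prefix.
            items.update(flatten(value, sep, full_key))
        else:
            items[full_key] = value
    return items

settings = json.loads('{"TestApp": {"Settings": {"BackgroundColor": "#FFF"}}}')
print(flatten(settings))  # {'TestApp:Settings:BackgroundColor': '#FFF'}
```

With separator `:`, the nested object becomes a single key `TestApp:Settings:BackgroundColor`, matching the prefix style shown in the table above.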
+
+### [Azure CLI](#tab/azure-cli)
+
+Use the Azure CLI as explained below to import App Configuration data. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](/azure/cloud-shell/overview). Specify the source of the data: `appconfig`, `appservice`, or `file`. Optionally specify a source label with `--src-label` and a label to apply with `--label`.
-1. Select a **Separator**, and optionally enter a **Prefix** to use for imported key names.
+Import all keys and feature flags from a file and apply the label `test`.
-1. Optionally, select a **Label**.
+```azurecli
+az appconfig kv import --name <your-app-config-store-name> --label test --source file --path D:/abc.json --format json
+```
-1. Select **Apply** to finish the import.
+Import all keys with the label `test` from another App Configuration store and apply the label `test2`.
- ![Import file finished](./media/import-file-complete.png)
+```azurecli
+az appconfig kv import --name <your-app-config-store-name> --source appconfig --src-label test --label test2 --src-name <another-app-config-store-name>
+```
+
+Import all keys from an App Service application and apply the null label.
+
+For `--appservice-account`, use the ARM ID of the App Service, or use its name if it's in the same subscription and resource group as the App Configuration store.
+
+```azurecli
+az appconfig kv import --name <your-app-config-store-name> --source appservice --appservice-account <your-app-service>
+```
+
+For more details and examples, go to [az appconfig kv import](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-import&preserve-view=true).
++ ## Export data
-Export writes configuration data stored in App Configuration to another destination. Use the export function, for example, to save data in an App Configuration store to a file that's embedded with your application code during deployment.
+Export writes configuration data stored in App Configuration to another destination. Use the export function, for example, to save data from an App Configuration store to a file that can be embedded in your application code during deployment. You can export data to another App Configuration store, an App Service resource, or a configuration file in JSON, YAML, or .properties format.
+
+### [Portal](#tab/azure-portal)
-Export data by using either the [Azure portal](https://portal.azure.com) or the [Azure CLI](./scripts/cli-export.md). From the Azure portal, follow these steps:
+From the [Azure portal](https://portal.azure.com), follow these steps:
-1. Browse to your App Configuration store, and select **Import/Export**.
+1. Browse to your App Configuration store, and select **Import/export**.
-1. On the **Export** tab, select **Target service** > **Configuration File**.
+1. On the **Export** tab, select **Target service** > **Configuration file**.
-1. Optionally enter a **Prefix** and select a **Label** and a point-in-time for keys to be exported.
+1. Fill out the form with the following parameters:
-1. Select a **File type** > **Separator**.
+ | Parameter | Description | Example |
+ ||--|-|
+ | Prefix | Optional. A key prefix is the beginning part of a key. Enter a prefix to restrict your export to key-values with the specified prefix. | TestApp:Settings:Backgroundcolor |
+ | From label | Optional. Select an existing label to restrict your export to key-values with a specific label. If you don't select a label, only key-values without a label will be exported. See note below. | prod |
+ | At a specific time | Optional. Fill out to export key-values from a specific point in time. | 01/28/2021 12:00:00 AM |
+ | File type | Select the type of file you're exporting: YAML, properties, or JSON. | JSON |
+ | Separator | The separator is the character that will be used in the configuration file to separate the exported key-values from one another. Select one of the following options: `.`, `,`, `:`, `;`, `/`, `-`. | ; |
-1. Select **Apply** to finish the export.
+ > [!IMPORTANT]
+ > If you don't select a label, only keys without labels will be exported. To export a key-value with a label, you must select its label. You can select only one label per export, so to export keys that have multiple labels, export multiple times, selecting one label each time.
- ![Export file finished](./media/export-file-complete.png)
+1. Select **Export** to finish the export.
+
+ :::image type="content" source="./media/export-file-complete.png" alt-text="Screenshot of the Azure portal, exporting a file":::
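Conversely, here's a hypothetical sketch (not App Configuration code) of how exported key-values whose keys use a separator map back to a nested structure in the resulting file:

```python
def unflatten(pairs, sep=";"):
    """Rebuild a nested structure from flat key-value pairs whose keys use a separator."""
    root = {}
    for key, value in pairs.items():
        node = root
        parts = key.split(sep)
        for part in parts[:-1]:
            # Walk or create intermediate levels for each key segment.
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return root

print(unflatten({"TestApp;Settings;BackgroundColor": "#FFF"}))
# {'TestApp': {'Settings': {'BackgroundColor': '#FFF'}}}
```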
+
+### [Azure CLI](#tab/azure-cli)
+
+Use the Azure CLI as explained below to export configurations from App Configuration to another destination. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](/azure/cloud-shell/overview). Specify the destination of the data: `appconfig`, `appservice`, or `file`. Specify a label for the data you want to export with `--label`, or omit the label to export data that has no label.
+
+> [!IMPORTANT]
+> If the keys you want to export have labels, be sure to select the corresponding labels. If you don't select a label, only keys without labels will be exported.
+
+Export all keys and feature flags with the label `test` to a JSON file.
+
+```azurecli
+az appconfig kv export --name <your-app-config-store-name> --label test --destination file --path D:/abc.json --format json
+```
+
+Export all keys with the null label, excluding feature flags, to a JSON file.
+
+```azurecli
+az appconfig kv export --name <your-app-config-store-name> --destination file --path D:/abc.json --format json --skip-features
+```
+
+Export all keys and feature flags with all labels to another App Configuration store.
+
+```azurecli
+az appconfig kv export --name <your-app-config-store-name> --destination appconfig --dest-name <another-app-config-store-name> --key * --label * --preserve-labels
+```
+
+Export all keys and feature flags with all labels to another App Configuration store and overwrite the destination labels.
+
+```azurecli
+az appconfig kv export --name <your-app-config-store-name> --destination appconfig --dest-name <another-app-config-store-name> --key * --label * --dest-label ExportedKeys
+```
+
+For more details and examples, go to [az appconfig kv export](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-export&preserve-view=true).
++ ## Next steps > [!div class="nextstepaction"]
-> [Create an ASP.NET Core web app](./quickstart-aspnet-core-app.md)
+> [Create an ASP.NET Core web app](./quickstart-aspnet-core-app.md)
azure-app-configuration Quickstart Dotnet Core App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-dotnet-core-app.md
Title: Quickstart for Azure App Configuration with .NET Core | Microsoft Docs description: In this quickstart, create a .NET Core app with Azure App Configuration to centralize storage and management of application settings separate from your code. -+ ms.devlang: csharp Previously updated : 09/28/2020- Last updated : 04/05/2022+ #Customer intent: As a .NET Core developer, I want to manage all my app settings in one place. # Quickstart: Create a .NET Core app with App Configuration
You use the [.NET Core command-line interface (CLI)](/dotnet/core/tools/) to cre
1. Set an environment variable named **ConnectionString**, and set it to the access key to your App Configuration store. At the command line, run the following command:
- ```cmd
+ ### [Windows command prompt](#tab/windowscommandprompt)
+
+ To build and run the app locally using the Windows command prompt, run the following command:
+
+ ```console
setx ConnectionString "connection-string-of-your-app-configuration-store" ```
+ Restart the command prompt to allow the change to take effect. Print the value of the environment variable to validate that it is set properly.
+
+ ### [PowerShell](#tab/powershell)
+ If you use Windows PowerShell, run the following command: ```azurepowershell $Env:ConnectionString = "connection-string-of-your-app-configuration-store" ```
- If you use macOS or Linux, run the following command:
+ ### [macOS](#tab/unix)
+
+ If you use macOS, run the following command:
+
+ ```console
+ export ConnectionString='connection-string-of-your-app-configuration-store'
+ ```
+
+ The variable takes effect in the current shell session. Print the value of the environment variable to validate that it is set properly.
+
+ ### [Linux](#tab/linux)
+
+ If you use Linux, run the following command:
```console export ConnectionString='connection-string-of-your-app-configuration-store'
You use the [.NET Core command-line interface (CLI)](/dotnet/core/tools/) to cre
Restart the command prompt to allow the change to take effect. Print the value of the environment variable to validate that it is set properly.
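For example, on Linux or macOS you can confirm in the same shell session that the variable is set (the value shown is a placeholder, not a real connection string):

```shell
export ConnectionString='connection-string-of-your-app-configuration-store'
echo "$ConnectionString"
```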
-2. Run the following command to build the console app:
+
+
+1. Run the following command to build the console app:
```dotnetcli dotnet build ```
-3. After the build successfully completes, run the following command to run the app locally:
+1. After the build successfully completes, run the following command to run the app locally:
```dotnetcli dotnet run
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Basic metrics include request, dependency, and exception rate. Performance metri
Live Metrics Stream uses different IP addresses than other Application Insights telemetry. Make sure [those IP addresses](./ip-addresses.md) are open in your firewall. Also check the [outgoing ports for Live Metrics Stream](./ip-addresses.md#outgoing-ports) are open in the firewall of your servers.
-As described in the [Azure TLS 1.2 migration announcement](https://azure.microsoft.com/updates/azuretls12/), Live Metrics now only supports TLS 1.2 by default. If you are using an older version of TLS , Live Metrics will not display any data. For applications based on .NET Framework 4.5.1 refer to [How to enable Transport Layer Security (TLS) 1.2 on clients - Configuration Manager](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client#bkmk_net) to support newer TLS version.
+As described in the [Azure TLS 1.2 migration announcement](https://azure.microsoft.com/updates/azuretls12/), Live Metrics now only supports TLS 1.2. If you're using an older version of TLS, Live Metrics won't display any data. For applications based on .NET Framework 4.5.1, see [How to enable Transport Layer Security (TLS) 1.2 on clients - Configuration Manager](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client#bkmk_net) to support a newer TLS version.
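As a sketch of the approach described in the linked Configuration Manager article, enabling strong cryptography for .NET Framework applications typically involves registry values like the following (verify against the linked article before applying, since registry changes affect all .NET Framework applications on the machine):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
"SchUseStrongCrypto"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319]
"SchUseStrongCrypto"=dword:00000001
```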
+
+> [!WARNING]
+> Currently, the authenticated channel only supports manual SDK instrumentation. The authenticated channel can't be configured with auto-instrumentation (previously known as "codeless attach").
+
+### Missing configuration for .NET
+
+1. Verify you're using the latest version of the NuGet package [Microsoft.ApplicationInsights.PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector).
+2. Edit the `ApplicationInsights.config` file:
+    * Verify that the connection string points to the Application Insights resource you're using.
+    * Locate the `QuickPulseTelemetryModule` configuration option; if it isn't there, add it.
+    * Locate the `QuickPulseTelemetryProcessor` configuration option; if it isn't there, add it.
+
+    ```xml
+    <TelemetryModules>
+      <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse.QuickPulseTelemetryModule, Microsoft.AI.PerfCounterCollector"/>
+    </TelemetryModules>
+
+    <TelemetryProcessors>
+      <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse.QuickPulseTelemetryProcessor, Microsoft.AI.PerfCounterCollector"/>
+    </TelemetryProcessors>
+    ```
+3. Restart the application
## Next steps
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-schema.md
The schema for resource logs varies depending on the resource and log category.
| Azure Firewall | [Logging for Azure Firewall](../../firewall/logs-and-metrics.md#diagnostic-logs) | | Azure Front Door | [Logging for Azure Front Door](../../frontdoor/front-door-diagnostics.md) | | Azure IoT Hub | [IoT Hub operations](../../iot-hub/monitor-iot-hub-reference.md#resource-logs) |
+| Azure IoT Hub Device Provisioning Service| [Device Provisioning Service operations](../../iot-dps/monitor-iot-dps-reference.md#resource-logs) |
| Azure Key Vault |[Azure Key Vault logging](../../key-vault/general/logging.md) | | Azure Kubernetes Service |[Azure Kubernetes Service logging](../../aks/monitor-aks-reference.md#resource-logs) | | Azure Load Balancer |[Log Analytics for Azure Load Balancer](../../load-balancer/monitor-load-balancer.md) |
azure-netapp-files Azacsnap Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-troubleshoot.md
Cannot get SAP HANA version, exiting with error: 127
If running `azacsnap` presents an error such as `* 258: insufficient privilege`, check to ensure the appropriate privilege has been assigned to the "AZACSNAP" database user (assuming this is the user created per the [installation guide](azacsnap-installation.md#enable-communication-with-database)). Verify the user's current privilege with the following command: ```bash
-hdbsql -U AZACSNAP "select GRANTEE,GRANTEE_TYPE,PRIVILEGE,IS_VALID,IS_GRANTABLE from sys.granted_privileges "' | grep -i -e GRANTEE -e azacsnap
+hdbsql -U AZACSNAP "select GRANTEE,GRANTEE_TYPE,PRIVILEGE,IS_VALID,IS_GRANTABLE from sys.granted_privileges " | grep -i -e GRANTEE -e azacsnap
``` ```output
ERROR: Could not create StorageANF object [authFile = 'azureauth.json']
## Next steps -- [Tips](azacsnap-tips.md)
+- [Tips](azacsnap-tips.md)
azure-resource-manager Publish Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-managed-identity.md
Last updated 05/13/2019+ # Azure Managed Application with Managed Identity
A basic Azure Resource Manager template that deploys a Managed Application with
## Granting access to Azure resources
-Once a Managed Application is granted an identity, it can be granted access to existing Azure resources. This process can be done through the Access control (IAM) interface in the Azure portal. The name of the Managed Application or **user-assigned identity** can be searched to add a role assignment.
+Once a Managed Application is granted an identity, it can be granted access to existing Azure resources by creating a role assignment.
-![Add role assignment for Managed Application](./media/publish-managed-identity/identity-role-assignment.png)
+To do so, search for and select the name of the Managed Application or **user-assigned identity**, and then select **Access control (IAM)**. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
## Linking existing Azure resources
Once the Managed Application package is created, the Managed Application can be
The token of the Managed Application can now be accessed through the `listTokens` api from the publisher tenant. An example request might look like:
-``` HTTP
+```http
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Solutions/applications/{applicationName}/listTokens?api-version=2018-09-01-preview HTTP/1.1 {
userAssignedIdentities | *no* | The list of user-assigned managed identities to
A sample response might look like:
-``` HTTP
+```http
HTTP/1.1 200 OK Content-Type: application/json
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resources providers that are marked with **- registered** are registered by
| | - | | Microsoft.AAD | [Azure Active Directory Domain Services](../../active-directory-domain-services/index.yml) | | Microsoft.Addons | core |
+| Microsoft.App | [Azure Container Apps](../../container-apps/index.yml) |
| Microsoft.ADHybridHealthService - [registered](#registration) | [Azure Active Directory](../../active-directory/index.yml) | | Microsoft.Advisor | [Azure Advisor](../../advisor/index.yml) | | Microsoft.AlertsManagement | [Azure Monitor](../../azure-monitor/index.yml) |
azure-signalr Signalr Howto Authorize Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-application.md
Last updated 09/06/2021
ms.devlang: csharp+ # Authorize request to SignalR resources with Azure AD from Azure applications
To learn more about adding credentials, see
## Add role assignments on Azure portal
-This sample shows how to assign a `SignalR App Server` role to a service principal (application) over a SignalR resource.
+The following steps describe how to assign a `SignalR App Server` role to a service principal (application) over a SignalR resource. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
> [!Note] > A role can be assigned to any scope, including management group, subscription, resource group or a single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md)
-1. On the [Azure portal](https://portal.azure.com/), navigate to your SignalR resource.
+1. From the [Azure portal](https://portal.azure.com/), navigate to your SignalR resource.
-1. Click **Access Control (IAM)** to display access control settings for the Azure SignalR.
+1. Select **Access Control (IAM)**.
-1. Click the **Role assignments** tab to view the role assignments at this scope.
+1. Select **Add > Add role assignment**.
- The following screenshot shows an example of the Access control (IAM) page for a SignalR resource.
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
- ![Screenshot of access control](./media/authenticate/access-control.png)
+1. On the **Roles** tab, select **SignalR App Server**.
-1. Click **Add > Add role assignment**.
+1. On the **Members** tab, select **User, group, or service principal**, and then select **Select members**.
-1. On the **Roles** tab, select `SignalR App Server`.
+1. Search for and select the application to which you'd like to assign the role.
-1. Click **Next**.
-
- ![Screenshot of adding role assignment](./media/authenticate/add-role-assignment.png)
-
-1. On the **Members** tab, under **Assign access to** section, select **User, group, or service principal**.
-
-1. Click **Select Members**
-
-3. Search for and select the application that you would like to assign the role to.
-
-1. Click **Select** to confirm the selection.
-
-4. Click **Next**.
-
- ![Screenshot of assigning role to service principals](./media/authenticate/assign-role-to-service-principals.png)
-
-5. Click **Review + assign** to confirm the change.
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
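As an alternative to the portal steps above, the same assignment can be sketched with the Azure CLI. The placeholder values and the scope string are assumptions based on the typical SignalR resource ID format; adjust them for your resource:

```azurecli
az role assignment create \
  --assignee "<application-client-id>" \
  --role "SignalR App Server" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.SignalRService/SignalR/<signalr-resource-name>"
```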
> [!IMPORTANT] > Azure role assignments may take up to 30 minutes to propagate.
To learn more about how to assign and manage Azure role assignments, see these a
- [Assign Azure roles using Azure CLI](../role-based-access-control/role-assignments-cli.md) - [Assign Azure roles using Azure Resource Manager templates](../role-based-access-control/role-assignments-template.md)
-## Configure you app
+## Configure your app
### App server
azure-signalr Signalr Howto Authorize Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-managed-identity.md
Last updated 09/06/2021
ms.devlang: csharp+ # Authorize request to SignalR resources with Azure AD from managed identities
See [How to use managed identities for App Service and Azure Functions](../app-s
## Add role assignments on Azure portal
-This sample shows how to assign a `SignalR App Server` role to a system-assigned identity over a SignalR resource.
+The following steps describe how to assign a `SignalR App Server` role to a system-assigned identity over a SignalR resource. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
> [!Note] > A role can be assigned to any scope, including management group, subscription, resource group or a single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md)
-1. Open [Azure portal](https://portal.azure.com/), navigate to your SignalR resource.
+1. From the [Azure portal](https://portal.azure.com/), navigate to your SignalR resource.
-1. Click **Access Control (IAM)** to display access control settings for the Azure SignalR.
+1. Select **Access Control (IAM)**.
- The following shows an example of the Access control (IAM) page for a resource group.
+1. Select **Add > Add role assignment**.
-1. Click the **Role assignments** tab to view the role assignments at this scope.
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
- The following screenshot shows an example of the Access control (IAM) page for a SignalR resource.
+1. On the **Roles** tab, select **SignalR App Server**.
- ![Screenshot of access control](./media/authenticate/access-control.png)
+1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
-1. Click **Add > Add role assignment**.
+1. Select **System-assigned managed identity**, search for the virtual machine to which you'd like to assign the role, and then select it.
-1. On the **Roles** tab, select `SignalR App Server`.
-
-1. Click **Next**.
-
- ![Screenshot of adding role assignment](./media/authenticate/add-role-assignment.png)
-
-1. On the **Members** tab, under **Assign access to** section, select **Managed identity**.
-
-1. Click **Select Members**.
-
-1. In the **Select managed identities** pane, select **System-assigned managed identity > Virtual machine**
-
-1. Search for and select the virtual machine that you would like to assign the role to.
-
-1. Click **Select** to confirm the selection.
-
-2. Click **Next**.
-
- ![Screenshot of assigning role to managed identities](./media/authenticate/assign-role-to-managed-identities.png)
-
-3. Click **Review + assign** to confirm the change.
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
> [!IMPORTANT] > Azure role assignments may take up to 30 minutes to propagate.
azure-video-analyzer Video Indexer Embed Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-embed-widgets.md
Title: Embed Azure Video Analyzer for Media (formerly Video Indexer) widgets in your apps description: Learn how to embed Azure Video Analyzer for Media (formerly Video Indexer) widgets in your apps. Previously updated : 03/29/2022 Last updated : 04/15/2022
A Cognitive Insights widget includes all visual insights that were extracted fro
|Name|Definition|Description| ||||
-|`widgets` | Strings separated by comma | Allows you to control the insights that you want to render.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?widgets=people,keywords` renders only people and keywords UI insights.<br/>Available options: people, animatedCharacters, keywords, labels, sentiments, emotions, topics, keyframes, transcript, ocr, speakers, scenes, and namedEntities.|
+|`widgets` | Strings separated by comma | Allows you to control the insights that you want to render.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?widgets=people,keywords` renders only people and keywords UI insights.<br/>Available options: people, animatedCharacters, keywords, audioEffects, labels, sentiments, emotions, topics, keyframes, transcript, ocr, speakers, scenes, spokenLanguage, observedPeople, and namedEntities.|
|`controls`|Strings separated by comma|Allows you to control the controls that you want to render.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?controls=search,download` renders only search option and download button.<br/>Available options: search, download, presets, language.| |`language`|A short language code (language name)|Controls insights language.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?language=es-es` <br/>or `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?language=spanish`| |`locale` | A short language code | Controls the language of the UI. The default value is `en`. <br/>Example: `locale=de`.| |`tab` | The default selected tab | Controls the **Insights** tab that's rendered by default. <br/>Example: `tab=timeline` renders the insights with the **Timeline** tab selected.|
+|`search` | String | Allows you to control the initial search term.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?search=azure` renders the insights filtered by the word "azure". |
+|`sort` | Strings separated by comma | Allows you to control the sorting of an insight.<br/>Each sort consists of three values: widget name, property, and order, connected with an underscore: `sort=name_property_order`.<br/>Available options:<br/>widgets: keywords, audioEffects, labels, sentiments, emotions, keyframes, scenes, namedEntities, and spokenLanguage.<br/>property: startTime, endTime, seenDuration, name, and id.<br/>order: asc and desc.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?sort=labels_id_asc,keywords_name_desc` renders the labels sorted by id in ascending order and the keywords sorted by name in descending order.|
|`location` ||The `location` parameter must be included in the embedded links, see [how to get the name of your region](regions.md). If your account is in preview, the `trial` should be used for the location value. `trial` is the default value for the `location` parameter.|
-|`search`|A free text for search |Allows you to control the initial search term. Example - `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?search=vi` renders the insights filtered by the word "vi".|
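For example, an embed URL combining several of these parameters might look like the following iframe sketch (the account and video IDs are placeholders, and the dimensions are arbitrary):

```html
<iframe
  src="https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?widgets=people,keywords&sort=keywords_name_desc&locale=en&location=trial"
  width="580" height="780" allowfullscreen></iframe>
```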
### Player widget
You can use the Player widget to stream video by using adaptive bit rate. The Pl
|Name|Definition|Description| |||| |`t` | Seconds from the start | Makes the player start playing from the specified time point.<br/> Example: `t=60`. |
-|`captions` | A language code | Fetches the caption in the specified language during the widget loading to be available on the **Captions** menu.<br/> Example: `captions=en-US`. |
+|`captions` | A language code / A language code array | Fetches the caption in the specified language during the widget loading to be available on the **Captions** menu.<br/> Example: `captions=en-US`, `captions=en-US,es-ES` |
|`showCaptions` | A Boolean value | Makes the player load with the captions already enabled.<br/> Example: `showCaptions=true`. | |`type`| | Activates an audio player skin (the video part is removed).<br/> Example: `type=audio`. | |`autoplay` | A Boolean value | Indicates if the player should start playing the video when loaded. The default value is `true`.<br/> Example: `autoplay=false`. |
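Combining these parameters, a player embed might look like this sketch (the account and video IDs are placeholders, and the dimensions are arbitrary):

```html
<iframe
  src="https://www.videoindexer.ai/embed/player/<accountId>/<videoId>/?t=60&captions=en-US&showCaptions=true&autoplay=false&location=trial"
  width="560" height="315" allowfullscreen></iframe>
```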
See the [code samples](https://github.com/Azure-Samples/media-services-video-ind
For more information, see [supported browsers](video-indexer-overview.md#supported-browsers).
+## Embed and customize Azure Video Analyzer for Media widgets in your app using npm package
+Using our [@azure/video-analyzer-for-media-widgets](https://www.npmjs.com/package/@azure/video-analyzer-for-media-widgets) NPM package, you can add the insights widgets to your app and customize it according to your needs.
+
+Instead of adding an iframe element to embed the insights widget, this new package lets you easily embed and communicate between our widgets. Customizing your widget is supported only in this package, all in one place.
+
+For more information, see our official [GitHub](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/Embedding%20widgets/widget-customization#readme).
+
## Next steps

For information about how to view and edit Video Analyzer for Media insights, see [View and edit Video Analyzer for Media insights](video-indexer-view-edit.md).
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
For the final step, you'll need to delete the resource bridge VM and the VM temp
## Preview FAQ
-**How do you onboard a customer?**
+**Is Arc supported in all the Azure VMware Solution regions?**
-Fill in the [Customer Enrollment form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR0SUP-7nYapHr1Tk0MFNflVUNEJQNzFONVhVOUlVTVk3V1hNTjJPVDM5WS4u) and we'll be in touch.
+Arc is supported in the East US and West Europe regions; however, we're working to extend regional support.
**How does support work?**
cognitive-services Custom Question Answering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/custom-question-answering.md
- Title: QnA Maker managed is now renamed Custom question answering-
-description: This article contains news about QnA Maker feature changes.
----- Previously updated : 05/11/2021---
-# QnA Maker managed is now renamed to custom question answering
-
-[QnA Maker managed (preview)](https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/ba-p/1845575) was launched in November 2020 as a free public preview offering. It introduced several new features including enhanced relevance using a deep learning ranker, precise answers, and end-to-end region support.
-
-As part of our effort to consolidate the language offerings from Cognitive Services, QnA Maker managed is now a feature within Text Analytics, and it has been renamed to custom question answering.
-
-## Creating a new custom question answering service
-
-[Create a Text Analytics resource](https://portal.azure.com/?quickstart=true#create/Microsoft.CognitiveServicesTextAnalytics) to use question answering and other features such as entity recognition, sentiment analysis, etc.
-
-Now when you create a new Text Analytics resource, you can select features that you want included. Select **custom question answering (preview)** and continue to create your resource.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of create a Text Analytics resource UI menu with custom question answering feature selected]( ./media/select-feature.png) ]( ./media/select-feature.png#lightbox)
--
-You can no longer create a QnA Maker managed resource from the QnA Maker create flow, instead you will be redirected to the Text Analytics service. There is no change to the QnA Maker stable release.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of resource creation menu]( ./media/create-resource.png) ]( ./media/create-resource.png#lightbox)
-
-## Details
-- All existing QnA Maker managed (preview) resources continue to work as before. There is no action required for these resources at this time.
-- The creation flow for Custom question answering (preview) is the primary change. The service, portal, endpoints, SDK, etc. remain as before.
-- Custom question answering (preview) continues to be offered as a free public preview. This feature is only available as part of Text Analytics Standard resources. Do not change your pricing tier for Text Analytics resources to free.
-- Custom question answering (preview) is available in the following regions:
- - South Central US
- - North Europe
- - Australia East.
-
-## Next steps
-
-* [Get started with QnA Maker client library](./quickstarts/quickstart-sdk.md)
-* [Get started with QnA Maker portal](./quickstarts/create-publish-knowledge-base.md)
-
cognitive-services Improve Accuracy Phrase List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/improve-accuracy-phrase-list.md
Previously updated : 01/26/2022 Last updated : 04/14/2022 zone_pivot_groups: programming-languages-set-two-with-js-spx
A phrase list is a list of words or phrases provided ahead of time to help improve their recognition. Adding a phrase to a phrase list increases its importance, thus making it more likely to be recognized.
+For supported phrase list locales, see [Language and voice support for the Speech service](language-support.md?tabs=phraselist).
+
Examples of phrases include:

* Names
* Geographical locations
Phrase lists are simple and lightweight:
- **Just-in-time**: A phrase list is provided just before starting the speech recognition, eliminating the need to train a custom model.
- **Lightweight**: You don't need a large data set. Simply provide a word or phrase to give it importance.
-You can use the Speech SDK or Speech Command Line Interface (CLI). The Batch transcription API does not support phrase lists.
+You can use phrase lists with the [Speech Studio](speech-studio-overview.md), [Speech SDK](quickstarts/setup-platform.md), or [Speech Command Line Interface (CLI)](spx-overview.md). The Batch transcription API does not support phrase lists.
There are some situations where [training a custom model](custom-speech-overview.md) that includes phrases is likely the best option to improve accuracy. In these cases you would not use a phrase list:

- If you need to use a large list of phrases. A phrase list shouldn't have more than 500 phrases.
-- If you need a phrase list for languages that are not currently supported. For supported phrase list locales see [Language and voice support for the Speech service](language-support.md?tabs=phraselist).
+- If you need a phrase list for languages that are not currently supported.
- If you use a custom endpoint. Phrase lists can't be used with custom endpoints.

## Try it in Speech Studio
-You can use Speech Studio to test how phrase list would help improve recognition for your audio. To implement a phrase list with your application in production, you'll use the Speech SDK or Speech CLI.
+You can use [Speech Studio](speech-studio-overview.md) to test how a phrase list can help improve recognition for your audio. To implement a phrase list with your application in production, you'll use the Speech SDK or Speech CLI.
For example, let's say that you want the Speech service to recognize this sentence: "Hi Rehaan, this is Jessie from Contoso bank. "
In this case you would want to add "Rehaan", "Jessie", and "Contoso" to your phrase list.
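In production, attaching those phrases with the Speech SDK looks roughly like the following Python sketch. The `PhraseListGrammar` type and `addPhrase` method come from the SDK; the key, region, and recognizer construction under the `__main__` guard are placeholder assumptions you'd replace with your own Speech resource values.

```python
PHRASES = ["Rehaan", "Jessie", "Contoso"]

def add_phrases(grammar, phrases=PHRASES):
    """Add each phrase to a PhraseListGrammar (any object with addPhrase)."""
    for phrase in phrases:
        grammar.addPhrase(phrase)  # boosts recognition of this phrase
    return grammar

if __name__ == "__main__":
    # pip install azure-cognitiveservices-speech
    import azure.cognitiveservices.speech as speechsdk

    # Placeholder key/region; use your own Speech resource values.
    config = speechsdk.SpeechConfig(subscription="YourKey", region="YourRegion")
    recognizer = speechsdk.SpeechRecognizer(speech_config=config)
    add_phrases(speechsdk.PhraseListGrammar.from_recognizer(recognizer))
    print(recognizer.recognize_once().text)
```

Because the phrase list is supplied just before recognition starts, no model training is involved; you can change the list on every run.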
Now try Speech Studio to see how a phrase list can improve recognition accuracy.

> [!NOTE]
-> You may be prompted to select your Azure subscription and Speech resource, and then acknowledge billing for your region. If you are new to Azure or Speech, see [Try the Speech service for free](overview.md#try-the-speech-service-for-free).
+> You may be prompted to select your Azure subscription and Speech resource, and then acknowledge billing for your region.
1. Sign in to [Speech Studio](https://speech.microsoft.com/).
1. Select **Real-time Speech-to-text**.
cognitive-services Migrate Qnamaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/migrate-qnamaker.md
Resource level settings such as Role-based access control (RBAC) are not migrate
## Steps to migrate SDKs
-This [SDK Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Language.QuestionAnswering_1.1.0-beta.1/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering/MigrationGuide.md) is intended to assist in the migration to the new Question Answering client library, Azure.AI.Language.QuestionAnswering, from the old one, Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker. It will focus on side-by-side comparisons for similar operations between the two packages.
+This [SDK Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Language.QuestionAnswering_1.1.0-beta.1/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering/MigrationGuide.md) is intended to assist in the migration to the new Question Answering client library, [Azure.AI.Language.QuestionAnswering](https://www.nuget.org/packages/Azure.AI.Language.QuestionAnswering), from the old one, [Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker). It focuses on side-by-side comparisons for similar operations between the two packages.
## Steps to migrate knowledge bases
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/sdk-features.md
The following list presents the set of features which are currently available in
| | Get notified when participants are actively typing a message in a chat thread | ✔️ | ❌ | ❌ | ❌ | ✔️ | ✔️ |
| | Get all messages in a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| | Send Unicode emojis as part of message content | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Add metadata to chat messages | ✔️ | ✔️ | ✔️ | ✔️ | ❌ | ✔️ |
-| | Add display name to typing indicator notification | ✔️ | ✔️ | ✔️ | ✔️ | ❌ | ✔️ |
+| | Add metadata to chat messages | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Add display name to typing indicator notification | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
|Real-time notifications (enabled by proprietary signaling package**)| Chat clients can subscribe to get real-time updates for incoming messages and other operations occurring in a chat thread. To see a list of supported updates for real-time notifications, see [Chat concepts](concepts.md#real-time-notifications) | ✔️ | ❌ | ❌ | ❌ | ✔️ | ✔️ |
|Mobile push notifications with Notification Hub | The Chat SDK provides APIs allowing clients to be notified for incoming messages and other operations occurring in a chat thread by connecting an Azure Notification Hub to your Communication Services resource. In situations where your mobile app is not running in the foreground, patterns are available to [fire pop-up notifications](../notifications.md) ("toasts") to inform end-users, see [Chat concepts](concepts.md#push-notifications). | ❌ | ❌ | ❌ | ❌ | ❌ | ✔️ |
| Server Events with Event Grid | Use the chat events available in Azure Event Grid to plug custom notification services or post that event to a webhook to execute business logic like updating CRM records after a chat is finished | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
communication-services Create Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/create-communication-resource.md
Get started with Azure Communication Services by provisioning your first Communi
> [!WARNING]
-> Note that while Communication Services is available in multiple geographies, in order to get a phone number the resource must have a data location set to 'US'.
-> Also note it is not possible to create a resource group at the same time as a resource for Azure Communication Services. When creating a resource, a resource group that has been created already must be used.
+> While Communication Services is available in multiple geographies, in order to get a phone number the resource must have a data location set to 'US'.
+> Also, it's not possible to create a resource group at the same time as an Azure Communication Services resource. When creating a resource, use an existing resource group.
::: zone pivot="platform-azp"
[!INCLUDE [Azure portal](./includes/create-resource-azp.md)]
After you add the environment variable, run `source ~/.bash_profile` from your c
## Clean up resources
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. [Deleting the resource group](../../azure-resource-manager/management/manage-resource-groups-portal.md#delete-resource-groups) also deletes any other resources associated with it.
If you have any phone numbers assigned to your resource upon resource deletion, the phone numbers will be released from your resource automatically at the same time.
container-apps Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/billing.md
Azure Container Apps billing consists of two types of charges:

- **[Resource consumption](#resource-consumption-charges)**: The amount of resources allocated to your container app on a per-second basis, billed in vCPU-seconds and GiB-seconds.
- **[HTTP requests](#request-charges)**: The number of HTTP requests your container app receives.

The following resources are free during each calendar month, per subscription:
This article describes how to calculate the cost of running your container app.
## Resource consumption charges
-Azure Container Apps runs replicas of your application based on the [scaling rules and replica count limits](scale-app.md) you configure. You're charged for the amount of resources allocated to each replica while it's running.
-
-There are two meters for resource consumption:
+Azure Container Apps runs replicas of your application based on the [scaling rules and replica count limits](scale-app.md) you configure for each revision. You're charged for the amount of resources allocated to each replica while it's running.
-- **vCPU-seconds**: The amount of vCPU cores allocated to your container app on a per-second basis.
+There are two meters for resource consumption:
+- **vCPU-seconds**: The number of vCPU cores allocated to your container app on a per-second basis.
- **GiB-seconds**: The amount of memory allocated to your container app on a per-second basis.

The first 180,000 vCPU-seconds and 360,000 GiB-seconds in each subscription per calendar month are free.
-The rate you pay for resource consumption depends on the state of your container app and replicas. By default, replicas are charged at an *active* rate. However, in certain conditions, a replica can enter an *idle* state. While in an *idle* state, resources are billed at a reduced rate.
+The rate you pay for resource consumption depends on the state of your container app's revisions and replicas. By default, replicas are charged at an *active* rate. However, in certain conditions, a replica can enter an *idle* state. While in an *idle* state, resources are billed at a reduced rate.
### No replicas are running
-When your container app is scaled down to zero replicas, no resource consumption charges are incurred.
+When a revision is scaled to zero replicas, no resource consumption charges are incurred.
### Minimum number of replicas are running
-Idle usage charges are applied when your replicas are running under a specific set of circumstances. The criteria for idle charges include:
+Idle usage charges may apply when a revision is running under a specific set of circumstances. To be eligible for idle charges, a revision must meet the following criteria.
-- When your container app<sup>1</sup> is configured with a [minimum replica count](scale-app.md) of at least one.
-- The app is scaled down to the minimum replica count.
+- It is configured with a [minimum replica count](scale-app.md) greater than zero.
+- It is scaled to the minimum replica count.
Usage charges are calculated individually for each replica. A replica is considered idle when *all* of the following conditions are true:
+- The replica is running in a revision that is currently eligible for idle charges.
- All of the containers in the replica have started and are running.
- The replica isn't processing any HTTP requests.
- The replica is using less than 0.01 vCPU cores.
When a replica is idle, resource consumption charges are calculated at the reduced rate.
### More than the minimum number of replicas are running
-When your container app<sup>1</sup> is scaled above the [minimum replica count](scale-app.md), all running replicas are charged for resource consumption at the active rate.
-
-<sup>1</sup> For container apps in multiple revision mode, charges are based on the current replica count in a revision relative to its configured minimum replica count.
+When a revision is scaled above the [minimum replica count](scale-app.md), all of its running replicas are charged for resource consumption at the active rate.
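To make the two meters and their free grants concrete, here's a minimal Python sketch that subtracts the monthly free grants (180,000 vCPU-seconds and 360,000 GiB-seconds per subscription) from measured consumption. It deliberately leaves out the per-second prices and the active/idle rate distinction described above; check the Azure pricing page for actual rates.

```python
FREE_VCPU_SECONDS = 180_000
FREE_GIB_SECONDS = 360_000

def billable_consumption(vcpu_seconds: float, gib_seconds: float) -> tuple:
    """Return (billable vCPU-seconds, billable GiB-seconds) for one
    subscription in one calendar month, after the free grants."""
    return (max(0.0, vcpu_seconds - FREE_VCPU_SECONDS),
            max(0.0, gib_seconds - FREE_GIB_SECONDS))

# Example: a replica with 0.5 vCPU / 1 GiB running actively for 5 days.
seconds = 5 * 24 * 3600          # 432,000 seconds
vcpu = 0.5 * seconds             # 216,000 vCPU-seconds
gib = 1.0 * seconds              # 432,000 GiB-seconds
print(billable_consumption(vcpu, gib))  # (36000.0, 72000.0)
```

The billable portions would then be priced at the active or reduced idle rate depending on each replica's state during those seconds.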
## Request charges

In addition to resource consumption, Azure Container Apps also charges based on the number of HTTP requests received by your container app. The first 2 million requests in each subscription per calendar month are free.
data-factory Source Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/source-control.md
It imports the code from live mode into collaboration branch. It considers the c
1. Remove your current Git repository
1. Reconfigure Git with the same settings, but make sure **Import existing Data Factory resources to repository** is selected and choose **Collaboration branch (same branch)**
-1. Create a pull request to merge the changes to the collaboration branch
+1. Create a pull request to merge the changes to the collaboration branch.
+
+> [!NOTE]
+> A pull request is only necessary if you're working in a repository that doesn't allow direct commits. In most organizations, submissions to the repository require review before merging, so creating a pull request is usually the best practice. If no review is required, you can commit changes directly to the collaboration branch instead.
Choose either method appropriately as needed.
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** |Supported |Supported |London |
| **MTN Global Connect** |Supported |Supported |Cape Town, Johannesburg|
| **[National Telecom](https://www.nc.ntplc.co.th/cat/category/264/855/CAT+Direct+Cloud+Connect+for+Microsoft+ExpressRoute?lang=en_EN)** |Supported |Supported |Bangkok |
-| **[Neutrona Networks](https://www.neutrona.com/index.php/azure-expressroute/)** |Supported |Supported |Dallas, Los Angeles, Miami, Sao Paulo, Washington DC |
+| **[Neutrona Networks](https://flo.net/)** |Supported |Supported |Dallas, Los Angeles, Miami, Sao Paulo, Washington DC |
| **[Next Generation Data](https://vantage-dc-cardiff.co.uk/)** |Supported |Supported | Newport(Wales) |
| **[NEXTDC](https://www.nextdc.com/services/axon-ethernet/microsoft-expressroute)** |Supported |Supported | Melbourne, Perth, Sydney, Sydney2 |
| **NL-IX** |Supported |Supported | Amsterdam2 |
firewall Rule Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/rule-processing.md
Previously updated : 11/09/2021 Last updated : 04/15/2022
Here's an example policy:
|Name |Type |Priority |Rules |Inherited from |
|-|-|-|-|-|
|BaseRCG1 |Rule collection group |200 |8 |Parent policy|
-|DNATRc1 |DNAT rule collection | 600 | 7 |Parent policy|
+|DNATRC1 |DNAT rule collection | 600 | 7 |Parent policy|
+|DNATRC3|DNAT rule collection|610|3|Parent policy|
|NetworkRc1 |Network rule collection | 800 | 1 |Parent policy|
|BaseRCG2 |Rule collection group |300 | 3 |Parent policy|
|AppRCG2 |Application rule collection | 1200 |2 |Parent policy|
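One way to reason about the resulting processing order: the firewall evaluates DNAT rules first, then network rules, then application rules, regardless of collection priority; within each rule type, it follows rule collection group priority, then rule collection priority. Here's a simplified Python sketch of that ordering using the example collections above; the tuple layout is my own illustration, not an Azure API.

```python
# Lower rank is processed first: DNAT, then network, then application rules.
TYPE_RANK = {"DNAT": 0, "Network": 1, "Application": 2}

def processing_order(collections):
    """collections: (name, rule_type, group_priority, collection_priority)."""
    ordered = sorted(collections,
                     key=lambda c: (TYPE_RANK[c[1]], c[2], c[3]))
    return [name for name, _, _, _ in ordered]

policy = [
    ("NetworkRc1", "Network", 200, 800),
    ("DNATRC1", "DNAT", 200, 600),
    ("DNATRC3", "DNAT", 200, 610),
    ("AppRCG2", "Application", 300, 1200),
]
print(processing_order(policy))
# ['DNATRC1', 'DNATRC3', 'NetworkRc1', 'AppRCG2']
```

Note that `AppRCG2` runs last even though its collection priority (1200) is compared only against other application rule collections; type precedence dominates group and collection priority.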
SSH connections are denied because a higher priority network rule collection blocks them.
If you change a rule to deny previously allowed traffic, any relevant existing sessions are dropped.
-## 3-way handshake behavior
+## Three-way handshake behavior
-As a stateful service, Azure Firewall completes a TCP 3-way handshake for allowed traffic, from a source to the destination. For example, VNet-A to VNet-B.
+As a stateful service, Azure Firewall completes a TCP three-way handshake for allowed traffic, from a source to the destination. For example, VNet-A to VNet-B.
-Creating an allow rule from VNet-A to VNet-B does not mean that new initiated connections from VNet-B to VNet-A are allowed.
+Creating an allow rule from VNet-A to VNet-B doesn't mean that new initiated connections from VNet-B to VNet-A are allowed.
-As a result, there is no need to create an explicit deny rule from VNet-B to VNet-A. If you create this deny rule, you'll interrupt the 3-way handshake from the initial allow rule from VNet-A to VNet-B.
+As a result, there's no need to create an explicit deny rule from VNet-B to VNet-A. If you create this deny rule, you'll interrupt the three-way handshake from the initial allow rule from VNet-A to VNet-B.
## Next steps
governance Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r4.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 4
description: Details of the NIST SP 800-53 Rev. 4 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 09/17/2021 Last updated : 04/15/2022

# Details of the NIST SP 800-53 Rev. 4 Regulatory Compliance built-in initiative
initiative definition.
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
-|[Diagnostic logs in App Services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb607c5de-e7d9-4eee-9e5c-83f1bcee4fa0) |Audit enabling of diagnostic logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditLoggingMonitoring_Audit.json) |
|[Guest Configuration extension should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. Learn more at [https://aka.ms/gcpol](../concepts/guest-configuration.md). |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVm.json) |
|[Log Analytics agent health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd62cfe2b-3ab0-4d41-980d-76803b58ca65) |Security Center uses the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA). To make sure your virtual machines are successfully monitored, you need to make sure the agent is installed on the virtual machines and properly collects security events to the configured workspace. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ResolveLaHealthIssues.json) |
|[Log Analytics agent should be installed on your Linux Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F842c54e8-c2f9-4d79-ae8d-38d8b8019373) |This policy audits Linux Azure Arc machines if the Log Analytics agent is not installed. |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Arc_Linux_LogAnalytics_Audit.json) |
initiative definition.
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
-|[Diagnostic logs in App Services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb607c5de-e7d9-4eee-9e5c-83f1bcee4fa0) |Audit enabling of diagnostic logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditLoggingMonitoring_Audit.json) |
|[Guest Configuration extension should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. Learn more at [https://aka.ms/gcpol](../concepts/guest-configuration.md). |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVm.json) |
|[Log Analytics agent health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd62cfe2b-3ab0-4d41-980d-76803b58ca65) |Security Center uses the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA). To make sure your virtual machines are successfully monitored, you need to make sure the agent is installed on the virtual machines and properly collects security events to the configured workspace. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ResolveLaHealthIssues.json) |
|[Log Analytics agent should be installed on your Linux Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F842c54e8-c2f9-4d79-ae8d-38d8b8019373) |This policy audits Linux Azure Arc machines if the Log Analytics agent is not installed. |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Arc_Linux_LogAnalytics_Audit.json) |
initiative definition.
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
-|[Diagnostic logs in App Services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb607c5de-e7d9-4eee-9e5c-83f1bcee4fa0) |Audit enabling of diagnostic logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditLoggingMonitoring_Audit.json) |
hdinsight Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/quickstart-bicep.md
+
+ Title: 'Quickstart: Create Apache HBase cluster using Bicep - Azure HDInsight'
+description: This quickstart shows how to use Bicep to create an Apache HBase cluster in Azure HDInsight.
+++++ Last updated : 04/14/2022
+#Customer intent: As a developer new to Apache HBase on Azure, I need to see how to create an HBase cluster.
++
+# Quickstart: Create Apache HBase cluster in Azure HDInsight using Bicep
+
+In this quickstart, you use Bicep to create an [Apache HBase](./apache-hbase-overview.md) cluster in Azure HDInsight. HBase is an open-source, NoSQL database that is built on Apache Hadoop and modeled after [Google BigTable](https://cloud.google.com/bigtable/).
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/hdinsight-hbase-linux/).
++
+Two Azure resources are defined in the Bicep file:
+
+* [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts): create an Azure Storage Account.
+* [Microsoft.HDInsight/cluster](/azure/templates/microsoft.hdinsight/clusters): create an HDInsight cluster.
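As a rough sketch only (not the full quickstart template), the two resources could be declared along these lines. The API versions, symbolic names, and cluster properties below are illustrative assumptions; the full template from Azure Quickstart Templates defines the complete storage, compute, and credential configuration.

```bicep
// Abbreviated, illustrative sketch of the template's two resources.
param clusterName string
param clusterLoginUserName string
@secure()
param clusterLoginPassword string
param location string = resourceGroup().location

resource storage 'Microsoft.Storage/storageAccounts@2021-08-01' = {
  name: 'store${uniqueString(resourceGroup().id)}'
  location: location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
}

resource cluster 'Microsoft.HDInsight/clusters@2021-06-01' = {
  name: clusterName
  location: location
  properties: {
    clusterVersion: '4.0'
    osType: 'Linux'
    clusterDefinition: {
      kind: 'HBASE'
      configurations: {
        gateway: {
          'restAuthCredential.isEnabled': true
          'restAuthCredential.username': clusterLoginUserName
          'restAuthCredential.password': clusterLoginPassword
        }
      }
    }
    // storageProfile, computeProfile (head, worker, and ZooKeeper roles),
    // and SSH credentials are defined in the full template.
  }
}
```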
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters clusterName=<cluster-name> clusterLoginUserName=<cluster-username> sshUserName=<ssh-username>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -clusterName "<cluster-name>" -clusterLoginUserName "<cluster-username>" -sshUserName "<ssh-username>"
+ ```
+
+
+
+ You need to provide values for the parameters:
+
+ * Replace **\<cluster-name\>** with the name of the HDInsight cluster to create.
+ * Replace **\<cluster-username\>** with the credentials used to submit jobs to the cluster and to log in to cluster dashboards.
+ * Replace **\<ssh-username\>** with the credentials used to remotely access the cluster.
+
+ You'll be prompted to enter the following:
+
+ * **clusterLoginPassword**, which must be at least 10 characters long and must contain at least one digit, one uppercase letter, one lowercase letter, and one non-alphanumeric character except single-quote, double-quote, backslash, right-bracket, full-stop. It also must not contain three consecutive characters from the cluster username or SSH username.
+ * **sshPassword**, which must be 6-72 characters long and must contain at least one digit, one uppercase letter, and one lowercase letter. It must not contain any three consecutive characters from the cluster login name.
+
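The password rules above can be approximated with a small local pre-check before you deploy. This is a best-effort sketch: the function name is hypothetical, the handling of the excluded special characters is an assumption, and the service performs the authoritative validation at deployment time.

```python
# Disallowed special characters per the documented rule:
# single-quote, double-quote, backslash, right-bracket, full-stop.
FORBIDDEN_SPECIALS = set("'\"\\].")

def check_cluster_login_password(password, usernames=()):
    """Best-effort local check of the documented clusterLoginPassword rules."""
    if len(password) < 10:
        return False
    if not any(c.isdigit() for c in password):
        return False
    if not any(c.isupper() for c in password):
        return False
    if not any(c.islower() for c in password):
        return False
    # At least one special character that isn't in the disallowed set.
    if not any((not c.isalnum()) and c not in FORBIDDEN_SPECIALS for c in password):
        return False
    # No run of three consecutive characters taken from either user name.
    lowered = password.lower()
    for name in usernames:
        name = name.lower()
        for i in range(len(name) - 2):
            if name[i:i + 3] in lowered:
                return False
    return True
```

For example, `check_cluster_login_password("Passw0rd!xy", ("clusteradmin", "sshuser"))` passes the checks, while a password containing a three-character run from a user name fails.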
+ > [!NOTE]
+ > When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you learned how to create an Apache HBase cluster in HDInsight using Bicep. In the next article, you learn how to query HBase in HDInsight with HBase Shell.
+
+> [!div class="nextstepaction"]
+> [Query Apache HBase in Azure HDInsight with HBase Shell](./query-hbase-with-hbase-shell.md)
hdinsight Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/quickstart-bicep.md
+
+ Title: 'Quickstart: Create Interactive Query cluster using Bicep - Azure HDInsight'
+description: This quickstart shows how to use Bicep to create an Interactive Query cluster in Azure HDInsight.
+++++ Last updated : 04/14/2022
+#Customer intent: As a developer new to Interactive Query on Azure, I need to see how to create an Interactive Query cluster.
++
+# Quickstart: Create Interactive Query cluster in Azure HDInsight using Bicep
+
+In this quickstart, you use Bicep to create an [Interactive Query](./apache-interactive-query-get-started.md) cluster in Azure HDInsight. Interactive Query (also called Apache Hive LLAP, or [Low Latency Analytical Processing](https://cwiki.apache.org/confluence/display/Hive/LLAP)) is an Azure HDInsight [cluster type](../hdinsight-hadoop-provision-linux-clusters.md#cluster-type).
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/hdinsight-interactive-hive/).
++
+Two Azure resources are defined in the Bicep file:
+
+* [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts): create an Azure Storage Account.
+* [Microsoft.HDInsight/cluster](/azure/templates/microsoft.hdinsight/clusters): create an HDInsight cluster.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters clusterName=<cluster-name> clusterLoginUserName=<cluster-username> sshUserName=<ssh-username>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -clusterName "<cluster-name>" -clusterLoginUserName "<cluster-username>" -sshUserName "<ssh-username>"
+ ```
+
+
+
+ You need to provide values for the parameters:
+
+ * Replace **\<cluster-name\>** with the name of the HDInsight cluster to create.
+ * Replace **\<cluster-username\>** with the credentials used to submit jobs to the cluster and to log in to cluster dashboards.
+ * Replace **\<ssh-username\>** with the credentials used to remotely access the cluster. The username cannot be admin.
+
+ You'll also be prompted to enter the following:
+
+ * **clusterLoginPassword**, which must be at least 10 characters long and contain at least one digit, one uppercase letter, one lowercase letter, and one non-alphanumeric character except single-quote, double-quote, backslash, right-bracket, full-stop. It also must not contain three consecutive characters from the cluster username or SSH username.
+ * **sshPassword**, which must be 6-72 characters long and must contain at least one digit, one uppercase letter, and one lowercase letter. It must not contain any three consecutive characters from the cluster login name.
+
+ > [!NOTE]
+ > When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you learned how to create an Interactive Query cluster in HDInsight using Bicep. In the next article, you learn how to use Apache Zeppelin to run Apache Hive queries.
+
+> [!div class="nextstepaction"]
+> [Execute Apache Hive queries in Azure HDInsight with Apache Zeppelin](./hdinsight-connect-hive-zeppelin.md)
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
Previously updated : 03/21/2022 Last updated : 04/14/2022
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Server for Azure. The server is an implementation of the [FHIR](https://hl7.org/fhir) standard. This document provides details about the features and enhancements made to Azure API for FHIR. +
+## March 2022
+
+### **Features**
+
+|Feature &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
+| :-- | : |
+|FHIRPath Patch |This new feature enables you to use the FHIRPath Patch operation on FHIR resources. For more information, see [FHIR REST API capabilities for Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/fhir-rest-api-capabilities.md). |
+
+### **Bug fixes**
+
+|Bug fixes |Related information |
+| :-- | : |
+|Duplicate resources in search with `_include` |Fixed issue where a single resource can be returned twice in a search that has `_include`. For more information, see [PR #2448](https://github.com/microsoft/fhir-server/pull/2448). |
+|PUT creates on versioned update |Fixed issue where creates with PUT resulted in an error when the versioning policy is configured to `versioned-update`. For more information, see [PR #2457](https://github.com/microsoft/fhir-server/pull/2457). |
+|Invalid header handling on versioned update |Fixed issue where invalid `if-match` header would result in an HTTP 500 error. Now an HTTP Bad Request is returned instead. For more information, see [PR #2467](https://github.com/microsoft/fhir-server/pull/2467). |
+ ## February 2022 ### **Features and enhancements**
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
|Bug fixes |Related information | | :-- | : | |Fixed 500 error when `SearchParameter` Code is null |Fixed an issue with `SearchParameter` if it had a null value for Code, the result would be a 500. Now it will result in an `InvalidResourceException` like the other values do. [#2343](https://github.com/microsoft/fhir-server/pull/2343) |
-|Returned `BadRequestException` with valid message when input JSON body is invalid |For invalid JSON body requests, the FHIR server was returning a 500 error. Now we will return a `BadRequestException` with a valid message instead of 500. [#2239](https://github.com/microsoft/fhir-server/pull/2239) |
-|`_sort` can cause `ChainedSearch` to return incorrect results |Previously, the sort options from the chained search's `SearchOption` object was not cleared, causing the sorting options to be passed through to the chained sub-search, which are not valid. This could result in no results when there should be results. This bug is now fixed [#2347](https://github.com/microsoft/fhir-server/pull/2347). It addressed GitHub bug [#2344](https://github.com/microsoft/fhir-server/issues/2344). |
+|Returned `BadRequestException` with valid message when input JSON body is invalid |For invalid JSON body requests, the FHIR server was returning a 500 error. Now we'll return a `BadRequestException` with a valid message instead of 500. [#2239](https://github.com/microsoft/fhir-server/pull/2239) |
+|`_sort` can cause `ChainedSearch` to return incorrect results |Previously, the sort options from the chained search's `SearchOption` object weren't cleared, causing the sorting options to be passed through to the chained subsearch, which aren't valid. This could result in no results when there should be results. This bug is now fixed [#2347](https://github.com/microsoft/fhir-server/pull/2347). It addressed GitHub bug [#2344](https://github.com/microsoft/fhir-server/issues/2344). |
## November 2021
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
| :- | : | |Process Patient-everything links |We've expanded the Patient-everything capabilities to process patient links [#2305](https://github.com/microsoft/fhir-server/pull/2305). For more information, see [Patient-everything in FHIR](../../healthcare-apis/fhir/patient-everything.md#processing-patient-links) documentation. | |Added software name and version to capability statement |In the capability statement, the software name now distinguishes if you're using Azure API for FHIR or Azure Health Data Services. The software version will now specify which open-source [release package](https://github.com/microsoft/fhir-server/releases) is live in the managed service [#2294](https://github.com/microsoft/fhir-server/pull/2294). Addresses: [#1778](https://github.com/microsoft/fhir-server/issues/1778) and [#2241](https://github.com/microsoft/fhir-server/issues/2241) |
-|Log 500's to `RequestMetric` |Previously, 500s or any unknown/unhandled errors were not getting logged in `RequestMetric`. They're now getting logged [#2240](https://github.com/microsoft/fhir-server/pull/2240). For more information, see [Enable diagnostic settings in Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/enable-diagnostic-logging.md) |
+|Log 500's to `RequestMetric` |Previously, 500s or any unknown/unhandled errors weren't getting logged in `RequestMetric`. They're now getting logged [#2240](https://github.com/microsoft/fhir-server/pull/2240). For more information, see [Enable diagnostic settings in Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/enable-diagnostic-logging.md) |
|Compress continuation tokens |In certain instances, the continuation token was too long to be able to follow the [next link](../../healthcare-apis/azure-api-for-fhir/overview-of-search.md#pagination) in searches and would result in a 404. To resolve this, we compressed the continuation token to ensure it stays below the size limit [#2279](https://github.com/microsoft/fhir-server/pull/2279). Addresses issue [#2250](https://github.com/microsoft/fhir-server/issues/2250). | ### **Bug fixes**
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
|Allows for search history bundles with Patch requests. |[#2156](https://github.com/microsoft/fhir-server/pull/2156) | |Enabled JSON patch in bundles using Binary resources. |[#2143](https://github.com/microsoft/fhir-server/pull/2143) |
-|New audit event sub-types |Related information |
+|New audit event subtypes |Related information |
| :-- | : | |Added new audit [OperationName subtypes](././../azure-api-for-fhir/enable-diagnostic-logging.md#audit-log-details).| [#2170](https://github.com/microsoft/fhir-server/pull/2170) |
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Previously updated : 03/21/2022 Last updated : 04/14/2022
Azure Health Data Services is a set of managed API services based on open standards and frameworks for the healthcare industry. They enable you to build scalable and secure healthcare solutions by bringing protected health information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI. This document provides details about the features and enhancements made to Azure Health Data Services including the different service types (FHIR service, DICOM service, and MedTech service) that seamlessly work with one another.
+## March 2022
+
+### Azure Health Data Services
+
+### **Features**
+
+|Feature &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
+| :- | : |
+|Private Link |The Private Link feature is now available. With Private Link, you can access Azure Health Data Services securely from your VNet as a first-party service without having to go through a public Domain Name System (DNS). For more information, see [Configure Private Link for Azure Health Data Services](./../healthcare-apis/healthcare-apis-configure-private-link.md). |
+
+### FHIR service
+
+### **Features**
+
+|Feature | Related information |
+| : | -: |
+|FHIRPath Patch |This new feature enables you to use the FHIRPath Patch operation on FHIR resources. For more information, see [FHIR Rest API capabilities for Azure Health Data Services FHIR service](./../healthcare-apis/fhir/fhir-rest-api-capabilities.md). |
+
+### **Bug fixes**
+
+|Bug fixes |Related information |
+| :-- | : |
+|SQL timeout returns 408 |Previously, a SQL timeout would return a 500. Now a timeout in SQL will return a FHIR OperationOutcome with a 408 status code. For more information, see [PR #2497](https://github.com/microsoft/fhir-server/pull/2497). |
+|Duplicate resources in search with `_include` |Fixed issue where a single resource can be returned twice in a search that has `_include`. For more information, see [PR #2448](https://github.com/microsoft/fhir-server/pull/2448). |
+|PUT creates on versioned update |Fixed issue where creates with PUT resulted in an error when the versioning policy is configured to `versioned-update`. For more information, see [PR #2457](https://github.com/microsoft/fhir-server/pull/2457). |
+|Invalid header handling on versioned update |Fixed issue where invalid `if-match` header would result in an HTTP 500 error. Now an HTTP Bad Request is returned instead. For more information, see [PR #2467](https://github.com/microsoft/fhir-server/pull/2467). |
+
+### MedTech service
+
+### **Features and enhancements**
+
+|Enhancements | Related information |
+| : | -: |
+|Events |The Events feature within Health Data Services is now generally available (GA). The Events feature allows customers to receive notifications and triggers when FHIR observations are created, updated, or deleted. For more information, see [Events message structure](events/events-message-structure.md) and [What are events?](events/events-overview.md). |
+|Events documentation for Azure Health Data Services |Updated docs to allow for better understanding, knowledge, and help for Events as it went GA. Updated troubleshooting for ease of use for the customer. |
+|One touch deploy button for MedTech service launch in the portal |Enables easier deployment and use of MedTech service for customers without the need to go back and forth between pages or interfaces. |
+ ## January 2022 ### **Features and enhancements**
Azure Health Data Services is a set of managed API services based on open standa
|Enhancements | Related information | | : | -: | |Customers can define their own query tags using the Extended Query Tags feature |With the Extended Query Tags feature, customers can now efficiently query non-DICOM metadata for capabilities like multitenancy and cohorts. It's available for all customers in Azure Health Data Services. |+ ## December 2021 ### Azure Health Data Services
Azure Health Data Services is a set of managed API services based on open standa
| :- | -:| |Allows for search history bundles with Patch requests. |[#2156](https://github.com/microsoft/fhir-server/pull/2156) | |Enabled JSON patch in bundles using Binary resources. |[#2143](https://github.com/microsoft/fhir-server/pull/2143) |
-|Added new audit event [OperationName sub-types](./././azure-api-for-fhir/enable-diagnostic-logging.md#audit-log-details)| [#2170](https://github.com/microsoft/fhir-server/pull/2170) |
+|Added new audit event [OperationName subtypes](./././azure-api-for-fhir/enable-diagnostic-logging.md#audit-log-details)| [#2170](https://github.com/microsoft/fhir-server/pull/2170) |
| Running a reindex job | [Reindex improvements](./././fhir/how-to-run-a-reindex.md)| | :- | -:|
iot-central How To Connect Iot Edge Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-iot-edge-transparent-gateway.md
To create a device template for an IoT Edge transparent gateway device:
The following screenshot shows the **Relationships** page for an IoT Edge gateway device with downstream devices that use the **Thermostat** device template: The previous screenshot shows an IoT Edge gateway device template with no modules defined. A transparent gateway doesn't require any modules because the IoT Edge runtime forwards messages from the downstream devices directly to IoT Central. If the gateway itself needs to send telemetry, synchronize properties, or handle commands, you can define these capabilities in the root component or in a module.
To add the devices:
The following screenshot shows the list of devices attached to a gateway on the **Downstream Devices** page: In a transparent gateway, the downstream devices connect to the gateway itself, not to a custom module hosted by the gateway.
When the two virtual machines are deployed and running, verify the IoT Edge gate
1. Open the IoT Edge gateway device and verify the status of the modules on the **Modules** page. If the IoT Edge runtime started successfully, the status of the **$edgeAgent** and **$edgeHub** modules is **Running**:
- :::image type="content" source="media/how-to-connect-iot-edge-transparent-gateway/iot-edge-runtime.png" alt-text="Screenshot showing the $edgeAgent and $edgeHub modules running on the IoT Edge gateway.":::
+ :::image type="content" source="media/how-to-connect-iot-edge-transparent-gateway/iot-edge-runtime.png" alt-text="Screenshot showing the $edgeAgent and $edgeHub modules running on the IoT Edge gateway." lightbox="media/how-to-connect-iot-edge-transparent-gateway/iot-edge-runtime.png":::
> [!TIP] > You may have to wait for several minutes while the virtual machine starts up and the device is provisioned in your IoT Central application.
To run the thermostat simulator on the `leafdevice` virtual machine:
1. To see the telemetry in IoT Central, navigate to the **Overview** page for the **thermostat1** device:
- :::image type="content" source="media/how-to-connect-iot-edge-transparent-gateway/downstream-device-telemetry.png" alt-text="Screenshot showing telemetry from the downstream device.":::
+ :::image type="content" source="media/how-to-connect-iot-edge-transparent-gateway/downstream-device-telemetry.png" alt-text="Screenshot showing telemetry from the downstream device." lightbox="media/how-to-connect-iot-edge-transparent-gateway/downstream-device-telemetry.png":::
On the **About** page you can view property values sent from the downstream device, and on the **Command** page you can call commands on the downstream device.
iot-central Howto Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-configure-rules.md
Conditions are what rules trigger on. You can add multiple conditions to a rule
In the following screenshot, the conditions check when the temperature is greater than 70&deg; F and the humidity is less than 10. When both of these statements are true, the rule evaluates to true and triggers an action.

> [!NOTE]
> Currently, only telemetry conditions are supported.
iot-central Howto Control Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-control-devices-with-rest-api.md
Components let you group and reuse device capabilities. To learn more about comp
Not all device templates use components. The following screenshot shows the device template for a simple [thermostat](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/com/example/thermostat-2.json) where all the capabilities are defined in a single interface called the **Root component**: The following screenshot shows a [temperature controller](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/com/example/temperaturecontroller-2.json) device template that uses components. The temperature controller has two thermostat components and a device information component:
iot-central Howto Create Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-organizations.md
When you give users access to your application, the higher in the hierarchy you
The following screenshot shows an organization hierarchy definition in IoT Central: ## Create a hierarchy To start using organizations, you need to define your organization hierarchy. Each organization in the hierarchy acts as a logical container where you place devices, save dashboards and device groups, and invite users. To create your organizations, go to the **Permissions** section in your IoT Central application, select the **Organizations** tab, and select either **+ New** or use the context menu for an existing organization. To create one or many organizations at a time, select **+ Add another organization**: > [!TIP] > The initial setup of organizations must be done by a member of the **App Administrator** role.
When you create a new device in your application, assign it to an organization i
To assign or reassign an existing device to an organization, select the device in the device list and then select **Organization**: > [!TIP] > You can see which organization a device belongs to in the device list. Use the filter tool in the device list to show devices in a particular organization.
You can assign the same user to multiple organizations. The user can have a diff
| Name | Role | Organization | | - | - | - |
-| user1@contoso.com | Org Administrator | Contoso Inc/Lamna Health |
-| user1@contoso.com | Org Viewer | Contoso Inc/Adatum Solar |
+| user1@contoso.com | Org Administrator | Custom app |
+| user1@contoso.com | Org Viewer | Custom app |
When you invite a new user, you need to share the application URL with them and ask them to sign in. After the user has signed in for the first time, the application appears on the user's [My apps](https://apps.azureiotcentral.com/myapps) page.
You can set an organization as the default organization to use in your applicati
To set the default organization, select **Settings** on the top menu bar: ## Add organizations to an existing application
iot-central Howto Customize Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-customize-ui.md
The following screenshot shows a page using a custom screenshot with the customi
## Create theme
-To create a custom theme, navigate to the **Appearance** page in the **Customization** section under **Settings**:
+To create a custom theme, navigate to the **Appearance** section in the **Customization** page.
![IoT Central themes](./media/howto-customize-ui/themes.png)
A PNG image, no larger than 32 x 32 pixels, with a transparent background. A web
You can change the color of the page header and the color used for accenting buttons and other highlights. Use a six character hex color value in the format `#ff6347`. For more information about **HEX Value** color notation, see [HTML Colors](https://www.w3schools.com/html/html_colors.asp). > [!NOTE]
-> You can always revert back to the default options on the **Customize your application** page.
+> You can always revert to the default options in the **Appearance** section.
### Changes for operators
-If an administrator creates a custom theme, then operators and other users of your application can no longer choose a theme in **Settings**.
+If an administrator creates a custom theme, then operators and other users of your application can no longer choose a theme in **Appearance**.
## Replace help links To provide custom help information to your operators and other users, you can modify the links on the application **Help** menu.
-To modify the help links, navigate to the **Help links** page in the **Customization** section under **Settings**:
+To modify the help links, navigate to the **Help links** section in the **Customization** page.
![Customize IoT Central help links](./media/howto-customize-ui/help-links.png)
You can also add new entries to the help menu and remove default entries:
![Customized IoT Central help](./media/howto-customize-ui/custom-help.png) > [!NOTE]
-> You can always revert back to the default help links on the **Customize help** page.
+> You can always revert to the default help links on the **Customization** page.
## Change application text
-To change text labels in the application, navigate to the **Text** page in the **Customization** section under **Settings**.
+To change text labels in the application, navigate to the **Text** section in the **Customization** page.
On this page, you can customize the text of your application for all supported languages. You can change 'Device' related text to any word you prefer using the text customization file. After you upload the file, the application text automatically appears with the updated words. You can make further customizations by editing and overwriting the customization file. You can repeat the process for any language that the IoT Central UI supports.
The following example shows how to change the word `Device` to `Asset` when you view
:::image type="content" source="media/howto-customize-ui/updated-ui-text.png" alt-text="Screenshot that shows updated text in the U I.":::
-You can reupload the customization file with further changes by selecting the relevant language from the list on the **Text** page in the **Customization** section.
+You can reupload the customization file with further changes by selecting the relevant language from the list on the **Text** section in the **Customization** page.
## Next steps
iot-central Howto Use Location Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-location-data.md
This article shows you how to use location data in an IoT Central application. A
You can use the location data to: * Plot the reported location on a map.
-* Plot the telemetry location history om a map.
+* Plot the telemetry location history on a map.
* Create geofencing rules to notify an operator when a device enters or leaves a specific area. ## Add location capabilities to a device template The following screenshot shows a device template with examples of a device property and telemetry type that use location data. The definitions use the **location** semantic type and the **geolocation** schema type: For reference, the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) definitions for these capabilities look like the following snippet:
You can display location data in multiple places in your IoT Central application
When you create a view for a device, you can choose to plot the location on a map, or show the individual values: You can add map tiles to a dashboard to plot the location of one or more devices. When you add a map tile to show location telemetry, you can plot the location over a time period. The following screenshot shows the location reported by a simulated device over the last 30 minutes: ## Create a geofencing rule You can use location telemetry to create a geofencing rule that generates an alert when a device moves into or out of a rectangular area. The following screenshot shows a rule that uses four conditions to define a rectangular area using latitude and longitude values. The rule generates an email when the device moves into the rectangular area: ## Next steps Now that you've learned how to use properties in your Azure IoT Central application, see: * [Payloads](concepts-telemetry-properties-commands.md)
-* [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md)
+* [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md)
iot-dps How To Manage Enrollments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-manage-enrollments.md
Title: Manage device enrollments for Azure IoT Hub Device Provisioning Service i
description: How to manage device enrollments for your Device Provisioning Service (DPS) in the Azure portal Previously updated : 10/25/2021 Last updated : 03/21/2022
The Azure IoT Device Provisioning Service supports two types of enrollments:
* [Enrollment groups](concepts-service.md#enrollment-group): Used to enroll multiple related devices. * [Individual enrollments](concepts-service.md#individual-enrollment): Used to enroll a single device.
+> [!IMPORTANT]
+> If you have trouble accessing enrollments from the Azure portal, it may be because you have public network access disabled or IP filtering rules configured that block access for the Azure portal. To learn more, see [Disable public network access limitations](public-network-access.md#disable-public-network-access-limitations) and [IP filter rules limitations](iot-dps-ip-filtering.md#ip-filter-rules-limitations).
+ ## Create an enrollment group An enrollment group is an entry for a group of devices that share a common attestation mechanism. We recommend that you use an enrollment group for a large number of devices that share an initial configuration, or for devices that go to the same tenant. Devices that use either [symmetric key](concepts-symmetric-key-attestation.md) or [X.509 certificates](concepts-x509-attestation.md) attestation are supported.
iot-dps How To Troubleshoot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-troubleshoot-dps.md
Previously updated : 02/14/2021 Last updated : 04/15/2022 #Customer intent: As an operator for Azure IoT Hub DPS, I need to know how to find out when devices are disconnecting unexpectedly and troubleshoot and resolve those issues right away. # Troubleshooting with Azure IoT Hub Device Provisioning Service
-Connectivity issues for IoT devices can be difficult to troubleshoot because there are many possible points of failures such as attestation failures, registration failures etc. This article provides guidance on how to detect and troubleshoot device connectivity issues via [Azure Monitor](../azure-monitor/overview.md).
+Connectivity issues for IoT devices can be difficult to troubleshoot because there are many possible points of failure, such as attestation failures and registration failures. This article provides guidance on how to detect and troubleshoot device connectivity issues via Azure Monitor. To learn more about using Azure Monitor with DPS, see [Monitor Device Provisioning Service](monitor-iot-dps.md).
## Using Azure Monitor to view metrics and set up alerts
-The following procedure describes how to view and set up alert on IoT Hub Device Provisioning Service metric.
+To view and set up alerts on IoT Hub Device Provisioning Service metrics:
1. Sign in to the [Azure portal](https://portal.azure.com).
The following procedure describes how to view and set up alert on IoT Hub Device
3. Select **Metrics**.
-4. Select the desired metric.
- <br />Currently there are three metrics for DPS:
-
- | Metric Name | Description |
- |-||
- | Attestation attempts | Number of devices that attempted to authenticate with Device Provisioning Service|
- | Registration attempts | Number of devices that attempted to register to IoT Hub after successful authentication|
- | Device assigned | Number of devices that successfully assigned to IoT Hub|
+4. Select the desired metric. For supported metrics, see [Metrics](monitor-iot-dps-reference.md#metrics).
5. Select desired aggregation method to create a visual view of the metric.
The following procedure describes how to view and set up alert on IoT Hub Device
7. Select **Add condition**, then select the desired metric and threshold by following prompts.
-To learn more, see [alerts in Azure Monitor](../azure-monitor/alerts/alerts-overview.md).
+To learn more about viewing metrics and setting up alerts on your DPS instance, see [Analyzing metrics](monitor-iot-dps.md#analyzing-metrics) and [Alerts](monitor-iot-dps.md#alerts) in Monitor Device Provisioning Service.
-## Using Log Analytic to view and resolve errors
+## Using Log Analytics to view and resolve errors
1. Sign in to the [Azure portal](https://portal.azure.com).
To learn more, see [alerts in Azure Monitor](../azure-monitor/alerts/alerts-over
4. Select **Add diagnostic setting**.
-5. Configure the desired logs to be collected.
-
- | Log Name | Description |
- |-||
- | DeviceOperations | Logs related to device connection events |
- | ServiceOperations | Event logs related to using service SDK (e.g. Creating or updating enrollment groups)|
+5. Configure the desired logs to be collected. For supported categories, see [Resource logs](monitor-iot-dps-reference.md#resource-logs).
6. Tick the box **Send to Log Analytics** ([see pricing](https://azure.microsoft.com/pricing/details/log-analytics/)) and save.
To learn more, see [alerts in Azure Monitor](../azure-monitor/alerts/alerts-over
9. If there are results, look for `OperationName`, `ResultType`, `ResultSignature`, and `ResultDescription` (error message) to get more detail on the error.
## Common error codes
Use this table to understand and resolve common errors.
| Error Code| Description | HTTP Status Code |
Use this table to understand and resolve common errors.
| 412 | The ETag in the request does not match the ETag of the existing resource, as per RFC7232. | 412 Precondition failed | | 429 | Operations are being throttled by the service. For specific service limits, see [IoT Hub Device Provisioning Service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#iot-hub-device-provisioning-service-limits). | 429 Too many requests | | 500 | An internal error occurred. | 500 Internal Server Error |
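As a sketch, the kind of Log Analytics query described in the steps above might look like the following. This assumes resource logs are routed to a Log Analytics workspace, where DPS stores them in the `AzureDiagnostics` table:

```kusto
// Surface failed DPS operations with the fields called out in step 9.
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DEVICES" and ResourceType == "PROVISIONINGSERVICES"
| where ResultType != "Success"
| project TimeGenerated, OperationName, ResultType, ResultSignature, ResultDescription
```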
+## Next steps
+
+- To learn more about using Azure Monitor with DPS, see [Monitor Device Provisioning Service](monitor-iot-dps.md).
+
+- To learn about metrics, logs, and schemas emitted for DPS in Azure Monitor, see [Monitoring Device Provisioning Service data reference](monitor-iot-dps-reference.md).
iot-dps Iot Dps Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/iot-dps-ip-filtering.md
There are two specific use-cases where it is useful to block connections to a DP
* You need to reject traffic from IP addresses that have been identified as suspicious by the DPS administrator.
->[!Note]
->If IP filtering is enabled, you'll no longer be able use the Azure portal to perform service operations (i.e. managing enrollments). To perform service operations using the portal, you'll have to temporarily deactivate IP filtering, complete your work, and then re-enable the IP filtering feature. If you want to use your own clients and avoid the deactivation of the IP filter, you can choose to add your machine's IP address to the `ipFilterRules` and manage the enrollments in the DPS through CLI.
+## IP filter rules limitations
+
+Note the following limitations if IP filtering is enabled:
+
+* You might not be able to use the Azure portal to manage enrollments. If this occurs, you can add the IP address of one or more machines to the `ipFilterRules` and manage enrollments in the DPS instance from those machines with Azure CLI, PowerShell, or service APIs.
+
+ This scenario is most likely to happen when you want to use IP filtering to allow access only to selected IP addresses. In this case, you configure rules to enable certain addresses or address ranges and a default rule that blocks all other addresses (0.0.0.0/0). This default rule will block Azure portal from performing operations like managing enrollments on the DPS instance. For more information, see [IP filter rule evaluation](iot-dps-ip-filtering.md#ip-filter-rule-evaluation) later in this article.
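For example, once a machine's IP address is allowed by the filter rules, you can manage enrollments from that machine with the Azure CLI. The following is a sketch; it assumes the `azure-iot` extension is installed, and the instance and resource group names are placeholders:

```azurecli
# One-time install of the IoT extension for the Azure CLI.
az extension add --name azure-iot

# List individual enrollments and enrollment groups on the DPS instance.
az iot dps enrollment list --dps-name my-dps-instance --resource-group my-resource-group
az iot dps enrollment-group list --dps-name my-dps-instance --resource-group my-resource-group
```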
## How filter rules are applied
iot-dps Monitor Iot Dps Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/monitor-iot-dps-reference.md
+
+ Title: Monitoring Azure IoT Hub Device Provisioning Service data reference
+description: Important reference material needed when you monitor Azure IoT Hub Device Provisioning Service
+ Last updated : 04/15/2022
+# Monitoring Azure IoT Hub Device Provisioning Service data reference
+
+See [Monitoring IoT Hub Device Provisioning Service](monitor-iot-dps.md) for details on collecting and analyzing monitoring data for Azure IoT Hub Device Provisioning Service (DPS).
+
+## Metrics
+
+This section lists all the platform metrics collected automatically for DPS.
+
+Resource Provider and Type: [Microsoft.Devices/provisioningServices](/azure/azure-monitor/platform/metrics-supported#microsoftdevicesprovisioningservices).
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+|-|-|-|-|-|-|-|
+|AttestationAttempts|Yes|Attestation attempts|Count|Total|Number of device attestations attempted|ProvisioningServiceName, Status, Protocol|
+|DeviceAssignments|Yes|Devices assigned|Count|Total|Number of devices assigned to an IoT hub|ProvisioningServiceName, IotHubName|
+|RegistrationAttempts|Yes|Registration attempts|Count|Total|Number of device registrations attempted|ProvisioningServiceName, IotHubName, Status|
+
+For more information, see a list of [all platform metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
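If you export these metrics to a Log Analytics workspace through a diagnostic setting, they land in the `AzureMetrics` table, where you can query them. For example, a sketch that totals attestation attempts per hour:

```kusto
// Hourly total of attestation attempts, assuming metrics are exported to Log Analytics.
AzureMetrics
| where MetricName == "AttestationAttempts"
| summarize TotalAttempts = sum(Total) by bin(TimeGenerated, 1h)
| order by TimeGenerated asc
```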
+
+## Metric dimensions
+
+DPS has the following dimensions associated with its metrics.
+
+| Dimension Name | Description |
+| - | -- |
+| IotHubName | The name of the target IoT hub. |
+| Protocol | The device or service protocol used. |
+| ProvisioningServiceName | The name of the DPS instance. |
+| Status | The status of the operation. |
+
+For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
+
+## Resource logs
+
+This section lists the types of resource logs you can collect for DPS.
+
+Resource Provider and Type: [Microsoft.Devices/provisioningServices](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdevicesprovisioningservices).
+
+| Category | Description |
+|:--|:--|
+| DeviceOperations | Logs related to device attestation events. See device APIs listed in [Billable service operations and pricing](about-iot-dps.md#billable-service-operations-and-pricing). |
+| ServiceOperations | Logs related to DPS service events. See DPS service APIs listed in [Billable service operations and pricing](about-iot-dps.md#billable-service-operations-and-pricing). |
+
+For reference, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema).
+
+DPS uses the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table to store resource log information. The following columns are relevant.
+
+| Property | Data type | Description |
+|:--|:--|:--|
+| ApplicationId | GUID | Application ID used in bearer authorization. |
+| CallerIPAddress | String | A masked source IP address for the event. |
+| Category | String | Type of operation, either **ServiceOperations** or **DeviceOperations**. |
+| CorrelationId | GUID | Customer provided unique identifier for the event. |
+| DurationMs | String | How long it took to perform the event in milliseconds. |
+| Level | Int | The logging severity of the event. For example, Information or Error. |
+| OperationName | String | The type of action performed during the event. For example: Query, Get, Upsert, and so on. |
+| OperationVersion | String | The API Version used during the event. |
+| Resource | String | The name of the resource where the event took place. For example, "MYEXAMPLEDPS". |
+| ResourceGroup | String | The name of the resource group where the resource is located. |
+| ResourceId | String | The Azure Resource Manager Resource ID for the resource where the event took place. |
+| ResourceProvider | String | The resource provider for the event. For example, "MICROSOFT.DEVICES". |
+| ResourceType | String | The resource type for the event. For example, "PROVISIONINGSERVICES". |
+| ResultDescription | String | Error details for the event if unsuccessful. |
+| ResultSignature | String | HTTP status code for the event if unsuccessful. |
+| ResultType | String | Outcome of the event: Success, Failure, ClientError, and so on. |
+| SubscriptionId | GUID | The subscription ID of the Azure subscription where the resource is located. |
+| TenantId | GUID | The tenant ID for the Azure tenant where the resource is located. |
+| TimeGenerated | DateTime | The date and time that this event occurred, in UTC. |
+| location_s | String | The Azure region where the event took place. |
+| properties_s | JSON | Additional information details for the event. |
+
+### DeviceOperations
+
+The following JSON is an example of a successful attestation attempt from a device. The registration ID for the device is identified in the `properties_s` property.
+
+```json
+ {
+ "CallerIPAddress": "24.18.226.XXX",
+ "Category": "DeviceOperations",
+ "CorrelationId": "68952383-80c0-436f-a2e3-f8ae9a41c69d",
+ "DurationMs": "226",
+ "Level": "Information",
+ "OperationName": "AttestationAttempt",
+ "OperationVersion": "March2019",
+ "Resource": "MYEXAMPLEDPS",
+ "ResourceGroup": "MYRESOURCEGROUP",
+ "ResourceId": "/SUBSCRIPTIONS/747F1067-xxx-xxx-xxxx-9DEAA894152F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DEVICES/PROVISIONINGSERVICES/MYEXAMPLEDPS",
+ "ResourceProvider": "MICROSOFT.DEVICES",
+ "ResourceType": "PROVISIONINGSERVICES",
+ "ResultDescription": "",
+ "ResultSignature": "",
+ "ResultType": "Success",
+ "SourceSystem": "Azure",
+ "SubscriptionId": "747F1067-xxx-xxx-xxxx-9DEAA894152F",
+ "TenantId": "37dcb621-xxxx-xxxx-xxxx-e8c8addbc4e5",
+ "TimeGenerated": "2022-04-02T00:05:51Z",
+ "Type": "AzureDiagnostics",
+ "_ResourceId": "/subscriptions/747F1067-xxx-xxx-xxxx-9DEAA894152F/resourcegroups/myresourcegroup/providers/microsoft.devices/provisioningservices/myexampledps",
+ "location_s": "centralus",
+ "properties_s": "{\"id\":\"my-device-1\",\"type\":\"Registration\",\"protocol\":\"Mqtt\"}"
+ }
+
+```
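Because the registration ID is embedded in the `properties_s` JSON string, a query can extract it with `parse_json`. The following is a sketch against the `AzureDiagnostics` table:

```kusto
// Pull the device registration ID out of the properties_s payload.
AzureDiagnostics
| where Category == "DeviceOperations"
| extend RegistrationId = tostring(parse_json(properties_s).id)
| project TimeGenerated, OperationName, ResultType, RegistrationId
```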
+
+### ServiceOperations
+
+The following JSON is an example of a successful add (`Upsert`) individual enrollment operation. The registration ID for the enrollment and the type of enrollment are identified in the `properties_s` property.
+
+```json
+ {
+ "CallerIPAddress": "13.91.244.XXX",
+ "Category": "ServiceOperations",
+ "CorrelationId": "23bd419d-d294-452b-9b1b-520afef5ef52",
+ "DurationMs": "98",
+ "Level": "Information",
+ "OperationName": "Upsert",
+ "OperationVersion": "October2021",
+ "Resource": "MYEXAMPLEDPS",
+ "ResourceGroup": "MYRESOURCEGROUP",
+ "ResourceId": "/SUBSCRIPTIONS/747F1067-xxxx-xxxx-xxxx-9DEAA894152F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DEVICES/PROVISIONINGSERVICES/MYEXAMPLEDPS",
+ "ResourceProvider": "MICROSOFT.DEVICES",
+ "ResourceType": "PROVISIONINGSERVICES",
+ "ResultDescription": "",
+ "ResultSignature": "",
+ "ResultType": "Success",
+ "SourceSystem": "Azure",
+ "SubscriptionId": "747f1067-xxxx-xxxx-xxxx-9deaa894152f",
+ "TenantId": "37dcb621-xxxx-xxxx-xxxx-e8c8addbc4e5",
+ "TimeGenerated": "2022-04-01T00:52:00Z",
+ "Type": "AzureDiagnostics",
+ "_ResourceId": "/subscriptions/747F1067-xxxx-xxxx-xxxx-9DEAA894152F/resourcegroups/myresourcegroup/providers/microsoft.devices/provisioningservices/myexampledps",
+ "location_s": "centralus",
+ "properties_s": "{\"id\":\"my-device-1\",\"type\":\"IndividualEnrollment\",\"protocol\":\"Http\"}"
+ }
+```
+
+## Azure Monitor Logs tables
+
+This section refers to all of the Azure Monitor Logs Kusto tables relevant to DPS and available for query by Log Analytics. For a list of these tables and links to more information for the DPS resource type, see [Device Provisioning Services](/azure/azure-monitor/reference/tables/tables-resourcetype#device-provisioning-services) in the Azure Monitor Logs table reference.
+
+For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype).
+
+## Activity log
+
+For more information on the schema of Activity Log entries, see [Activity Log schema](/azure/azure-monitor/essentials/activity-log-schema).
+
+## See also
+
+- See [Monitoring Azure IoT Hub Device Provisioning Service](monitor-iot-dps.md) for a description of monitoring Azure IoT Hub Device Provisioning Service.
+
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resources) for details on monitoring Azure resources.
iot-dps Monitor Iot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/monitor-iot-dps.md
+
+ Title: Monitoring Azure IoT Hub Device Provisioning Service
+description: Start here to learn how to monitor Azure IoT Hub Device Provisioning Service
+ Last updated : 04/15/2022
+# Monitoring Azure IoT Hub Device Provisioning Service
+
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
+
+This article describes the monitoring data generated by Azure IoT Hub Device Provisioning Service (DPS). DPS uses [Azure Monitor](/azure/azure-monitor/overview). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
+
+## Monitoring data
+
+DPS collects the same kinds of monitoring data as other Azure resources, as described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-azure-resources).
+
+See [Monitoring Azure IoT Hub Device Provisioning Service data reference](monitor-iot-dps-reference.md) for detailed information on the metrics and logs created by DPS.
+
+## Collection and routing
+
+Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more locations.
+
+In Azure portal, you can select **Diagnostic settings** under **Monitoring** on the left-pane of your DPS instance followed by **Add diagnostic setting** to create diagnostic settings scoped to the logs and platform metrics emitted by your instance.
+
+The following screenshot shows a diagnostic setting for routing to a Log Analytics workspace.
++
+See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for DPS are listed in [Resource logs in the Azure IoT Hub Device Provisioning Service monitoring data reference](monitor-iot-dps-reference.md#resource-logs).
+
+The metrics and logs you can collect are discussed in the following sections.
+
+## Analyzing metrics
+
+You can analyze metrics for DPS with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started) for details on using this tool.
+
+In Azure portal, you can select **Metrics** under **Monitoring** on the left-pane of your DPS instance to open metrics explorer scoped, by default, to the platform metrics emitted by your instance:
++
+For a list of the platform metrics collected for DPS, see [Metrics in the Monitoring Azure IoT Hub Device Provisioning Service data reference](monitor-iot-dps-reference.md#metrics).
+
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/essentials/metrics-supported).
+
+## Analyzing logs
+
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
+
+To route data to Azure Monitor Logs, you must create a diagnostic setting to send resource logs or platform metrics to a Log Analytics workspace. To learn more, see [Collection and routing](#collection-and-routing).
+
+In Azure portal, you can select **Logs** under **Monitoring** on the left-pane of your DPS instance to perform Log Analytics queries scoped, by default, to the logs and metrics collected in Azure Monitor Logs for your instance.
++
+> [!IMPORTANT]
+> When you select **Logs** from the DPS menu, Log Analytics is opened with the query scope set to the current DPS instance. This means that log queries will only include data from that resource. If you want to run a query that includes data from other DPS instances or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/logs/scope) for details.
+
+Run queries against the **AzureDiagnostics** table to see the resource logs collected for the diagnostic settings you've created for your DPS instance.
+
+```kusto
+AzureDiagnostics
+```
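You can narrow the results with additional filters. For example, a sketch that summarizes operations on the instance from the last day by operation and outcome:

```kusto
// Count of DPS operations over the last day, grouped by operation and result.
AzureDiagnostics
| where TimeGenerated > ago(1d)
| where ResourceProvider == "MICROSOFT.DEVICES" and ResourceType == "PROVISIONINGSERVICES"
| summarize Count = count() by OperationName, ResultType
```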
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema). The schema for DPS resource logs is found in [Resource logs in the Monitoring Azure IoT Hub Device Provisioning Service data reference](monitor-iot-dps-reference.md#resource-logs).
+
+The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+
+For a list of the types of resource logs collected for DPS, see [Resource logs in the Monitoring Azure IoT Hub Device Provisioning Service data reference](monitor-iot-dps-reference.md#resource-logs).
+
+For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Azure Monitor Logs tables in the Monitoring Azure IoT Hub Device Provisioning Service data reference](monitor-iot-dps-reference.md#azure-monitor-logs-tables).
+
+## Alerts
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview), [logs](/azure/azure-monitor/alerts/alerts-unified-log), and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts). Different types of alerts have benefits and drawbacks.
+
+## Next steps
+
+- See [Monitoring Azure IoT Hub Device Provisioning Service data reference](monitor-iot-dps-reference.md) for a reference of the metrics, logs, and other important values created by DPS.
+
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
iot-dps Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/public-network-access.md
Previously updated : 10/18/2021 Last updated : 03/21/2022 # Manage public network access for your IoT Device Provisioning Service
To turn on public network access:
1. Select **All networks**.
2. Select **Save**.
-## Access the DPS after disabling the public network access
+## Disable public network access limitations
-After public network access is disabled, the DPS instance is accessible only through [its VNet private endpoint using Azure private link](virtual-network-support.md). This restriction includes accessing through the Azure portal.
+Note the following limitations when public network access is disabled:
+
+- The DPS instance is accessible only through [its VNET private endpoint using Azure private link](virtual-network-support.md).
+
+- You can no longer use the Azure portal to manage enrollments for the DPS instance. Instead, you can manage enrollments using the Azure CLI, PowerShell, or service APIs from machines inside the virtual network(s) configured on the DPS instance. To learn more, see [Private endpoint limitations](virtual-network-support.md#private-endpoint-limitations).
## DPS endpoint, IP address, and ports after disabling public network access
iot-dps Quick Create Simulated Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-symm-key.md
This quickstart demonstrates a solution for a Windows-based workstation. However
::: zone pivot="programming-language-csharp"
-* Install [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) or later on your Windows-based machine. You can use the following command to check your version.
+* Install [.NET SDK 6.0](https://dotnet.microsoft.com/download) or later on your Windows-based machine. You can use the following command to check your version.
```cmd
dotnet --info
```
iot-dps Quick Create Simulated Device Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-tpm.md
The following prerequisites are for a Windows development environment. For Linux
* A TPM 2.0 hardware security module on your Windows-based machine.
-* Install [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) or later on your Windows-based machine. You can use the following command to check your version.
+* Install [.NET SDK 6.0](https://dotnet.microsoft.com/download) or later on your Windows-based machine. You can use the following command to check your version.
```bash
dotnet --info
```
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
The following prerequisites are for a Windows development environment. For Linux
::: zone pivot="programming-language-csharp"
-* Install [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) or later on your Windows-based machine. You can use the following command to check your version.
+* Install [.NET SDK 6.0](https://dotnet.microsoft.com/download) or later on your Windows-based machine. You can use the following command to check your version.
```bash
dotnet --info
```
iot-dps Virtual Network Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/virtual-network-support.md
- Last updated 10/06/2021
+ Last updated 03/21/2022
Note the following current limitations for DPS when using private endpoints:
* Current DPS VNET support is for data ingress into DPS only. Data egress, which is the traffic from DPS to IoT Hub, uses an internal service-to-service mechanism rather than a dedicated VNET. Support for full VNET-based egress lockdown between DPS and IoT Hub is not currently available.
-* The lowest latency allocation policy is used to assign a device to the IoT hub with the lowest latency. This allocation policy is not reliable in a virtual network environment.
+* The lowest latency allocation policy is used to assign a device to the IoT hub with the lowest latency. This allocation policy is not reliable in a virtual network environment.
+
+* Enabling one or more private endpoints typically involves [disabling public access](public-network-access.md) to your DPS instance. This means that you can no longer use the Azure portal to manage enrollments. Instead, you can manage enrollments using the Azure CLI, PowerShell, or service APIs from machines inside the VNET(s)/private endpoint(s) configured on the DPS instance.
>[!NOTE]
>**Data residency consideration:**
iot-edge Iot Edge Limits And Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-limits-and-restrictions.md
IoT Hub only supports file upload APIs for device identities, not module identit
For more information on uploading files with IoT Hub, see [Upload files with IoT Hub](../iot-hub/iot-hub-devguide-file-upload.md).
+<!-- 1.1 -->
+### AMQP transport
+When you use Node.js to send device-to-cloud messages over the AMQP protocol to an IoT Edge runtime, messages stop sending after 2047 messages. No error is thrown, and the messages eventually start sending again; then the cycle repeats. If the client connects directly to Azure IoT Hub, there's no issue with sending messages. This issue is fixed in IoT Edge 1.2.
+
+<!-- end 1.1 -->
+
## Next steps

For more information, see [IoT Hub other limits](../iot-hub/iot-hub-devguide-quotas-throttling.md#other-limits).
iot-hub-device-update Device Update Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-error-codes.md
The following table lists error codes pertaining to the content service componen
| "UpdateName" | Cannot import a new update name for the specified provider. | You have reached a [limit](device-update-limits.md) on the number of different __Names__ allowed under one Provider in your instance of Device Update for IoT Hub. Delete some updates from your instance and try again. | | "UpdateVersion" | Cannot import a new update version for the specified provider and name. | You have reached a [limit](device-update-limits.md) on the number of different __Versions__ allowed under one Provider and Name in your instance of Device Update for IoT Hub. Delete some updates with that Name from your instance and try again. | | "UpdateProviderCompatibility" | Cannot import additional update provider with the specified compatibility. | When defining device manufacturer and device model compatibility properties in an import manifest, keep in mind that Device Update for IoT Hub supports a single Provider and Name combination for a given manufacturer/model. This means if you try to use the same manufacturer/model compatibility properties with more than one Provider/Name combination, you will see these errors. To resolve this, make sure that all updates for a given device (as defined by manufacturer/model) use the same Provider and Name. While not required, you may want to consider making the Provider the same as the manufacturer and the Name the same as the model, just for simplicity. |
-| "UpdateNameCompatibility" | Cannot import additional update name with the specified compatibility. | Same as for UpdateProviderCompatibility.ContentLimitNamespaceCompatibility. |
-| "UpdateVersionCompatibility" | Cannot import additional update version with the specified compatibility. | Same as for UpdateProviderCompatibility.ContentLimitNamespaceCompatibility. |
+| "UpdateNameCompatibility" | Cannot import additional update name with the specified compatibility. | When defining device manufacturer and device model compatibility properties in an import manifest, keep in mind that Device Update for IoT Hub supports a single Provider and Name combination for a given manufacturer/model. This means if you try to use the same manufacturer/model compatibility properties with more than one Provider/Name combination, you will see these errors. To resolve this, make sure that all updates for a given device (as defined by manufacturer/model) use the same Provider and Name. While not required, you may want to consider making the Provider the same as the manufacturer and the Name the same as the model, just for simplicity. |
+| "UpdateVersionCompatibility" | Cannot import additional update version with the specified compatibility. | When defining device manufacturer and device model compatibility properties in an import manifest, keep in mind that Device Update for IoT Hub supports a single Provider and Name combination for a given manufacturer/model. This means if you try to use the same manufacturer/model compatibility properties with more than one Provider/Name combination, you will see these errors. To resolve this, make sure that all updates for a given device (as defined by manufacturer/model) use the same Provider and Name. While not required, you may want to consider making the Provider the same as the manufacturer and the Name the same as the model, just for simplicity. |
| "CannotProcessUpdateFile" | Error processing source file. | | | "ContentFileCannotDownload" | Cannot download source file. | Check to make sure the URL for the update file(s) is still valid. | | "SourceFileMalwareDetected" | A known malware signature was detected in a file being imported. | Content imported into Device Update for IoT Hub is scanned for malware by several different mechanisms. If a known malware signature is identified, the import will fail and a unique error message will be returned. The error message contains the description of the malware signature, and a file hash for each file where the signature was detected. You can use the file hash to find the exact file being flagged, and use the description of the malware signature to check that file for malware. <br><br>Once you have removed the malware from any files being imported, you can start the import process again. |
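The "SourceFileMalwareDetected" remediation above relies on matching the file hash reported in the error message to a local file. A minimal sketch of computing that hash, assuming the reported value is a SHA-256 hex digest (the helper name is illustrative):

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 hex digest of a file, streaming it in chunks
    so large update payloads don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Compare the result against the hash in the error message to identify exactly which file was flagged before re-importing.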
iot-hub Iot Hub Create Through Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-through-portal.md
You can change the settings of an existing IoT hub after it's created from the I
**Pricing and scale**: You can use this property to migrate to a different tier or set the number of IoT Hub units.
-**Operations monitoring**: Turn the different monitoring categories on or off, such as logging for events related to device-to-cloud messages or cloud-to-device messages.
- **IP Filter**: Specify a range of IP addresses that will be accepted or rejected by the IoT hub. **Properties**: Provides the list of properties that you can copy and use elsewhere, such as the resource ID, resource group, location, and so on.
iot-hub Iot Hub Devguide Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-endpoints.md
The following list describes the endpoints:
* *Receive file notifications*. This messaging endpoint allows you to receive notifications of when your devices successfully upload a file. * *Direct method invocation*. This endpoint allows a back-end service to invoke a [direct method](iot-hub-devguide-direct-methods.md) on a device.
-
- * *Receive operations monitoring events*. This endpoint allows you to receive operations monitoring events if your IoT hub has been configured to emit them. For more information, see [IoT Hub operations monitoring](iot-hub-operations-monitoring.md).
The [Azure IoT SDKs](iot-hub-devguide-sdks.md) article describes the various ways to access these endpoints.
iot-hub Iot Hub Ha Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-ha-dr.md
Both these failover options offer the following recovery point objectives (RPOs)
| Cloud-to-device messages<sup>1</sup> |0-5 mins data loss | | Parent<sup>1</sup> and device jobs |0-5 mins data loss | | Device-to-cloud messages |All unread messages are lost |
-| Operations monitoring messages |All unread messages are lost |
| Cloud-to-device feedback messages |All unread messages are lost | <sup>1</sup>Cloud-to-device messages and parent jobs do not get recovered as a part of manual failover.
iot-hub Iot Hub Migrate To Diagnostics Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-migrate-to-diagnostics-settings.md
- Title: Migrate Azure IoT Hub operations monitoring to IoT Hub resource logs in Azure Monitor | Microsoft Docs
-description: How to update Azure IoT Hub to use Azure Monitor instead of operations monitoring to monitor the status of operations on your IoT hub in real time.
----- Previously updated : 03/11/2019---
-# Migrate your IoT Hub from operations monitoring to Azure Monitor resource logs
-
-Customers using [operations monitoring](iot-hub-operations-monitoring.md) to track the status of operations in IoT Hub can migrate that workflow to [Azure Monitor resource logs](../azure-monitor/essentials/platform-logs-overview.md), a feature of Azure Monitor. Resource logs supply resource-level diagnostic information for many Azure services.
-
->[!IMPORTANT]
->**IoT Hub operations monitoring is retired and was removed from IoT Hub on March 10, 2019.** Accordingly, this article is no longer being updated. IoT Hub operations monitoring was replaced by Azure Monitor. To learn about monitoring the operations and health of IoT Hub with Azure Monitor, see [Monitor IoT Hub](monitor-iot-hub.md).
-
-This article provides steps to move your workloads from operations monitoring to Azure Monitor resource logs.
-
-## Update IoT Hub
-
-To update your IoT Hub in the Azure portal, first create a diagnostic setting, then turn off operations monitoring.
-
-### Create a diagnostic setting
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your IoT hub.
-
-1. On the left pane, under **Monitoring**, select **Diagnostics settings**. Then select **Add diagnostic setting**.
-
- :::image type="content" source="media/iot-hub-migrate-to-diagnostics-settings/open-diagnostic-settings.png" alt-text="Screenshot that highlights Diagnostic settings in the Monitoring section.":::
-
-1. On the **Diagnostic setting** pane, give the diagnostic setting a name.
-
-1. Under **Category details**, select the categories for the operations you want to monitor. For more information about the categories of operations available with IoT Hub, see [Resource logs](monitor-iot-hub-reference.md#resource-logs).
-
-1. Under **Destination details**, choose where you want to send the logs. You can select any combination of these destinations:
-
- * Archive to a storage account
- * Stream to an event hub
- * Send to Azure Monitor Logs via a Log Analytics workspace
-
- The following screenshot shows a diagnostic setting that routes operations in the Connections and Device telemetry categories to a Log Analytics workspace:
-
- :::image type="content" source="media/iot-hub-migrate-to-diagnostics-settings/add-diagnostic-setting.png" alt-text="Screenshot showing a completed diagnostic setting.":::
-
-1. Select **Save** to save the settings.
-
-New settings take effect in about 10 minutes. After that, logs appear in the configured destination. For more information about configuring diagnostics, see [Collect and consume log data from your Azure resources](../azure-monitor/essentials/platform-logs-overview.md).
-
-For more detailed information about how to create diagnostic settings, including with PowerShell and the Azure CLI, see [Diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md) in the Azure Monitor documentation.
-
-### Turn off operations monitoring
-
-> [!NOTE]
-> As of March 11, 2019, the operations monitoring feature is removed from IoT Hub's Azure portal interface. The steps below no longer apply. To migrate, make sure that the correct categories are routed to a destination with an Azure Monitor diagnostic setting above.
-
-Once you test the new diagnostics settings in your workflow, you can turn off the operations monitoring feature.
-
-1. In your IoT Hub menu, select **Operations monitoring**.
-
-2. Under each monitoring category, select **None**.
-
-3. Save the operations monitoring changes.
-
-## Update applications that use operations monitoring
-
-The schemas for operations monitoring and resource logs vary slightly. It's important that you update the applications that use operations monitoring today to map to the schema used by resource logs.
-
-Also, IoT Hub resource logs offers five new categories for tracking. After you update applications for the existing schema, add the new categories as well:
-
-* Cloud-to-device twin operations
-* Device-to-cloud twin operations
-* Twin queries
-* Jobs operations
-* Direct Methods
-
-For the specific schema structures, see [Resource logs](monitor-iot-hub-reference.md#resource-logs).
-
-## Monitoring device connect and disconnect events with low latency
-
-To monitor device connect and disconnect events in production, we recommend subscribing to the [**device disconnected** event](iot-hub-event-grid.md#event-types) on Event Grid to get alerts and monitor the device connection state. Use this [tutorial](iot-hub-how-to-order-connection-state-events.md) to learn how to integrate Device Connected and Device Disconnected events from IoT Hub in your IoT solution.
-
-## Next steps
-
-[Monitor IoT Hub](monitor-iot-hub.md)
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-support.md
If a device cannot use the device SDKs, it can still connect to the public devic
`contoso.azure-devices.net/MyDevice01/?api-version=2021-04-12` We strongly recommend including api-version in the field. Otherwise, you might see unexpected behavior.
-
+ * For the **Password** field, use a SAS token. The format of the SAS token is the same as for both the HTTPS and AMQP protocols: `SharedAccessSignature sig={signature-string}&se={expiry}&sr={URL-encoded-resourceURI}`
If a device cannot use the device SDKs, it can still connect to the public devic
`SharedAccessSignature sr={your hub name}.azure-devices.net%2Fdevices%2FMyDevice01%2Fapi-version%3D2016-11-14&sig=vSgHBMUG.....Ntg%3d&se=1456481802`
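A token in the format above can be generated from a device key. The following is a minimal Python sketch of the documented signing scheme (HMAC-SHA256 over the URL-encoded resource URI and expiry, keyed with the base64-decoded device key); the helper name and the example hub and device values are illustrative:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri: str, device_key_b64: str, ttl_seconds: int = 3600) -> str:
    """Build a SAS token of the form:
    SharedAccessSignature sr={URL-encoded-resourceURI}&sig={signature}&se={expiry}"""
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    # Sign "{URL-encoded resource URI}\n{expiry}" with the base64-decoded key.
    to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    key = base64.b64decode(device_key_b64)
    signature = base64.b64encode(hmac.new(key, to_sign, hashlib.sha256).digest())
    sig = urllib.parse.quote_plus(signature)
    return f"SharedAccessSignature sr={encoded_uri}&sig={sig}&se={expiry}"
```

Real device keys come from the device's identity registry entry; set the expiry far enough ahead to cover the expected connection lifetime.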
-For MQTT connect and disconnect packets, IoT Hub issues an event on the **Operations Monitoring** channel. This event has additional information that can help you to troubleshoot connectivity issues.
- The device app can specify a **Will** message in the **CONNECT** packet. The device app should use `devices/{device_id}/messages/events/` or `devices/{device_id}/messages/events/{property_bag}` as the **Will** topic name to define **Will** messages to be forwarded as a telemetry message. In this case, if the network connection is closed, but a **DISCONNECT** packet was not previously received from the device, then IoT Hub sends the **Will** message supplied in the **CONNECT** packet to the telemetry channel. The telemetry channel can be either the default **Events** endpoint or a custom endpoint defined by IoT Hub routing. The message has the **iothub-MessageType** property with a value of **Will** assigned to it. ## Using the MQTT protocol directly (as a module)
iot-hub Iot Hub Operations Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-operations-monitoring.md
- Title: Azure IoT Hub operations monitoring (deprecated) | Microsoft Docs
-description: How to use Azure IoT Hub operations monitoring to monitor the status of operations on your IoT hub in real time.
----- Previously updated : 03/11/2019----
-# IoT Hub operations monitoring (retired)
-
-IoT Hub operations monitoring enables you to monitor the status of operations on your IoT hub in real time. IoT Hub tracks events across several categories of operations. You can opt into sending events from one or more categories to an endpoint of your IoT hub for processing. You can monitor the data for errors or set up more complex processing based on data patterns.
-
->[!IMPORTANT]
->**IoT Hub operations monitoring is retired and was removed from IoT Hub on March 10, 2019.** Accordingly, this article is no longer being updated. IoT Hub operations monitoring was replaced by Azure Monitor. To learn about monitoring the operations and health of IoT Hub with Azure Monitor, see [Monitor IoT Hub](monitor-iot-hub.md).
-
-IoT Hub monitors six categories of events:
-
-* Device identity operations
-* Device telemetry
-* Cloud-to-device messages
-* Connections
-* File uploads
-* Message routing
-
-> [!IMPORTANT]
-> IoT Hub operations monitoring does not guarantee reliable or ordered delivery of events. Depending on IoT Hub underlying infrastructure, some events might be lost or delivered out of order. Use operations monitoring to generate alerts based on error signals such as failed connection attempts, or high-frequency disconnections for specific devices. You should not rely on operations monitoring events to create a consistent store for device state, e.g. a store tracking connected or disconnected state of a device.
-
-## How to enable operations monitoring
-
-1. Create an IoT hub. You can find instructions on how to create an IoT hub in the [Get Started](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) guide.
-
-2. Open the blade of your IoT hub. From there, click **Operations monitoring**.
-
- ![Access operations monitoring configuration in the portal](./media/iot-hub-operations-monitoring/enable-OM-1.png)
-
-3. Select the monitoring categories you wish to monitor, and then click **Save**. The events are available for reading from the Event Hub-compatible endpoint listed in **Monitoring settings**. The IoT Hub endpoint is called `messages/operationsmonitoringevents`.
-
- ![Configure operations monitoring on your IoT hub](./media/iot-hub-operations-monitoring/enable-OM-2.png)
-
-> [!NOTE]
-> Selecting **Verbose** monitoring for the **Connections** category causes IoT Hub to generate additional diagnostics messages. For all other categories, the **Verbose** setting changes the quantity of information IoT Hub includes in each error message.
-
-## Event categories and how to use them
-
-Each operations monitoring category tracks a different type of interaction with IoT Hub, and each monitoring category has a schema that defines how events in that category are structured.
-
-### Device identity operations
-
-The device identity operations category tracks errors that occur when you attempt to create, update, or delete an entry in your IoT hub's identity registry. Tracking this category is useful for provisioning scenarios.
-
-```json
-{
- "time": "UTC timestamp",
- "operationName": "create",
- "category": "DeviceIdentityOperations",
- "level": "Error",
- "statusCode": 4XX,
- "statusDescription": "MessageDescription",
- "deviceId": "device-ID",
- "durationMs": 1234,
- "userAgent": "userAgent",
- "sharedAccessPolicy": "accessPolicy"
-}
-```
-
-### Device telemetry
-
-The device telemetry category tracks errors that occur at the IoT hub and are related to the telemetry pipeline. This category includes errors that occur when sending telemetry events (such as throttling) and receiving telemetry events (such as unauthorized reader). This category cannot catch errors caused by code running on the device itself.
-
-```json
-{
- "messageSizeInBytes": 1234,
- "batching": 0,
- "protocol": "Amqp",
- "authType": "{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\"}",
- "time": "UTC timestamp",
- "operationName": "ingress",
- "category": "DeviceTelemetry",
- "level": "Error",
- "statusCode": 4XX,
- "statusType": 4XX001,
- "statusDescription": "MessageDescription",
- "deviceId": "device-ID",
- "EventProcessedUtcTime": "UTC timestamp",
- "PartitionId": 1,
- "EventEnqueuedUtcTime": "UTC timestamp"
-}
-```
-
-### Cloud-to-device commands
-
-The cloud-to-device commands category tracks errors that occur at the IoT hub and are related to the cloud-to-device message pipeline. This category includes errors that occur when sending cloud-to-device messages (such as unauthorized sender), receiving cloud-to-device messages (such as delivery count exceeded), and receiving cloud-to-device message feedback (such as feedback expired). This category does not catch errors from a device that improperly handles a cloud-to-device message if the cloud-to-device message was delivered successfully.
-
-```json
-{
- "messageSizeInBytes": 1234,
- "authType": "{\"scope\":\"hub\",\"type\":\"sas\",\"issuer\":\"iothub\"}",
- "deliveryAcknowledgement": 0,
- "protocol": "Amqp",
- "time": " UTC timestamp",
- "operationName": "ingress",
- "category": "C2DCommands",
- "level": "Error",
- "statusCode": 4XX,
- "statusType": 4XX001,
- "statusDescription": "MessageDescription",
- "deviceId": "device-ID",
- "EventProcessedUtcTime": "UTC timestamp",
- "PartitionId": 1,
- "EventEnqueuedUtcTime": "UTC timestamp"
-}
-```
-
-### Connections
-
-The connections category tracks errors that occur when devices connect or disconnect from an IoT hub. Tracking this category is useful for identifying unauthorized connection attempts and for tracking when a connection is lost for devices in areas of poor connectivity.
-
-```json
-{
- "durationMs": 1234,
- "authType": "{\"scope\":\"hub\",\"type\":\"sas\",\"issuer\":\"iothub\"}",
- "protocol": "Amqp",
- "time": " UTC timestamp",
- "operationName": "deviceConnect",
- "category": "Connections",
- "level": "Error",
- "statusCode": 4XX,
- "statusType": 4XX001,
- "statusDescription": "MessageDescription",
- "deviceId": "device-ID"
-}
-```
-
-### File uploads
-
-The file upload category tracks errors that occur at the IoT hub and are related to file upload functionality. This category includes:
-
-* Errors that occur with the SAS URI, such as when it expires before a device notifies the hub of a completed upload.
-
-* Failed uploads reported by the device.
-
-* Errors that occur when a file is not found in storage during IoT Hub notification message creation.
-
-This category cannot catch errors that directly occur while the device is uploading a file to storage.
-
-```json
-{
- "authType": "{\"scope\":\"hub\",\"type\":\"sas\",\"issuer\":\"iothub\"}",
- "protocol": "HTTP",
- "time": " UTC timestamp",
- "operationName": "ingress",
- "category": "fileUpload",
- "level": "Error",
- "statusCode": 4XX,
- "statusType": 4XX001,
- "statusDescription": "MessageDescription",
- "deviceId": "device-ID",
- "blobUri": "http//bloburi.com",
- "durationMs": 1234
-}
-```
-
-### Message routing
-
-The message routing category tracks errors that occur during message route evaluation and endpoint health as perceived by IoT Hub. This category includes events such as when a rule evaluates to "undefined", when IoT Hub marks an endpoint as dead, and any other errors received from an endpoint. This category does not include specific errors about the messages themselves (such as device throttling errors), which are reported under the "device telemetry" category.
-
-```json
-{
- "messageSizeInBytes": 1234,
- "time": "UTC timestamp",
- "operationName": "ingress",
- "category": "routes",
- "level": "Error",
- "deviceId": "device-ID",
- "messageId": "ID of message",
- "routeName": "myroute",
- "endpointName": "myendpoint",
- "details": "ExternalEndpointDisabled"
-}
-```
-
-## Connect to the monitoring endpoint
-
-The monitoring endpoint on your IoT hub is an Event Hub-compatible endpoint. You can use any mechanism that works with Event Hubs to read monitoring messages from this endpoint. The following sample creates a basic reader that is not suitable for a high throughput deployment. For more information about how to process messages from Event Hubs, see the [Get Started with Event Hubs](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md) tutorial.
-
-To connect to the monitoring endpoint, you need a connection string and the endpoint name. The following steps show you how to find the necessary values in the portal:
-
-1. In the portal, navigate to your IoT Hub resource blade.
-
-2. Choose **Operations monitoring**, and make a note of the **Event Hub-compatible name** and **Event Hub-compatible endpoint** values:
-
- ![Event Hub-compatible endpoint values](./media/iot-hub-operations-monitoring/monitoring-endpoint.png)
-
-3. Choose **Shared access policies**, then choose **service**. Make a note of the **Primary key** value:
-
- ![Service shared access policy primary key](./media/iot-hub-operations-monitoring/service-key.png)
-
-The following C# code sample is taken from a Visual Studio **Windows Classic Desktop** C# console app. The project has the **WindowsAzure.ServiceBus** NuGet package installed.
-
-* Replace the connection string placeholder with a connection string that uses the **Event Hub-compatible endpoint** and service **Primary key** values you noted previously as shown in the following example:
-
- ```csharp
- "Endpoint={your Event Hub-compatible endpoint};SharedAccessKeyName=service;SharedAccessKey={your service primary key value}"
- ```
-
-* Replace the monitoring endpoint name placeholder with the **Event Hub-compatible name** value you noted previously.
-
-```csharp
-class Program
-{
- static string connectionString = "{your monitoring endpoint connection string}";
- static string monitoringEndpointName = "{your monitoring endpoint name}";
- static EventHubClient eventHubClient;
-
- static void Main(string[] args)
- {
- Console.WriteLine("Monitoring. Press Enter key to exit.\n");
-
- eventHubClient = EventHubClient.CreateFromConnectionString(connectionString, monitoringEndpointName);
- var d2cPartitions = eventHubClient.GetRuntimeInformation().PartitionIds;
- CancellationTokenSource cts = new CancellationTokenSource();
- var tasks = new List<Task>();
-
- foreach (string partition in d2cPartitions)
- {
- tasks.Add(ReceiveMessagesFromDeviceAsync(partition, cts.Token));
- }
-
- Console.ReadLine();
- Console.WriteLine("Exiting...");
- cts.Cancel();
- Task.WaitAll(tasks.ToArray());
- }
-
- private static async Task ReceiveMessagesFromDeviceAsync(string partition, CancellationToken ct)
- {
- var eventHubReceiver = eventHubClient.GetDefaultConsumerGroup().CreateReceiver(partition, DateTime.UtcNow);
- while (true)
- {
- if (ct.IsCancellationRequested)
- {
- await eventHubReceiver.CloseAsync();
- break;
- }
-
- EventData eventData = await eventHubReceiver.ReceiveAsync(new TimeSpan(0,0,10));
-
- if (eventData != null)
- {
- string data = Encoding.UTF8.GetString(eventData.GetBytes());
- Console.WriteLine("Message received. Partition: {0} Data: '{1}'", partition, data);
- }
- }
- }
-}
-```
-
-## Next steps
-
-To further explore using Azure Monitor to monitor IoT Hub, see:
-
-* [Monitor IoT Hub](monitor-iot-hub.md)
-
-* [Migrate from IoT Hub operations monitoring to Azure Monitor](iot-hub-migrate-to-diagnostics-settings.md)
iot-hub Troubleshoot Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/troubleshoot-error-codes.md
The supported pattern for cloud-to-device messages with HTTPS is intermittently
Alternatively, enhance device side logic to complete, reject, or abandon queued messages quickly, shorten the time to live, or consider sending fewer messages. See [C2D message time to live](./iot-hub-devguide-messages-c2d.md#message-expiration-time-to-live).
-Lastly, consider using the [Purge Queue API](/azure/iot-hub/iot-c-sdk-ref/iothub-registrymanager-h/iothubregistrymanager-deletedevice) to periodically clean up pending messages before the limit is reached.
+Lastly, consider using the [Purge Queue API](/rest/api/iothub/service/cloud-to-device-messages/purge-cloud-to-device-message-queue) to periodically clean up pending messages before the limit is reached.
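The purge operation linked above is a simple REST DELETE against the device's command queue. A sketch of building that request URL, assuming the `DELETE /devices/{id}/commands` route shown in the IoT Hub service REST reference (verify the route and api-version against the linked docs before use):

```python
def purge_queue_url(hub_name: str, device_id: str, api_version: str = "2021-04-12") -> str:
    """Build the URL for the Purge Cloud-to-Device Message Queue operation.
    The request must be sent as a DELETE with a valid service SAS token in
    the Authorization header."""
    return (f"https://{hub_name}.azure-devices.net/devices/{device_id}"
            f"/commands?api-version={api_version}")
```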
## 403006 DeviceMaximumActiveFileUploadLimitExceeded
load-balancer Load Balancer Ha Ports Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ha-ports-overview.md
Title: High availability ports overview in Azure description: Learn about high availability ports load balancing on an internal load balancer. - na Previously updated : 09/19/2019 Last updated : 04/14/2022 # High availability ports overview
-Azure Standard Load Balancer helps you load-balance **all** protocol flows on **all** ports simultaneously when you're using an internal Load Balancer via HA Ports.
+Azure Standard Load Balancer helps you load-balance **all** protocol flows on **all** ports simultaneously when you're using an internal load balancer via HA Ports.
-High availability (HA) ports is a type of load balancing rule that provides an easy way to load-balance **all** flows that arrive on **all** ports of an internal Standard Load Balancer. The load-balancing decision is made per flow. This action is based on the following five-tuple connection: source IP address, source port, destination IP address, destination port, and protocol
+High availability (HA) ports is a load-balancing rule type that provides an easy way to load-balance **all** flows that arrive on **all** ports of an internal standard load balancer. The load-balancing decision is made per flow, based on the following five-tuple: source IP address, source port, destination IP address, destination port, and protocol.
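To make the per-flow behavior concrete, here is a small illustrative sketch (not Azure's actual hashing algorithm): every packet carrying the same five-tuple maps to the same backend instance, which is the property the per-flow decision relies on.

```python
import hashlib

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol, backends):
    """Map a five-tuple onto one backend instance via a stable hash.
    Illustrative only; Azure's real distribution algorithm differs."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return backends[digest % len(backends)]

backends = ["nva-0", "nva-1", "nva-2"]
# Every packet of the same flow lands on the same instance:
first = pick_backend("10.0.0.4", 50000, "10.1.0.10", 443, "TCP", backends)
again = pick_backend("10.0.0.4", 50000, "10.1.0.10", 443, "TCP", backends)
assert first == again
```

A different source port (a new flow) may hash to a different instance; that is exactly the per-flow granularity described above.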
The HA ports load-balancing rules help you with critical scenarios, such as high availability and scale for network virtual appliances (NVAs) inside virtual networks. The feature can also help when a large number of ports must be load-balanced.
-The HA ports load-balancing rules is configured when you set the front-end and back-end ports to **0** and the protocol to **All**. The internal load balancer resource then balances all TCP and UDP flows, regardless of port number
+The HA ports load-balancing rules are configured when you set the front-end and back-end ports to **0** and the protocol to **All**. The internal load balancer resource then balances all TCP and UDP flows, regardless of port number.
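As a hedged sketch, the ports-0/protocol-All shape looks roughly like the following in an ARM-style rule body; the property names follow the `Microsoft.Network/loadBalancers` schema as commonly documented, and the resource IDs are placeholders, so verify against the template reference before use:

```python
# Hedged sketch of an HA ports load-balancing rule in an ARM-style template.
# Property names are my best understanding of the loadBalancers schema;
# the resource IDs are placeholders.
ha_ports_rule = {
    "name": "myHAPortsRule",
    "properties": {
        "protocol": "All",   # all TCP and UDP flows
        "frontendPort": 0,   # 0 means "all ports"
        "backendPort": 0,
        "frontendIPConfiguration": {"id": "<frontend-ip-config-id>"},
        "backendAddressPool": {"id": "<backend-pool-id>"},
    },
}

assert ha_ports_rule["properties"]["frontendPort"] == 0
assert ha_ports_rule["properties"]["protocol"] == "All"
```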
## Why use HA ports?
-### <a name="nva"></a>Network virtual appliances
+### Network virtual appliances
-You can use NVAs to help secure your Azure workload from multiple types of security threats. When you use NVAs in these scenarios, they must be reliable and highly available, and they must scale out for demand.
-
-You can achieve these goals simply by adding NVA instances to the back-end pool of your internal load balancer and configuring an HA ports load-balancer rule.
+You can use NVAs to help secure your Azure workload from multiple types of security threats. When you use NVAs in these scenarios, they must be reliable and highly available, and they must scale out for demand. Add NVA instances to the backend pool of your internal load balancer and configure an HA ports rule.
For NVA HA scenarios, HA ports offer the following advantages:

- Provide fast failover to healthy instances, with per-instance health probes
+- Ensure higher performance with scale-out to **n**-active instances
+
+- Provide **n**-active and active-passive scenarios
+- Eliminate the need for complex solutions, such as Apache ZooKeeper nodes for monitoring appliances

The following diagram presents a hub-and-spoke virtual network deployment. The spokes force-tunnel their traffic to the hub virtual network and through the NVA before leaving the trusted space. The NVAs are behind an internal standard load balancer with an HA ports configuration. All traffic can be processed and forwarded accordingly. When configured as shown in the following diagram, an HA ports load-balancing rule additionally provides flow symmetry for ingress and egress traffic.
-<a name="diagram"></a>
-![Diagram of hub-and-spoke virtual network, with NVAs deployed in HA mode](./media/load-balancer-ha-ports-overview/nvaha.png)
> [!NOTE]
> If you are using NVAs, confirm with their providers how best to use HA ports and which scenarios are supported.
-### Load-balancing large numbers of ports
+### Load balance a large number of ports
-You can also use HA ports for applications that require load balancing of large numbers of ports. You can simplify these scenarios by using an internal [Standard Load Balancer](./load-balancer-overview.md) with HA ports. A single load-balancing rule replaces multiple individual load-balancing rules, one for each port.
+You can also use HA ports for applications that require load balancing of large numbers of ports. You can simplify these scenarios by using an internal [standard load balancer](./load-balancer-overview.md) with HA ports. A single load-balancing rule replaces multiple individual load-balancing rules, one for each port.
## Region availability
The HA ports feature is available in all the global Azure regions.
## Supported configurations
-### A single, non-floating IP (non-Direct Server Return) HA-ports configuration on an internal Standard Load Balancer
+### A single, non-floating IP (non-Direct Server Return) HA-ports configuration on an internal standard load balancer
+
+This configuration is a basic HA ports configuration. Use the following steps to configure an HA ports load-balancing rule on a single frontend IP address:
+
+1. When configuring a standard load balancer, select the **HA ports** check box in the load balancer rule configuration.
-This configuration is a basic HA ports configuration. You can configure an HA ports load-balancing rule on a single front-end IP address by doing the following:
-1. While configuring Standard Load Balancer, select the **HA ports** check box in the Load Balancer rule configuration.
2. For **Floating IP**, select **Disabled**.
-This configuration does not allow any other load-balancing rule configuration on the current load balancer resource. It also allows no other internal load balancer resource configuration for the given set of back-end instances.
+This configuration doesn't allow any other load-balancing rule configuration on the current load balancer resource. It also allows no other internal load balancer resource configuration for the given set of back-end instances.
However, you can configure a public standard load balancer for the back-end instances in addition to this HA ports rule.
-### A single, floating IP (Direct Server Return) HA-ports configuration on an internal Standard Load Balancer
+### A single, floating IP (Direct Server Return) HA-ports configuration on an internal standard load balancer
You can similarly configure your load balancer to use an **HA ports** load-balancing rule with a single front end by setting **Floating IP** to **Enabled**.
-By using this configuration, you can add more floating IP load-balancing rules and/or a public load balancer. However, you cannot use a non-floating IP, HA-ports load-balancing configuration on top of this configuration.
+With this configuration, you can add more floating IP load-balancing rules and/or a public load balancer. However, you can't use a non-floating IP, HA-ports load-balancing configuration on top of this configuration.
+
+### Multiple HA-ports configurations on an internal standard load balancer
+
+To configure more than one HA port frontend for the same backend pool, use the following steps:
-### Multiple HA-ports configurations on an internal Standard Load Balancer
+- Configure more than one front-end private IP address for a single internal standard load balancer resource.
-If your scenario requires that you configure more than one HA port front end for the same back-end pool, you can do the following:
- Configure multiple load-balancing rules, where each rule has a single unique front-end IP address selected.

- Select the **HA ports** option, and then set **Floating IP** to **Enabled** for all the load-balancing rules.
-### An internal load balancer with HA ports and a public load balancer on the same back-end instance
+### An internal load balancer with HA ports and a public load balancer on the same backend instance
-You can configure *one* public Standard Load Balancer resource for the backend resources, along with a single internal Standard Load Balancer with HA ports.
+You can configure **one** public standard load balancer resource for the backend resources, along with a single internal standard load balancer with HA ports.
## Limitations
+- HA ports load-balancing rules are available only for an internal standard load balancer.
+
+- Combining an HA ports load-balancing rule and a non-HA ports load-balancing rule that point to the same backend **ipconfiguration(s)** isn't supported on a single front-end IP configuration unless both rules have Floating IP enabled.
+
+- IP fragmenting isn't supported. If a packet is already fragmented, it's forwarded based on the two-tuple [distribution mode](distribution-mode-concepts.md) when enabled on HA ports load-balancing rules.
+- Flow symmetry for NVA scenarios with a backend instance and a single IP/single NIC configuration is supported only when used as shown in the diagram above. Flow symmetry isn't provided in any other scenario. Two or more load balancer resources and their rules make independent decisions and aren't coordinated. Flow symmetry isn't available with the use of multiple IP configurations. Flow symmetry isn't available when placing the NVA between a public and internal load balancer. We recommend the use of a single IP/single NIC configuration referenced in the architecture above.
## Next steps
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
The specified VM Size failed to provision due to a lack of Azure Machine Learnin
Below is a list of reasons you might run into this error:

* [Resource request was greater than limits](#resource-requests-greater-than-limits)
-* [Unable to download resources](#unable-to-download-resources)
+* [Startup task failed due to authorization error](#authorization-error)
+* [Startup task failed due to incorrect role assignments on resource](#authorization-error)
+* [Unable to download user container image](#unable-to-download-user-container-image)
+* [Unable to download user model or code artifacts](#unable-to-download-user-model-or-code-artifacts)
#### Resource requests greater than limits

Requests for resources must be less than or equal to limits. If you don't set limits, we set default values when you attach your compute to an Azure Machine Learning workspace. You can check limits in the Azure portal or by using the `az ml compute show` command.
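As an illustration of that rule, here is a hypothetical helper (not part of the Azure ML SDK) that flags any resource whose request exceeds its limit:

```python
def validate_resources(requests: dict, limits: dict) -> list:
    """Return the resources whose request exceeds the corresponding limit.
    Hypothetical helper for illustration; not part of the Azure ML SDK."""
    return [name for name, requested in requests.items()
            if name in limits and requested > limits[name]]

# Example: 4 CPUs requested, but the attached compute's limit is 2.
violations = validate_resources({"cpu": 4, "memory_gb": 8},
                                {"cpu": 2, "memory_gb": 16})
assert violations == ["cpu"]
```

Running a check like this against your deployment YAML before submitting can surface the misconfiguration earlier than the provisioning error does.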
-#### Unable to download resources
+#### Authorization error
After provisioning the compute resource, during deployment creation, Azure tries to pull the user container image from the workspace private Azure Container Registry (ACR) and mount the user model and code artifacts into the user container from the workspace storage account.
To pull blobs, Azure uses [managed identities](../active-directory/managed-ident
- If you created the associated endpoint with UserAssigned, the user's managed identity must have Storage blob data reader permission on the workspace storage account.
-During this process, you can run into a few different issues depending on which stage the operation failed at:
-
-* [Unable to download user container image](#unable-to-download-user-container-image)
-* [Unable to download user model or code artifacts](#unable-to-download-user-model-or-code-artifacts)
-
-To get more details about these errors, run:
-
-```azurecli
-az ml online-deployment get-logs -n <endpoint-name> --deployment <deployment-name> --l 100
-```
- #### Unable to download user container image
-It is possible that the user container could not be found.
+It is possible that the user container could not be found. Check [container logs](#get-container-logs) to get more details.
Make sure container image is available in workspace ACR.
For example, if image is `testacr.azurecr.io/azureml/azureml_92a029f831ce58d2ed0
#### Unable to download user model or code artifacts
-It is possible that the user model or code artifacts can't be found.
+It is possible that the user model or code artifacts can't be found. Check [container logs](#get-container-logs) to get more details.
Make sure model and code artifacts are registered to the same workspace as the deployment. Use the `show` command to show details for a model or code artifact in a workspace.
marketplace Azure Vm Plan Pricing And Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-plan-pricing-and-availability.md
Previously updated : 02/18/2022
Last updated : 04/15/2022

# Configure pricing and availability for a virtual machine offer
On this pane, you configure:
- The price per hour.
- Whether to make the plan visible to everyone or only to specific customers (a private audience).
-### Markets
+## Markets
Every plan must be available in at least one market. Most markets are selected by default. To edit the list, select **Edit markets** and select or clear check boxes for each market location where this plan should (or shouldn't) be available for purchase. Users in selected markets can still deploy the offer to all Azure regions selected in the ["Plan setup"](azure-vm-plan-setup.md) section.
When you remove a market, customers from that market who are using active deploy
Select **Save** to continue.
-### Pricing
+## Pricing
For the **License model**, select **Usage-based monthly billed plan** to configure pricing for this plan, or **Bring your own license** to let customers use this plan with their existing license.

For a usage-based monthly billed plan, Microsoft will charge the customer for their hourly usage and they're billed monthly. This is our _Pay-as-you-go_ plan, where customers are only billed for the hours that they've used. When you select this plan, choose one of the following pricing options:

- **Free** – Your VM offer is free.
+- **Flat rate** – Your VM offer is the same hourly price regardless of the hardware it runs on.
- **Per core** – Your VM offer pricing is based on per CPU core count. You provide the price for one CPU core and we'll increment the pricing based on the size of the hardware.
- **Per core size** – Your VM offer is priced based on the number of CPU cores on the hardware it's deployed on.
- **Per market and core size** – Assign prices based on the number of CPU cores on the hardware it's deployed on, and also for all markets. Currency conversion is done by you, the publisher. This option is easier if you use the import pricing feature.
For **Per core size** and **Per market and core size**, enter a **Price per core
> [!NOTE]
> To ensure that the prices are right before you publish them, export the pricing spreadsheet and review the prices in each market. Before you export pricing data, first select **Save draft** near the bottom of the page to save pricing changes.
-Some things to consider when selecting a pricing option:
-- For the first four options, Microsoft does the currency conversion.
-- Microsoft suggests using a flat rate pricing for software solutions.
-- Prices are fixed, so once the plan is published the prices can't be adjusted. However, if you would like to reduce prices for your VM offers you can open a [support ticket](support.md).
+When selecting a pricing option, Microsoft does the currency conversion for the Flat rate, Per core, and Per core size pricing options.
-> [!IMPORTANT]
-> Occasionally, Microsoft expands the list of supported core sizes available. When this occurs, we will notify you and request that you take action on your offer within a specified timeframe. If you do not review your offer within the timeframe specified, weΓÇÖll publish the new core sizes at the price that we have calculated for you. For details about updating core sizes, see [Update core size for an Azure virtual machine offer](azure-vm-plan-manage.md).
+### Configure reservation pricing (optional)
-### Free Trial
+When you select the _Flat rate_, _Per core_, or _Per core size_ price option, the **Reservation pricing** section appears. You can choose to offer savings for a 1-year commitment, 3-year commitment, or both. For more information about reservation pricing, including how prices are calculated, see [Plan a virtual machine offer](marketplace-virtual-machines.md#reservation-pricing-optional).
+
+These steps assume you have already selected either the _Flat rate_, _Per core_, or _Per core size_ price option and entered a per hour price.
+
+1. Under **Reservation pricing**, select **Yes, offer reservation pricing**.
+1. To offer a 1-year discount, select the **1-year saving %** check box and then enter the percentage discount you want to offer.
+1. To offer a 3-year discount, select the **3-year saving %** check box and then enter the percentage discount you want to offer.
+1. To see the discounted prices, select **Price per core size**. A table with the 1-year and 3-year prices for each core size is shown. These prices are calculated based on the number of hours in the term with the percentage discount subtracted.
+
+ > [!TIP]
+ > For Per core size plans, you can optionally change the price for a particular core size in the **Price/hour** column of the table.
+
+1. Make sure to select **Save draft** before you leave the page. The changes are applied once you publish the offer.
+
+## Free trial
You can offer a one-month, three-month, or six-month **Free Trial** to your customers.
-### Plan visibility
+## Plan visibility
You can design each plan to be visible to everyone or only to a preselected private audience.
You can design each plan to be visible to everyone or only to a preselected priv
Private offers aren't supported with Azure subscriptions established through a reseller of the Cloud Solution Provider program (CSP).
-### Hide plan
+## Hide plan
If your virtual machine is meant to be used only indirectly when it's referenced through another solution template or managed application, select this check box to publish the virtual machine but hide it from customers who might be searching or browsing for it directly.
marketplace Marketplace Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-virtual-machines.md
Previously updated : 02/18/2022
Last updated : 04/15/2022

# Plan a virtual machine offer
A preview audience can access your offer prior to it being published live in the
## Plans, pricing, and trials
-VM offers require at least one plan. A plan defines the solution scope and limits, and the associated pricing. You can create multiple plans for your offer to give your customers different technical and licensing options, as well as trial opportunities. For VM offers with more than one plan, you can change the order that your plans are shown to customers. The first plan listed will become the default plan that customers will see. For info about how to reorder plans, see [Reorder plans](azure-vm-plan-reorder-plans.md). For general guidance about plans, including pricing models, free trials, and private plans, see [Plans and pricing for commercial marketplace offers](plans-pricing.md).
+VM offers require at least one plan. A plan defines the solution scope and limits, and the associated pricing. You can create multiple plans for your offer to give your customers different technical and pricing options, as well as trial opportunities. For VM offers with more than one plan, you can change the order that your plans are shown to customers. The first plan listed will become the default plan that customers will see. For info about how to reorder plans, see [Reorder plans](azure-vm-plan-reorder-plans.md). For general guidance about plans, including pricing models, free trials, and private plans, see [Plans and pricing for commercial marketplace offers](plans-pricing.md).
VMs are fully commerce-enabled, using usage-based pay-as-you-go or bring-your-own-license (BYOL) licensing models. Microsoft hosts the commerce transaction and bills your customer on your behalf. You get the benefit of using the preferred payment relationship between your customer and Microsoft, including any Enterprise Agreements. For more information, see [Commercial marketplace transact capabilities](./marketplace-commercial-transaction-capabilities-and-considerations.md).

> [!NOTE]
> The Azure Prepayment (previously called monetary commitment) associated with an Enterprise Agreement can be used against the Azure usage of your VM, but not against your software licensing fees.
+### Reservation pricing (optional)
+
+You can offer savings to customers who commit to an annual or three-year agreement through **VM software reservations**. This is called _Reservation pricing_.
+
+Reservation pricing applies to usage-based monthly billed plans with the following price options:
+
+- Flat rate
+- Per core
+- Per core size
+
+Reservation pricing doesn't apply to _Bring your own license_ plans or to plans with the following price options:
+
+- Free
+- Per market and core size price
+
+#### How prices are calculated
+
+The 1-year and 3-year prices are calculated based on the per hour usage-based price and the percentage savings you configure for a plan.
+
+In this example, we'll configure a plan with the "Per core" price option as follows:
+
+- Hourly price per core: $1.
+- 1-year savings: 30% discount
+- 3-year savings: 50% discount
+
+All calculations are based on 8,760 hours per year. Without VM software reservation pricing, the yearly cost of a 1 core VM would be $8,760.00. If the customer purchases a VM software reservation, the price would be as follows:
+
+1-year price with 30% discount = $6,132.00
+
+3-year price with 50% discount = $13,140.00
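The worked example above can be reproduced with a short calculation (8,760 hours per year, with the discount applied across the whole term; `round` is used only to keep the money values tidy):

```python
HOURS_PER_YEAR = 8760  # the number of hours the calculation is based on

def reservation_price(hourly_price: float, years: int, discount: float) -> float:
    """Total price of a VM software reservation over the whole term:
    per-hour price x hours in the term x (1 - discount)."""
    return round(hourly_price * HOURS_PER_YEAR * years * (1 - discount), 2)

# The "Per core" example from above: $1/hour for one core.
one_year = reservation_price(1.00, years=1, discount=0.30)    # 6132.0
three_year = reservation_price(1.00, years=3, discount=0.50)  # 13140.0
```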
+### Private plans

Private plans restrict the discovery and deployment of your solution to a specific set of customers you choose and offer customized software, terms, and pricing. The customized terms enable you to highlight a variety of scenarios, including field-led deals with specialized pricing and terms as well as early access to limited release software.
openshift Built In Container Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/built-in-container-registry.md
Last updated 10/15/2020
-# Configure built-in container registry for Azure Red Hat OpenShift 4
+# Configure the built-in container registry for Azure Red Hat OpenShift 4
-Azure Red Hat OpenShift provides an integrated container image registry called [OpenShift Container Registry (OCR)](https://docs.openshift.com/container-platform/4.5/registry/architecture-component-imageregistry.html) that adds the ability to automatically provision new image repositories on demand. This provides users with a built-in location for their application builds to push the resulting images.
+Azure Red Hat OpenShift provides an [integrated container image registry](https://docs.openshift.com/container-platform/4.9/registry/index.html) that adds the ability to automatically provision new image repositories on demand. This provides users with a built-in location for their application builds to push the resulting images.
In this article, you'll configure the built-in container image registry for an Azure Red Hat OpenShift (ARO) 4 cluster. You'll learn how to: > [!div class="checklist"]
-> * Set up Azure AD
-> * Set up OpenID Connect
-> * Access the built-in container image registry
+> * Authorize an identity to access the registry
+> * Access the built-in container image registry from inside the cluster
+> * Access the built-in container image registry from outside the cluster
## Before you begin
-This article assumed you have an existing ARO cluster. If you need an ARO cluster, see the ARO tutorial, [Create an Azure Red Hat OpenShift 4 cluster](./tutorial-create-cluster.md). Make sure to create the cluster with the `--pull-secret` argument to `az aro create`. This is necessary to configure Azure Active Directory authentication and the built-in container registry.
+This article assumes you have an existing ARO cluster (see [Create an Azure Red Hat OpenShift 4 cluster](./tutorial-create-cluster.md)). If you would like to configure Azure AD integration, make sure to create the cluster with the `--pull-secret` argument to `az aro create`.
-Once you have your cluster, connect to the cluster by following the steps in [Connect to an Azure Red Hat OpenShift 4 cluster](./tutorial-connect-cluster.md).
- * Be sure to follow the steps in "Install the OpenShift CLI" because we'll use the `oc` command later in this article.
- * Make note of the cluster console URL, which looks like `https://console-openshift-console.apps.<random>.<region>.aroapp.io/`. The values for `<random>` and `<region>` will be used later in this article.
- * Note the `kubeadmin` credentials. They will also be used later in this article.
+> [!NOTE]
+> [Configuring Azure AD Authentication](./configure-azure-ad-ui.md#configure-openshift-openid-authentication) for your cluster is the easiest way to interact with the internal registry from outside the cluster.
-### Configure Azure Active Directory authentication
+Once you have your cluster, [connect to the cluster](./tutorial-connect-cluster.md) by authenticating as the `kubeadmin` user.
-Azure Active Directory (Azure AD) implements OpenID Connect (OIDC). OIDC lets you use Azure AD to sign in to the ARO cluster. Follow the steps in [Configure Azure Active Directory authentication](configure-azure-ad-cli.md) to set up your cluster.
+## Configure authentication to the registry
-## Access the built-in container image registry
+For any identity (a cluster user, Azure AD user, or ServiceAccount) to access the internal registry, it must be granted permissions inside the cluster:
-Now that you've set up the authentication methods to the ARO cluster, let's enable access to the built-in registry.
-
-#### Define the Azure AD user to be an administrator
-
-1. Sign in to the OpenShift web console from your browser using the credentials of an Azure AD user. We'll leverage the OpenShift OpenID authentication against Azure Active Directory to use OpenID to define the administrator.
-
- 1. Use an InPrivate, Incognito or other equivalent browser window feature to sign in to the console. The window will look different after having enabled OIDC.
-
- :::image type="content" source="media/built-in-container-registry/oidc-enabled-login-window.png" alt-text="OpenID Connect enabled sign in window.":::
- 1. Select **AAD**
-
- > [!NOTE]
- > Take note of the username and password you use to sign in here. This username and password will function as an administrator for other actions in this and other articles.
-2. Sign in with the OpenShift CLI by using the following steps. For discussion, this process is known as `oc login`.
- 1. At the right-top of the web console, expand the context menu of the signed-in user, then select **Copy Login Command**.
- 2. Sign in to a new tab window with the same user if necessary.
- 3. Select **Display Token**.
- 4. Copy the value listed below **Login with this token** to the clipboard and run it in a shell, as shown here.
+As `kubeadmin`, execute the following commands:
+ ```bash
+ # Note: replace "<user>" with the identity you need to access the registry
+ oc policy add-role-to-user -n openshift-image-registry registry-viewer <user>
+ oc policy add-role-to-user -n openshift-image-registry registry-editor <user>
+ ```
- ```bash
- oc login --token=XOdASlzeT7BHT0JZW6Fd4dl5EwHpeBlN27TAdWHseob --server=https://api.aqlm62xm.rnfghf.aroapp.io:6443
- Logged into "https://api.aqlm62xm.rnfghf.aroapp.io:6443" as "kube:admin" using the token provided.
+> [!NOTE]
+> For cluster users and Azure AD users - this will be the same name you use to authenticate into the cluster. For OpenShift ServiceAccounts, format the name as `system:serviceaccount:<project>:<name>`
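A tiny hypothetical helper makes that naming convention explicit (it is not part of `oc`; it just formats the `<user>` argument shown earlier):

```python
from typing import Optional

def registry_subject(name: str, project: Optional[str] = None) -> str:
    """Format the identity passed to `oc policy add-role-to-user`.
    Cluster users and Azure AD users are used as-is; ServiceAccounts
    take the system:serviceaccount:<project>:<name> form."""
    if project is None:
        return name
    return f"system:serviceaccount:{project}:{name}"

assert registry_subject("jane@example.com") == "jane@example.com"
assert registry_subject("builder", "ci") == "system:serviceaccount:ci:builder"
```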
- You have access to 57 projects, the list has been suppressed. You can list all projects with 'oc projects'
+## Access the registry
- Using project "default".
- ```
+Now that you've configured authentication for the registry, you can interact with it:
-3. Run `oc whoami` in the console and note the output as **\<aad-user>**. We'll use this value later in the article.
-4. Sign out of the OpenShift web console. Select the button in the top right of the browser window labeled as the **\<aad-user>** and choose **Log Out**.
+### From inside the cluster
+If you need to access the registry from inside the cluster (e.g. you are running a CI/CD platform as Pods that will push/pull images to the registry), you can access the registry via its [ClusterIP Service](https://docs.openshift.com/container-platform/4.9/rest_api/network_apis/service-core-v1.html) at the fully qualified domain name `image-registry.openshift-image-registry.svc.cluster.local:5000`, which is accessible to all Pods within the cluster.
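For example, a workload inside the cluster would tag and push images against that service FQDN. A small sketch of composing such a reference (the project, image, and tag names are placeholders):

```python
# The in-cluster registry service FQDN, reachable from any Pod.
INTERNAL_REGISTRY = "image-registry.openshift-image-registry.svc.cluster.local:5000"

def internal_image_ref(project: str, image: str, tag: str = "latest") -> str:
    """Compose an image reference for pushes/pulls from inside the cluster.
    The project, image, and tag values are illustrative placeholders."""
    return f"{INTERNAL_REGISTRY}/{project}/{image}:{tag}"

ref = internal_image_ref("my-app", "backend", "v1")
```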
-#### Grant the Azure AD user the necessary roles for registry interaction
+### From outside the cluster
-1. Sign in to the OpenShift web console from your browser using the `kubeadmin` credentials.
-1. Sign in to the OpenShift CLI with the token for `kubeadmin` by following the steps for `oc login` above, but do them after signing in to the web console with `kubeadmin`.
-1. Execute the following commands to enable the access to the built-in registry for the **aad-user**.
+If your workflows require you to access the internal registry from outside the cluster (e.g. you want to push/pull images from a developer's laptop, external CI/CD platform, and/or a different ARO cluster), you will need to perform a few additional steps:
+As `kubeadmin`, execute the following commands to expose the built-in registry outside the cluster via a [Route](https://docs.openshift.com/container-platform/4.9/rest_api/network_apis/route-route-openshift-io-v1.html):
```bash
- # Switch to project "openshift-image-registry"
- oc project openshift-image-registry
-
- # Output should look similar to the following.
- # Now using project "openshift-image-registry" on server "https://api.x8xl3f4y.eastus.aroapp.io:6443".
+ oc patch config.imageregistry.operator.openshift.io/cluster --patch='{"spec":{"defaultRoute":true}}' --type=merge
+ oc patch config.imageregistry.operator.openshift.io/cluster --patch='[{"op": "add", "path": "/spec/disableRedirect", "value": true}]' --type=json
```
- ```bash
- # Expose the registry using "DefaultRoute"
- oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
-
- # Output should look similar to the following.
- # config.imageregistry.operator.openshift.io/cluster patched
- ```
+You can then find the registry's externally-routable fully qualified domain name:
+As `kubeadmin`, execute:
```bash
- # Add roles to "aad-user" for pulling and pushing images
- # Note: replace "<aad-user>" with the one you wrote down before
- oc policy add-role-to-user registry-viewer <aad-user>
-
- # Output should look similar to the following.
- # clusterrole.rbac.authorization.k8s.io/registry-viewer added: "kaaIjx75vFWovvKF7c02M0ya5qzwcSJ074RZBfXUc34"
- ```
-
- ```bash
- oc policy add-role-to-user registry-editor <aad-user>
- # Output should look similar to the following.
- # clusterrole.rbac.authorization.k8s.io/registry-editor added: "kaaIjx75vFWovvKF7c02M0ya5qzwcSJ074RZBfXUc34"
+ oc get route -n openshift-image-registry default-route --template='{{ .spec.host }}'
```
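Once the route exists, you can log in to the registry from your workstation and push or pull images. A minimal sketch, assuming `oc` and `podman` are installed locally and you're already logged in to the cluster (`myproject/myimage` is a placeholder; `docker login`/`docker push` work the same way):

```bash
# Capture the registry's external hostname
HOST=$(oc get route -n openshift-image-registry default-route --template='{{ .spec.host }}')

# Log in with your OpenShift token
podman login --username "$(oc whoami)" --password "$(oc whoami -t)" "$HOST"

# Tag and push a local image into a project you can write to
podman tag myimage:latest "$HOST/myproject/myimage:latest"
podman push "$HOST/myproject/myimage:latest"
```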
-#### Obtain the container registry URL
-
-Use the `oc get route` command as shown next to get the container registry URL.
-
-```bash
-# Note: the value of "Container Registry URL" in the output is the fully qualified registry name.
-HOST=$(oc get route default-route --template='{{ .spec.host }}')
-echo "Container Registry URL: $HOST"
-```
-
- > [!NOTE]
- > Note the console output of **Container Registry URL**. It will be used as the fully qualified registry name for this guide and subsequent ones.
- ## Next steps Now that you've set up the built-in container image registry, you can get started by deploying an application on OpenShift. For Java applications, check out [Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Red Hat OpenShift 4 cluster](howto-deploy-java-liberty-app.md).
openshift Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/concepts-networking.md
The following networking features are specific to Azure Red Hat OpenShift:
The following network settings are available for Azure Red Hat OpenShift 4 clusters:
-* **API Visibility** - Set the API visibility when running the [az are create command](tutorial-create-cluster.md#create-the-cluster).
+* **API Visibility** - Set the API visibility when running the [az aro create command](tutorial-create-cluster.md#create-the-cluster).
* "Public" - API Server is accessible by external networks.
* "Private" - API Server is assigned a private IP from the control plane subnet, only accessible using connected networks (peered virtual networks and other subnets in the cluster). A private DNS Zone will be created on the customer's behalf.
* **Ingress Visibility** - Set the ingress visibility when running the [az aro create command](tutorial-create-cluster.md#create-the-cluster).
As included in the diagram above, you'll notice a few changes:
For more information on OpenShift 4.5 and later, check out the [OpenShift 4.5 release notes](https://docs.openshift.com/container-platform/4.5/release_notes/ocp-4-5-release-notes.html). ## Next steps
-For more information on outbound traffic and what Azure Red Hat OpenShift supports for egress, see the [support policies](support-policies-v4.md) documentation.
+For more information on outbound traffic and what Azure Red Hat OpenShift supports for egress, see the [support policies](support-policies-v4.md) documentation.
openshift Howto Secure Openshift With Front Door https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-secure-openshift-with-front-door.md
This article explains how to use Azure Front Door Premium to secure access to Az
The following prerequisites are required: -- You have an existing Azure Red Hat OpenShift cluster. For information on creating an Azure Red Hat OpenShift Cluster, learn how to [create-an-aks-cluster](../aks/kubernetes-walkthrough-portal.md#create-an-aks-cluster).
+- You have an existing Azure Red Hat OpenShift cluster. Follow this guide to [create a private Azure Red Hat OpenShift cluster](howto-create-private-cluster-4x.md).
- The cluster is configured with private ingress visibility.
Because Azure Front Door is a global service, the application can take up to 30
Create a Azure Web Application Firewall on Azure Front Door using the Azure portal: > [!div class="nextstepaction"]
-> [Tutorial: Create a Web Application Firewall policy on Azure Front Door using the Azure portal](../web-application-firewall/afds/waf-front-door-create-portal.md)
+> [Tutorial: Create a Web Application Firewall policy on Azure Front Door using the Azure portal](../web-application-firewall/afds/waf-front-door-create-portal.md)
orbital Contact Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/contact-profile.md
Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Next steps - [Quickstart: Schedule a contact](schedule-contact.md)-- [How-to: Cancel a contact](delete-contact.md)
+- [Tutorial: Cancel a contact](delete-contact.md)
orbital Delete Contact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/delete-contact.md
Title: 'How to cancel a scheduled contact on Azure Orbital Earth Observation service'
-description: 'How to cancel a scheduled contact'
+ Title: 'Cancel a scheduled contact on Azure Orbital Earth Observation service'
+description: 'Cancel a scheduled contact'
-+ Last updated 11/16/2021
Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
6. The scheduled contact will be canceled once the contact entry is deleted. ## Next steps -- [How-to: Schedule a contact](schedule-contact.md)-- [How-to: Update the spacecraft TLE](update-tle.md)
+- [Quickstart: Schedule a contact](schedule-contact.md)
+- [Tutorial: Update the spacecraft TLE](update-tle.md)
orbital Howto Downlink Aqua https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/howto-downlink-aqua.md
+
+ Title: Schedule a contact with NASA's AQUA public satellite using Azure Orbital Earth Observation Service
+description: How to schedule a contact with NASA's AQUA public satellite using Azure Orbital Earth Observation Service
++++ Last updated : 04/14/2022+
+# Customer intent: As a satellite operator, I want to ingest data from NASA's AQUA public satellite into Azure.
++
+# Tutorial: Downlink data from NASA's AQUA public satellite
+
+You can communicate with satellites directly from Azure using Azure Orbital's ground station service. Once downlinked, this data can be processed and analyzed in Azure. In this guide you'll learn how to:
+
+> [!div class="checklist"]
+> * Create & authorize a spacecraft for AQUA
+> * Prepare a virtual machine (VM) to receive the downlinked AQUA data
+> * Configure a contact profile for an AQUA downlink mission
+> * Schedule a contact with AQUA using Azure Orbital and save the downlinked data
++
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Complete the onboarding process for the preview. [Onboard to the Azure Orbital Preview](orbital-preview.md).
+
+## Sign in to Azure
+
+Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
+
+> [!NOTE]
+> These steps must be followed as is or you won't be able to find the resources. Please use the specific link above to sign in directly to the Azure Orbital Preview page.
+
+## Create & authorize a spacecraft for AQUA
+1. In the Azure portal search box, enter **Spacecrafts**. Select **Spacecrafts** in the search results.
+2. In the **Spacecrafts** page, select Create.
+3. Obtain an up-to-date Two-Line Element (TLE) for AQUA from Celestrak at https://celestrak.com/NORAD/elements/active.txt
+ > [!NOTE]
+ > You will want to periodically update this TLE value to ensure that it is up-to-date prior to scheduling a contact. A TLE that is more than one or two weeks old may result in an unsuccessful downlink.
+4. In **Create spacecraft resource**, enter or select this information in the Basics tab:
+
+ | **Field** | **Value** |
+ | | |
+ | Subscription | Select your subscription |
+ | Resource Group | Select your resource group |
+ | Name | **AQUA** |
+ | Region | Select **West US 2** |
+ | NORAD ID | **27424** |
+ | TLE title line | **AQUA** |
+ | TLE line 1 | Enter TLE line 1 from Celestrak |
+ | TLE line 2 | Enter TLE line 2 from Celestrak |
+
+5. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page.
+6. In the **Links** page, enter or select this information:
+
+ | **Field** | **Value** |
+ | | |
+ | Direction | Select **Downlink** |
+ | Center Frequency | Enter **8160** |
+ | Bandwidth | Enter **15** |
+ | Polarization | Select **RHCP** |
+
+7. Select the **Review + create** tab, or select the **Review + create** button.
+8. Select **Create**
+
+9. Access the [Azure Orbital Spacecraft Authorization Form](https://forms.office.com/r/QbUef0Cmjr)
+10. Provide the following information:
+
+ - Spacecraft name: **AQUA**
+ - Region where spacecraft resource was created: **West US 2**
+ - Company name and email
+ - Azure Subscription ID
+
+11. Submit the form
+12. Await a 'Spacecraft resource authorized' email from Azure Orbital
+ > [!NOTE]
+ > You can confirm that your spacecraft resource for AQUA is authorized by checking that the **Authorization status** shows **Allowed** in the spacecraft's overview page.
++
+## Prepare a virtual machine (VM) to receive the downlinked AQUA data
+1. [Create a virtual network](../virtual-network/quick-create-portal.md) to host your data endpoint virtual machine (VM)
+2. [Create a virtual machine (VM)](../virtual-network/quick-create-portal.md) within the virtual network above. Ensure that this VM has the following specifications:
+- Operating System: Linux (Ubuntu 18.04 or higher)
+- Size: at least 32 GiB of RAM
+- Ensure that the VM has at least one standard public IP
+3. Create a tmpfs on the virtual machine. The downlinked data will be written here to avoid slow writes to disk:
+```console
+sudo mkdir -p /media/aqua
+sudo mount -t tmpfs -o size=28G tmpfs /media/aqua
+```
+4. Ensure that SOCAT is installed on the machine:
+```console
+sudo apt install socat
+```
+5. Edit the [Network Security Group](../virtual-network/network-security-groups-overview.md) for the subnet that your virtual machine is using to allow inbound connections from the following IPs over TCP port 56001:
+- 20.47.120.4
+- 20.47.120.38
+- 20.72.252.246
+- 20.94.235.188
+- 20.69.186.50
+- 20.47.120.177
+
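The inbound rule above can also be created from the command line. A sketch using the Azure CLI (the resource group, NSG name, rule name, and priority are placeholder values; adjust them to match your environment):

```bash
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myVmSubnetNsg \
  --name AllowOrbitalDownlink \
  --priority 200 \
  --direction Inbound --access Allow --protocol Tcp \
  --destination-port-ranges 56001 \
  --source-address-prefixes 20.47.120.4 20.47.120.38 20.72.252.246 20.94.235.188 20.69.186.50 20.47.120.177
```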
+## Configure a contact profile for an AQUA downlink mission
+1. In the Azure portal search box, enter **Contact profile**. Select **Contact profile** in the search results.
+2. In the **Contact profile** page, select **Create**.
+3. In **Create contact profile resource**, enter or select this information in the **Basics** tab:
+
+ | **Field** | **Value** |
+ | | |
+ | Subscription | Select your subscription |
+ | Resource group | Select your resource group |
+ | Name | Enter **AQUA_Downlink** |
+ | Region | Select **West US 2** |
+ | Minimum viable contact duration | **PT1M** |
+ | Minimum elevation | **5.0** |
+ | Auto track configuration | **Disabled** |
+ | Event Hubs Namespace | Select an Event Hubs Namespace to which you'll send telemetry data of your contacts. Select a Subscription before you can select an Event Hubs Namespace. |
+ | Event Hubs Instance | Select an Event Hubs Instance that belongs to the previously selected Namespace. *This field will only appear if an Event Hubs Namespace is selected first*. |
++
+4. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page.
+5. In the **Links** page, select **Add new Link**
+6. In the **Add Link** page, enter, or select this information:
+
+ | **Field** | **Value** |
+ | | |
+ | Direction | **Downlink** |
+ | Gain/Temperature in db/K | **0** |
+ | Center Frequency | **8160.0** |
+ | Bandwidth MHz | **15.0** |
+ | Polarization | **RHCP** |
+ | Endpoint name | Enter the name of the virtual machine (VM) you created above |
+ | IP Address | Enter the public IP address of the virtual machine (VM) you created above |
+ | Port | **56001** |
+ | Protocol | **TCP** |
+ | Demodulation Configuration | Leave this field **blank** or request a demodulation configuration from the [Azure Orbital team](mailto:msazureorbital@microsoft.com) to use a software modem. Include your Subscription ID, Spacecraft resource ID, and Contact Profile resource ID in your email request.|
+ | Decoding Configuration | Leave this field **blank** |
++
+7. Select the **Submit** button
+8. Select the **Review + create** tab or select the **Review + create** button
+9. Select the **Create** button
+
+## Schedule a contact with AQUA using Azure Orbital and save the downlinked data
+1. In the Azure portal search box, enter **Spacecrafts**. Select **Spacecrafts** in the search results.
+2. In the **Spacecrafts** page, select **AQUA**.
+3. Select **Schedule contact** on the top bar of the spacecraft's overview.
+4. In the **Schedule contact** page, specify this information from the top of the page:
+
+ | **Field** | **Value** |
+ | | |
+ | Contact profile | Select **AQUA_Downlink** |
+ | Ground station | Select **Quincy** |
+ | Start time | Identify a start time for the contact availability window |
+ | End time | Identify an end time for the contact availability window |
+
+5. Select **Search** to view available contact times.
+6. Select one or more contact windows and select **Schedule**.
+7. View the scheduled contact by selecting the **AQUA** spacecraft and navigating to **Contacts**.
+8. Shortly before the contact begins executing, start listening on port 56001 and write the received data to a file:
+```console
+socat -u tcp-listen:56001,fork create:/media/aqua/out.bin
+```
+9. Once your contact has executed, copy the output file `/media/aqua/out.bin` out of the tmpfs and into your home directory to avoid it being overwritten when another contact is executed:
+```console
+cp /media/aqua/out.bin ~/
+```
+
+ > [!NOTE]
+ > For a 10-minute contact with AQUA while it is transmitting with 15 MHz of bandwidth, you should expect to receive on the order of 450 MB of data.
+
+## Next steps
+
+- [Quickstart: Configure a contact profile](contact-profile.md)
+- [Quickstart: Schedule a contact](schedule-contact.md)
orbital Register Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/register-spacecraft.md
Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Next steps - [Quickstart: Configure a contact profile](contact-profile.md)-- [How-to: Schedule a contact](schedule-contact.md)
+- [Quickstart: Schedule a contact](schedule-contact.md)
orbital Schedule Contact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/schedule-contact.md
Title: 'How to schedule a contact on Azure Orbital Earth Observation service'
description: 'How to schedule a contact' -+ Last updated 11/16/2021
orbital Update Tle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/update-tle.md
Title: 'How to update the spacecraft TLE on Azure Orbital Earth Observation service'
+ Title: 'Update the spacecraft TLE on Azure Orbital Earth Observation service'
description: 'Update the spacecraft TLE' -+ Last updated 11/16/2021
Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Next steps -- [How-to: Schedule a contact](schedule-contact.md)-- [How-to: Cancel a scheduled contact](delete-contact.md)
+- [Tutorial: Schedule a contact](schedule-contact.md)
+- [Tutorial: Cancel a scheduled contact](delete-contact.md)
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
This page provides latest news and updates regarding feature additions, engine v
## Release: April 2022 * Support for [latest PostgreSQL minors](./concepts-supported-versions.md) 13.6, 12.10 and 11.15 with new server creates<sup>$</sup>.
+* Support for updating Private DNS Zone for [Azure Database for PostgreSQL - Flexible Server private networking](./concepts-networking.md) for existing servers<sup>$</sup>.
<sup>**$**</sup> New servers get these features automatically. In your existing servers, these features are enabled during your server's future maintenance window.
purview How To Delete Self Service Data Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-delete-self-service-data-access-policy.md
Title: Delete self-service policies
-description: This article describes how to delete auto-generated self-service policies
+description: This article describes how to delete auto-generated self-service policies.
Previously updated : 09/27/2021 Last updated : 03/22/2022 + # How to delete self-service data access policies
-In an Azure Purview catalog, you can now request access to datasets and self-service policies get auto-generated if the data source is enabled for **data use governance**.
+In an Azure Purview catalog, you can now [request access](how-to-request-access.md) to data assets. If policies are currently available for the data source type and the data source has [data use governance enabled](how-to-enable-data-use-governance.md), a self-service policy is generated when a data access request is approved.
-This guide describes how to delete self-service data access policies that have been auto-generated when data access request is approved.
+This article describes how to delete self-service data access policies that have been auto-generated by approved access requests.
## Prerequisites > [!IMPORTANT] > To delete self-service policies, make sure that the below prerequisites are completed.
-Self-service policies must exist for them to be deleted. Refer to the articles below to create
-self-service policies
+Self-service policies must exist to be deleted. To enable and create self-service policies, follow these articles:
-- [Enable Data Use Governance](./how-to-enable-data-use-governance.md)-- [Create a self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md)-- [Approve self-service data access request](how-to-workflow-manage-requests-approvals.md)
+1. [Enable Data Use Governance](how-to-enable-data-use-governance.md) - this will allow Azure Purview to create policies for your sources.
+1. [Create a self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md) - this will enable [users to request access to data sources from within Azure Purview](how-to-request-access.md).
+1. [Approve a self-service data access request](how-to-workflow-manage-requests-approvals.md#approvals) - after approving a request, if your workflow from the previous step includes the ability to create a self-service data policy, your policy will be created and will be viewable.
## Permission
Only users with **Policy Admin** privilege can delete self-service data access p
## Steps to delete self-service data access policies
-### Step 1: Open the Azure portal and launch the Azure purview studio
-
-The Azure Purview studio can be launched as shown below or by using using the url directly.
--
-### Step 2: Open the policy management tab
-
-Click the policy management tab to launch the self-service access policies.
-
+1. Open the Azure portal and launch the [Azure Purview Studio](https://web.purview.azure.com/resource/). The Azure Purview studio can be launched as shown below or by using the [url directly](https://web.purview.azure.com/resource/).
-### Step 3: Open the self-service access policies tab
+ :::image type="content" source="./media/how-to-delete-self-service-data-access-policy/Purview-Studio-launch-pic-1.png" alt-text="Screenshot showing an Azure Purview account open in the Azure portal, with the Azure Purview studio button highlighted.":::
+1. Select the policy management tab to launch the self-service access policies.
+ :::image type="content" source="./media/how-to-delete-self-service-data-access-policy/Purview-Studio-self-service-tab-pic-2.png" alt-text="Screenshot of the Azure Purview studio with the leftmost menu open, and the Data policy page option highlighted.":::
-### Step 4: Select the policies to be deleted
+1. Open the self-service access policies tab.
-The policies can be sorted by the different fields. once sorted, select the policies that need to be deleted.
+ :::image type="content" source="./media/how-to-delete-self-service-data-access-policy/Purview-Studio-self-service-tab-pic-3.png" alt-text="Screenshot of the Azure Purview studio open to the Data policy page with self-service access policies highlighted.":::
+1. Here you'll see all your policies. Select the policies that need to be deleted. The policies can be sorted and filtered by any of the displayed columns to improve your search.
-### Step 5: Delete the policy
+ :::image type="content" source="./media/how-to-delete-self-service-data-access-policy/Purview-Studio-selecting-policy-pic-4.png" alt-text="Screenshot showing the self-service access policies page with one policy selected.":::
-Click the delete button to delete policies that need to be removed.
+1. Select the delete button to delete all selected policies.
+ :::image type="content" source="./media/how-to-delete-self-service-data-access-policy/Purview-Studio-press-delete-pic-5.png" alt-text="Screenshot showing a self-service access policy selected, with the Delete button at the top of the page highlighted.":::
-click **OK** on the confirmation dialog box to delete the policy. Refresh the screen to check whether the policies have been deleted.
+1. Select **OK** on the confirmation dialog box to delete the policy. Refresh the screen to confirm that the policies have been deleted.
## Next steps
purview How To View Self Service Data Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-view-self-service-data-access-policy.md
Title: View self-service policies
-description: This article describes how to view auto-generated self-service policies
+description: This article describes how to view auto-generated self-service access policies
Previously updated : 09/27/2021 Last updated : 03/22/2022 + # How to view self-service data access policies
-In an Azure Purview catalog, you can now request access to datasets and self-service policies get auto-generated if the data source is enabled for **data use governance**.
+In an Azure Purview catalog, you can now [request access](how-to-request-access.md) to data assets. If policies are currently available for the data source type and the data source has [data use governance enabled](how-to-enable-data-use-governance.md), a self-service policy is generated when a data access request is approved.
-This guide describes how to view self-service data access policies that have been auto-generated when data access request is approved.
+This article describes how to view self-service data access policies that have been auto-generated by approved access requests.
## Prerequisites > [!IMPORTANT] > To view self-service policies, make sure that the below prerequisites are completed.
-Self-service policies must exist for them to be viewed. Refer to the articles below to create
-self-service policies
+Self-service policies must exist for them to be viewed. To enable and create self-service policies, follow these articles:
-- [Enable Data Use Governance](./how-to-enable-data-use-governance.md)-- [Create a self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md)-- [Approve self-service data access request](how-to-workflow-manage-requests-approvals.md)
+1. [Enable Data Use Governance](how-to-enable-data-use-governance.md) - this will allow Azure Purview to create policies for your sources.
+1. [Create a self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md) - this will enable [users to request access to data sources from within Azure Purview](how-to-request-access.md).
+1. [Approve a self-service data access request](how-to-workflow-manage-requests-approvals.md#approvals) - after approving a request, if your workflow from the previous step includes the ability to create a self-service data policy, your policy will be created and will be viewable.
## Permission
-Only users with **Policy Admin** privilege can delete self-service data access policies.
-
-## Steps to view self-service data access policies
-
-### Step 1: Open the Azure portal and launch the Azure purview studio
-
-The Azure Purview studio can be launched as shown below or by using using the url directly.
+Only the creator of your Azure Purview account, or users with [**Policy Admin**](catalog-permissions.md#roles) permissions can view self-service data access policies.
+If you need to add or request permissions, follow the [Azure Purview permissions documentation](catalog-permissions.md#add-users-to-roles).
-### Step 2: Open the policy management tab
+## Steps to view self-service data access policies
-Click the policy management tab to launch the self-service access policies.
+1. Open the Azure portal and launch the [Azure Purview Studio](https://web.purview.azure.com/resource/). The Azure Purview studio can be launched as shown below or by using the [url directly](https://web.purview.azure.com/resource/).
+ :::image type="content" source="./media/how-to-view-self-service-data-access-policy/Purview-Studio-launch-pic-1.png" alt-text="Screenshot showing an Azure Purview account open in the Azure portal, with the Azure Purview studio button highlighted.":::
-### Step 3: Open the self-service access policies tab
+1. Select the policy management tab to launch the self-service access policies.
+ :::image type="content" source="./media/how-to-view-self-service-data-access-policy/Purview-Studio-self-service-tab-pic-2.png" alt-text="Screenshot of the Azure Purview studio with the leftmost menu open, and the Data policy page option highlighted.":::
+1. Open the self-service access policies tab.
-### Step 4: View the self-service policies
+ :::image type="content" source="./media/how-to-view-self-service-data-access-policy/Purview-Studio-self-service-tab-pic-3.png" alt-text="Screenshot of the Azure Purview studio open to the Data policy page with self-service access policies highlighted.":::
-The policies can be sorted by the different fields.The policy can be filtered based on data source type and sorted by any of the columns on display
+1. Here you'll see all your policies. The policies can be sorted and filtered by any of the displayed columns to improve your search.
+ :::image type="content" source="./media/how-to-view-self-service-data-access-policy/Purview-Studio-self-service-tab-pic-4.png" alt-text="Screenshot showing the self-service access policies page, with an active filter highlighted next to the keyword filter textbox, and the date created column header selected to sort by that column.":::
## Next steps -- [Self-service data access policy](./concept-self-service-data-access-policy.md)
+- [Self-service data access policies](./concept-self-service-data-access-policy.md)
+- [How to delete self-service access policies](how-to-delete-self-service-data-access-policy.md)
search Search Howto Managed Identities Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-cosmos-db.md
Last updated 02/11/2022+ # Set up an indexer connection to a Cosmos DB database using a managed identity This article describes how to set up an Azure Cognitive Search indexer connection to an Azure Cosmos DB database using a managed identity instead of providing credentials in the connection string.
-You can use a system-assigned managed identity or a user-assigned managed identity (preview). Managed identities are Azure AD logins and require Azure role assignments to access data in Cosmos DB.
+You can use a system-assigned managed identity or a user-assigned managed identity (preview). Managed identities are Azure AD logins and require Azure role assignments to access data in Cosmos DB. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
Before learning more about this feature, it is recommended that you have an understanding of what an indexer is and how to set up an indexer for your data source. More information can be found at the following links:
search Search Howto Managed Identities Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-storage.md
Last updated 03/30/2022+ # Set up a connection to an Azure Storage account using a managed identity This article describes how to set up an Azure Cognitive Search indexer connection to an Azure Storage account using a managed identity instead of providing credentials in the connection string.
-You can use a system-assigned managed identity or a user-assigned managed identity (preview). Managed identities are Azure AD logins and require Azure role assignments to access data in Azure Storage.
+You can use a system-assigned managed identity or a user-assigned managed identity (preview). Managed identities are Azure AD logins and require Azure role assignments to access data in Azure Storage. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
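As a sketch, assigning a data role to a search service's system-assigned identity might look like this with the Azure CLI (the principal ID and scope are placeholder values; choose the role your scenario requires, such as **Storage Blob Data Reader** for read-only indexing):

```bash
az role assignment create \
  --assignee "<search-service-principal-id>" \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```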
This article assumes familiarity with indexer concepts and configuration. If you're new to indexers, start with these links:
service-bus-messaging Service Bus Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-sas.md
Title: Azure Service Bus access control with Shared Access Signatures description: Overview of Service Bus access control using Shared Access Signatures overview, details about SAS authorization with Azure Service Bus. Previously updated : 01/06/2022 Last updated : 04/14/2022 ms.devlang: csharp
This article discusses *Shared Access Signatures* (SAS), how they work, and how to use them in a platform-agnostic way.
-SAS guards access to Service Bus based on authorization rules. Those are configured either on a namespace, or a messaging entity (queue, or topic). An authorization rule has a name, is associated with specific rights, and carries a pair of cryptographic keys. You use the rule's name and key via the Service Bus SDK or in your own code to generate a SAS token. A client can then pass the token to Service Bus to prove authorization for the requested operation.
+SAS guards access to Service Bus based on authorization rules that are configured either on a namespace, or a messaging entity (queue, or topic). An authorization rule has a name, is associated with specific rights, and carries a pair of cryptographic keys. You use the rule's name and key via the Service Bus SDK or in your own code to generate a SAS token. A client can then pass the token to Service Bus to prove authorization for the requested operation.
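A SAS token has the form `SharedAccessSignature sr=<url-encoded-uri>&sig=<signature>&se=<expiry>&skn=<rule-name>`, where the signature is an HMAC-SHA256 over the URL-encoded resource URI and an expiry timestamp, keyed with the rule's key. A platform-agnostic sketch in bash (requires `openssl` and `jq`; the namespace, entity, and key below are placeholder values):

```bash
# Placeholder values - substitute your namespace, entity, rule name, and key
RESOURCE_URI="https://contoso.servicebus.windows.net/myqueue"
KEY_NAME="RootManageSharedAccessKey"
KEY="the-signing-key-from-the-rule"

# Token lifetime: one hour from now (Unix epoch seconds)
EXPIRY=$(( $(date +%s) + 3600 ))

# URL-encode the resource URI (jq's @uri filter percent-encodes reserved characters)
ENCODED_URI=$(printf %s "$RESOURCE_URI" | jq -sRr @uri)

# Sign "<encoded-uri>\n<expiry>" with HMAC-SHA256, then base64-encode the raw digest
SIG=$(printf '%s\n%s' "$ENCODED_URI" "$EXPIRY" \
  | openssl dgst -sha256 -hmac "$KEY" -binary | base64)
ENCODED_SIG=$(printf %s "$SIG" | jq -sRr @uri)

TOKEN="SharedAccessSignature sr=${ENCODED_URI}&sig=${ENCODED_SIG}&se=${EXPIRY}&skn=${KEY_NAME}"
echo "$TOKEN"
```

A client presents the resulting token in the `Authorization` header (HTTP) or as the password field in an AMQP CBS exchange, depending on the protocol in use.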
> [!NOTE] > Azure Service Bus supports authorizing access to a Service Bus namespace and its entities using Azure Active Directory (Azure AD). Authorizing users or applications using OAuth 2.0 token returned by Azure AD provides superior security and ease of use over shared access signatures (SAS). With Azure AD, there is no need to store the tokens in your code and risk potential security vulnerabilities.
The rights conferred by the policy rule can be a combination of:
The 'Manage' right includes the 'Send' and 'Receive' rights.
-A namespace or entity policy can hold up to 12 Shared Access Authorization rules, providing room for three sets of rules, each covering the basic rights and the combination of Send and Listen. This limit underlines that the SAS policy store is not intended to be a user or service account store. If your application needs to grant access to Service Bus based on user or service identities, it should implement a security token service that issues SAS tokens after an authentication and access check.
+A namespace or entity policy can hold up to 12 Shared Access Authorization rules, providing room for three sets of rules, each covering the basic rights and the combination of Send and Listen. This limit underlines that the SAS policy store isn't intended to be a user or service account store. If your application needs to grant access to Service Bus based on user or service identities, it should implement a security token service that issues SAS tokens after an authentication and access check.
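The shape of such a token service can be sketched as: authenticate the caller, check access, and only then mint a short-lived SAS scoped to the one entity the caller needs. Everything below is a hypothetical outline (the helper names and checks are stand-ins, not a prescribed API):

```python
def issue_sas(caller, entity_uri, *, authenticate, authorize, mint_sas):
    """Hypothetical token service: auth and access checks, then a short-lived SAS."""
    if not authenticate(caller):
        raise PermissionError("authentication failed")
    if not authorize(caller, entity_uri):
        raise PermissionError("caller may not access this entity")
    return mint_sas(entity_uri, lifetime_seconds=300)  # keep issued tokens short-lived

# usage with stub checks (illustrative only)
token = issue_sas(
    "device-42",
    "https://contoso.servicebus.windows.net/telemetry",
    authenticate=lambda c: c.startswith("device-"),
    authorize=lambda c, uri: uri.endswith("/telemetry"),
    mint_sas=lambda uri, lifetime_seconds: f"SAS for {uri} ({lifetime_seconds}s)",
)
```

The point of the indirection is that the SAS policy store stays small (well under the 12-rule limit) while per-user decisions live in your own service.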
-An authorization rule is assigned a *Primary Key* and a *Secondary Key*. These are cryptographically strong keys. Don't lose them or leak them - they'll always be available in the [Azure portal][Azure portal]. You can use either of the generated keys, and you can regenerate them at any time. If you regenerate or change a key in the policy, all previously issued tokens based on that key become instantly invalid. However, ongoing connections created based on such tokens will continue to work until the token expires.
+An authorization rule is assigned a *Primary Key* and a *Secondary Key*. These keys are cryptographically strong keys. Don't lose them or leak them - they'll always be available in the [Azure portal][Azure portal]. You can use either of the generated keys, and you can regenerate them at any time. If you regenerate or change a key in the policy, all previously issued tokens based on that key become instantly invalid. However, ongoing connections created based on such tokens will continue to work until the token expires.
-When you create a Service Bus namespace, a policy rule named **RootManageSharedAccessKey** is automatically created for the namespace. This policy has Manage permissions for the entire namespace. It's recommended that you treat this rule like an administrative **root** account and don't use it in your application. You can create additional policy rules in the **Configure** tab for the namespace in the portal, via PowerShell or Azure CLI.
+When you create a Service Bus namespace, a policy rule named **RootManageSharedAccessKey** is automatically created for the namespace. This policy has Manage permissions for the entire namespace. It's recommended that you treat this rule like an administrative **root** account and don't use it in your application. You can create more policy rules in the **Configure** tab for the namespace in the portal, via PowerShell or Azure CLI.
## Best practices when using SAS

When you use shared access signatures in your applications, you need to be aware of two potential risks:
The following recommendations for using shared access signatures can help mitigate these risks:

-- **Have clients automatically renew the SAS if necessary**: Clients should renew the SAS well before expiration, to allow time for retries if the service providing the SAS is unavailable. If your SAS is meant to be used for a small number of immediate, short-lived operations that are expected to be completed within the expiration period, then it may be unnecessary as the SAS is not expected to be renewed. However, if you have client that is routinely making requests via SAS, then the possibility of expiration comes into play. The key consideration is to balance the need for the SAS to be short-lived (as previously stated) with the need to ensure that client is requesting renewal early enough (to avoid disruption due to the SAS expiring prior to a successful renewal).
-- **Be careful with the SAS start time**: If you set the start time for SAS to **now**, then due to clock skew (differences in current time according to different machines), failures may be observed intermittently for the first few minutes. In general, set the start time to be at least 15 minutes in the past. Or, don't set it at all, which will make it valid immediately in all cases. The same generally applies to the expiry time as well. Remember that you may observer up to 15 minutes of clock skew in either direction on any request.
+- **Have clients automatically renew the SAS if necessary**: Clients should renew the SAS well before expiration, to allow time for retries if the service providing the SAS is unavailable. If your SAS is meant to be used for a few immediate, short-lived operations that are expected to be completed within the expiration period, then renewal may be unnecessary as the SAS isn't expected to be renewed. However, if you have a client that routinely makes requests via SAS, then the possibility of expiration comes into play. The key consideration is to balance the need for the SAS to be short-lived (as previously stated) with the need to ensure that the client requests renewal early enough (to avoid disruption due to the SAS expiring prior to a successful renewal).
+- **Be careful with the SAS start time**: If you set the start time for SAS to **now**, then due to clock skew (differences in current time according to different machines), failures may be observed intermittently for the first few minutes. In general, set the start time to be at least 15 minutes in the past. Or, don't set it at all, which will make it valid immediately in all cases. The same generally applies to the expiry time as well. Remember that you may observe up to 15 minutes of clock skew in either direction on any request.
- **Be specific with the resource to be accessed**: A security best practice is to provide users with the minimum required privileges. If a user only needs read access to a single entity, then grant them read access to that single entity, and not read/write/delete access to all entities. It also helps lessen the damage if a SAS is compromised because the SAS has less power in the hands of an attacker.
- **Don't always use SAS**: Sometimes the risks associated with a particular operation against your Service Bus outweigh the benefits of SAS. For such operations, create a middle-tier service that writes to your Service Bus after business rule validation, authentication, and auditing.
- **Always use HTTPS**: Always use HTTPS to create or distribute a SAS. If a SAS is passed over HTTP and intercepted, an attacker performing a man-in-the-middle attack is able to read the SAS and then use it just as the intended user could have, potentially compromising sensitive data or allowing for data corruption by the malicious user.
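The start-time guidance above can be made concrete: backdate the token's start by at least the 15 minutes of skew you may observe, while anchoring the expiry to the intended lifetime. A small sketch (not an official helper):

```python
import time

CLOCK_SKEW_SECONDS = 15 * 60  # up to 15 minutes of skew in either direction

def sas_validity_window(lifetime_seconds, now=None):
    """Return (start, expiry) Unix times with the start backdated for clock skew."""
    now = int(time.time()) if now is None else now
    start = now - CLOCK_SKEW_SECONDS   # or omit the start field entirely
    expiry = now + lifetime_seconds
    return start, expiry

start, expiry = sas_validity_window(3600, now=1_700_000_000)
```

A receiver whose clock runs up to 15 minutes fast still sees the token as already valid.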
A SAS token is valid for all resources prefixed with the `<resourceURI>` used in
## Regenerating keys
-It is recommended that you periodically regenerate the keys used in the Shared Access Authorization Policy. The primary and secondary key slots exist so that you can rotate keys gradually. If your application generally uses the primary key, you can copy the primary key into the secondary key slot, and only then regenerate the primary key. The new primary key value can then be configured into the client applications, which have continued access using the old primary key in the secondary slot. Once all clients are updated, you can regenerate the secondary key to finally retire the old primary key.
+It's recommended that you periodically regenerate the keys used in the Shared Access Authorization Policy. The primary and secondary key slots exist so that you can rotate keys gradually. If your application generally uses the primary key, you can copy the primary key into the secondary key slot, and only then regenerate the primary key. The new primary key value can then be configured into the client applications, which have continued access using the old primary key in the secondary slot. Once all clients are updated, you can regenerate the secondary key to finally retire the old primary key.
If you know or suspect that a key is compromised and you have to revoke the keys, you can regenerate both the primary key and the secondary key of a Shared Access Authorization Policy, replacing them with new keys. This procedure invalidates all tokens signed with the old keys.
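The gradual rotation works because a token verifies against either key slot. A minimal sketch of that dual-slot check (a hypothetical verifier to illustrate the mechanism, not Service Bus internals):

```python
import base64
import hashlib
import hmac

def sign(key, string_to_sign):
    digest = hmac.new(key.encode(), string_to_sign.encode(), hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

def token_is_valid(signature, string_to_sign, primary_key, secondary_key):
    """A token passes if its signature matches either the primary or secondary key."""
    return any(
        hmac.compare_digest(sign(k, string_to_sign), signature)
        for k in (primary_key, secondary_key)
    )

old_primary = "key-v1"
payload = "https%3A%2F%2Fexample%2Fqueue\n1700003600"
sig = sign(old_primary, payload)  # token issued before rotation

# Rotation: copy the old primary into the secondary slot, regenerate the primary.
primary_key, secondary_key = "key-v2", old_primary
still_valid = token_is_valid(sig, payload, primary_key, secondary_key)

# Retiring the old key (regenerating the secondary) invalidates old tokens.
revoked = not token_is_valid(sig, payload, primary_key, "key-v3")
```

Tokens signed with the old primary keep working through the secondary slot until you regenerate it, which is exactly the disruption-free handover the two slots exist for.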
+To regenerate primary and secondary keys in the **Azure portal**, follow these steps:
+
+1. Navigate to the Service Bus namespace in the [Azure portal](https://portal.azure.com).
+2. Select **Shared Access Policies** on the left menu.
+3. Select the policy from the list. In the following example, **RootManageSharedAccessKey** is selected.
+4. On the **SAS Policy: RootManageSharedAccessKey** page, select **...** from the command bar, and then select **Regenerate Primary Keys** or **Regenerate Secondary Keys**.
+
+ :::image type="content" source="./media/service-bus-sas/regenerate-keys.png" alt-text="Screenshot of SAS Policy page with Regenerate options selected.":::
+
+If you are using **Azure PowerShell**, use the [`New-AzServiceBusKey`](/powershell/module/az.servicebus/new-azservicebuskey) cmdlet to regenerate primary and secondary keys for a Service Bus namespace. With PowerShell, you can also specify values for primary and secondary keys that are being generated, by using the `-KeyValue` parameter.
+
+If you are using **Azure CLI**, use the [`az servicebus namespace authorization-rule keys renew`](/cli/azure/servicebus/namespace/authorization-rule/keys#az-servicebus-namespace-authorization-rule-keys-renew) command to regenerate primary and secondary keys for a Service Bus namespace.
+
## Shared Access Signature authentication with Service Bus

The scenario described in the following sections includes configuration of authorization rules, generation of SAS tokens, and client authorization.
To use SAS authorization with Service Bus subscriptions, you can use SAS keys co
## Use the Shared Access Signature (at HTTP level)
-Now that you know how to create Shared Access Signatures for any entities in Service Bus, you are ready to perform an HTTP POST:
+Now that you know how to create Shared Access Signatures for any entities in Service Bus, you're ready to perform an HTTP POST:
```http
POST https://<yournamespace>.servicebus.windows.net/<yourentity>/messages
ContentType: application/atom+xml;type=entry;charset=utf-8
Remember, this works for everything. You can create SAS for a queue, topic, or subscription.
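The same POST can be issued from any HTTP client. A stdlib Python sketch that builds (but doesn't send) the request, with placeholder namespace and entity names:

```python
import urllib.request

def build_send_request(namespace, entity, sas_token, body):
    """Build the HTTP POST shown above; the SAS token goes in the Authorization header."""
    url = f"https://{namespace}.servicebus.windows.net/{entity}/messages"
    request = urllib.request.Request(url, data=body.encode("utf-8"), method="POST")
    request.add_header("Authorization", sas_token)
    request.add_header("Content-Type",
                       "application/atom+xml;type=entry;charset=utf-8")
    return request  # send with urllib.request.urlopen(request)

# hypothetical namespace, queue, and token values
req = build_send_request(
    "contoso", "myqueue",
    "SharedAccessSignature sr=...&sig=...&se=...&skn=...",
    "Hello, Service Bus!")
```

Sending the request is one `urllib.request.urlopen(req)` call; it's left out here so the sketch stays runnable without a live namespace.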
-If you give a sender or client a SAS token, they don't have the key directly, and they cannot reverse the hash to obtain it. As such, you have control over what they can access, and for how long. An important thing to remember is that if you change the primary key in the policy, any Shared Access Signatures created from it are invalidated.
+If you give a sender or client a SAS token, they don't have the key directly, and they can't reverse the hash to obtain it. As such, you have control over what they can access, and for how long. An important thing to remember is that if you change the primary key in the policy, any Shared Access Signatures created from it are invalidated.
## Use the Shared Access Signature (at AMQP level)

In the previous section, you saw how to use the SAS token with an HTTP POST request for sending data to the Service Bus. As you know, you can access Service Bus using the Advanced Message Queuing Protocol (AMQP), which is the preferred protocol to use for performance reasons in many scenarios. The SAS token usage with AMQP is described in the document [AMQP Claim-Based Security Version 1.0](https://www.oasis-open.org/committees/download.php/50506/amqp-cbs-v1%200-wd02%202013-08-12.doc), which has been a working draft since 2013 but is supported by Azure today.
-Before starting to send data to Service Bus, the publisher must send the SAS token inside an AMQP message to a well-defined AMQP node named **$cbs** (you can see it as a "special" queue used by the service to acquire and validate all the SAS tokens). The publisher must specify the **ReplyTo** field inside the AMQP message; this is the node in which the service replies to the publisher with the result of the token validation (a simple request/reply pattern between publisher and service). This reply node is created "on the fly," speaking about "dynamic creation of remote node" as described by the AMQP 1.0 specification. After checking that the SAS token is valid, the publisher can go forward and start to send data to the service.
+Before starting to send data to Service Bus, the publisher must send the SAS token inside an AMQP message to a well-defined AMQP node named **$cbs** (you can see it as a "special" queue used by the service to acquire and validate all the SAS tokens). The publisher must specify the **ReplyTo** field inside the AMQP message; it's the node in which the service replies to the publisher with the result of the token validation (a simple request/reply pattern between publisher and service). This reply node is created "on the fly," speaking about "dynamic creation of remote node" as described by the AMQP 1.0 specification. After checking that the SAS token is valid, the publisher can go forward and start to send data to the service.
-The following steps show how to send the SAS token with AMQP protocol using the [AMQP.NET Lite](https://github.com/Azure/amqpnetlite) library. This is useful if you can't use the official Service Bus SDK (for example on WinRT, .NET Compact Framework, .NET Micro Framework and Mono) developing in C\#. Of course, this library is useful to help understand how claims-based security works at the AMQP level, as you saw how it works at the HTTP level (with an HTTP POST request and the SAS token sent inside the "Authorization" header). If you don't need such deep knowledge about AMQP, you can use the official Service Bus SDK in any of the supported languages like .NET, Java, JavaScript, Python and Go, which will do it for you.
+The following steps show how to send the SAS token with AMQP protocol using the [AMQP.NET Lite](https://github.com/Azure/amqpnetlite) library. It's useful if you can't use the official Service Bus SDK (for example, on WinRT, .NET Compact Framework, .NET Micro Framework and Mono) developing in C#. This library is useful to help understand how claims-based security works at the AMQP level, as you saw how it works at the HTTP level (with an HTTP POST request and the SAS token sent inside the "Authorization" header). If you don't need such deep knowledge about AMQP, you can use the official Service Bus SDK in any of the supported languages like .NET, Java, JavaScript, Python and Go, which will do it for you.
### C#
Next, the publisher creates two AMQP links for sending the SAS token and receivi
The AMQP message contains a set of properties, and more information than a simple message. The SAS token is the body of the message (using its constructor). The **"ReplyTo"** property is set to the node name for receiving the validation result on the receiver link (you can change its name if you want, and it will be created dynamically by the service). The last three application/custom properties are used by the service to indicate what kind of operation it has to execute. As described by the CBS draft specification, they must be the **operation name** ("put-token"), the **type of token** (in this case, a `servicebus.windows.net:sastoken`), and the **"name" of the audience** to which the token applies (the entire entity).
-After sending the SAS token on the sender link, the publisher must read the reply on the receiver link. The reply is a simple AMQP message with an application property named **"status-code"** that can contain the same values as an HTTP status code.
+After the publisher sends the SAS token on the sender link, it must read the reply on the receiver link. The reply is a simple AMQP message with an application property named **"status-code"** that can contain the same values as an HTTP status code.
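Putting the exchange together, the put-token message the publisher sends to **$cbs** carries the token as the body plus the three application properties named above. A schematic sketch (property names follow the CBS draft as quoted in the text; the dict shape and node names are just illustrative):

```python
def build_put_token_message(sas_token, audience, reply_to="cbs-reply-node"):
    """Schematic $cbs request: token in the body, routing and CBS properties set."""
    return {
        "body": sas_token,
        "reply_to": reply_to,  # node where the service returns the status-code reply
        "application_properties": {
            "operation": "put-token",
            "type": "servicebus.windows.net:sastoken",
            "name": audience,  # the entity URI the token applies to
        },
    }

msg = build_put_token_message(
    "SharedAccessSignature sr=...&sig=...&se=...&skn=...",
    "amqps://contoso.servicebus.windows.net/myqueue")  # hypothetical audience
```

With AMQP.NET Lite, the same fields map onto an `Amqp.Message` with its `Properties.ReplyTo` and `ApplicationProperties` set before sending on the `$cbs` link.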
## Rights required for Service Bus operations
spring-cloud Concept App Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/concept-app-status.md
The discovery status of the instance is reported as one of the following values:
|-|-|
| UP | The app instance is registered to Eureka and ready to receive traffic. |
| OUT_OF_SERVICE | The app instance is registered to Eureka and able to receive traffic, but is intentionally shut down for traffic. |
-| DOWN | The app instance isn't registered to Eureka or is registered but not able to receive traffic. |
+| DOWN | The app instance is registered but not able to receive traffic. |
+| UNREGISTERED | The app instance isn't registered to Eureka. |
+| N/A | The app instance is running in a custom container, or service discovery isn't enabled. |
## App registration status
spring-cloud How To Dynatrace One Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-dynatrace-one-agent-monitor.md
The following sections describe how to activate Dynatrace OneAgent.
To activate Dynatrace OneAgent on your Azure Spring Cloud instance, you need to configure four environment variables: `DT_TENANT`, `DT_TENANTTOKEN`, `DT_CONNECTION_POINT`, and `DT_CLUSTER_ID`. For more information, see [Integrate OneAgent with Azure Spring Cloud](https://www.dynatrace.com/support/help/shortlink/azure-spring).
-For applications with multiple instances, Dynatrace has several ways to group them. `DT_CLUSTER_ID` is one of the ways. For more information, see [Customize the structure of process groups](https://www.dynatrace.com/support/help/how-to-use-dynatrace/process-groups/configuration/adapt-the-composition-of-default-process-groups/).
+For applications with multiple instances, Dynatrace has several ways to group them. `DT_CLUSTER_ID` is one of the ways. For more information, see [Process group detection](https://www.dynatrace.com/support/help/how-to-use-dynatrace/process-groups/configuration/pg-detection).
### Add the environment variables to your application
spring-cloud Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/resources.md
As a developer, you might find the following Azure Spring Cloud resources useful
* [Spring Cloud Services for VMware Tanzu Documentation](https://docs.pivotal.io/spring-cloud-services/1-5/common/index.html)
* [Steeltoe](https://steeltoe.io/)
* [Java Spring Cloud website](https://spring.io/)
-* [Spring framework](https://cloud.spring.io/spring-cloud-azure/)
+* [Spring framework](https://spring.io/projects/spring-cloud-azure)
* [Spring on Azure](/azure/developer/java/spring-framework/)
spring-cloud Tutorial Managed Identities Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/tutorial-managed-identities-key-vault.md
az keyvault set-policy --name "<your-keyvault-name>" --object-id ${SERVICE_IDENT
## Build a sample Spring Boot app with Spring Boot starter
-This app will have access to get secrets from Azure Key Vault. Use the starter app: [Azure Key Vault Secrets Spring boot starter](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/spring/azure-spring-boot-starter-keyvault-secrets). Azure Key Vault is added as an instance of Spring **PropertySource**. Secrets stored in Azure Key Vault can be conveniently accessed and used like any externalized configuration property, such as properties in files.
+This app will have access to get secrets from Azure Key Vault. Use the Azure Key Vault Secrets Spring Boot starter. Azure Key Vault is added as an instance of Spring **PropertySource**. Secrets stored in Azure Key Vault can be conveniently accessed and used like any externalized configuration property, such as properties in files.
1. Generate a sample project from start.spring.io with Azure Key Vault Spring Starter.
storage Sas Expiration Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/sas-expiration-policy.md
When a SAS expiration policy is in effect for the storage account, the signed st
When you create a SAS expiration policy on a storage account, the policy applies to each type of SAS that is signed with the account key. The types of shared access signatures that are signed with the account key are the service SAS and the account SAS.
-To configure a SAS expiration policy for a storage account, use the Azure portal, PowerShell, or Azure CLI.
+To configure a SAS expiration policy for a storage account, use the Azure portal, PowerShell, or Azure CLI.
### [Azure portal](#tab/azure-portal)
To create a SAS expiration policy in the Azure portal, follow these steps:
1. Navigate to your storage account in the Azure portal. 1. Under **Settings**, select **Configuration**.
-1. Locate the setting for **Allow recommended upper limit for shared access signature (SAS) expiry interval**, and set it to **Enabled**.
+1. Locate the setting for **Allow recommended upper limit for shared access signature (SAS) expiry interval**, and set it to **Enabled**. You must rotate both access keys at least once before you can set a recommended upper limit for the SAS expiry interval; otherwise, the option is disabled.
1. Specify the recommended interval for any new shared access signatures that are created on resources in this storage account. :::image type="content" source="media/sas-expiration-policy/configure-sas-expiration-policy-portal.png" alt-text="Screenshot showing how to configure a SAS expiration policy in the Azure portal":::
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
The following table describes key parameters for each redundancy option:
| Availability for write requests | At least 99.9% (99% for Cool or Archive access tiers) | At least 99.9% (99% for Cool or Archive access tiers) | At least 99.9% (99% for Cool or Archive access tiers) | At least 99.9% (99% for Cool or Archive access tiers) |
| Number of copies of data maintained on separate nodes | Three copies within a single region | Three copies across separate availability zones within a single region | Six copies total, including three in the primary region and three in the secondary region | Six copies total, including three across separate availability zones in the primary region and three locally redundant copies in the secondary region |
-For more information, see the [SLA for Storage Accounts](/support/legal/sla/storage/v1_5/).
+For more information, see the [SLA for Storage Accounts](https://azure.microsoft.com/support/legal/sla/storage/v1_5/).
### Durability and availability by outage scenario
storage Authorize Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/authorize-managed-identity.md
Previously updated : 10/11/2021 Last updated : 04/15/2022 -+ ms.devlang: csharp
Azure Table Storage supports Azure Active Directory (Azure AD) authentication wi
This article shows how to authorize access to table data from an Azure VM using managed identities for Azure Resources.
-> [!IMPORTANT]
-> Authorization with Azure AD for tables is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
## Enable managed identities on a VM

Before you can use managed identities for Azure Resources to authorize access to tables from your VM, you must first enable managed identities for Azure Resources on the VM. To learn how to enable managed identities for Azure Resources, see one of these articles:
public static void CreateTable(string accountName, string tableName)
## Next steps

- [Assign an Azure role for access to table data](assign-azure-role-data-access.md)
-- [Authorize access to tables using Azure Active Directory](authorize-access-azure-active-directory.md)
+- [Authorize access to tables using Azure Active Directory](authorize-access-azure-active-directory.md)
stream-analytics Blob Output Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/blob-output-managed-identity.md
Last updated 07/07/2021+
# Use Managed Identity to authenticate your Azure Stream Analytics job to Azure Blob Storage
Unless you need the job to create containers on your behalf, you should choose *
1. Navigate to the container's configuration pane within your storage account.
-2. Select **Access Control (IAM)** on the left-hand side.
+1. Select **Access control (IAM)**.
-3. Under the "Add a role assignment" section click **Add**.
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-4. In the role assignment pane:
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
- 1. Set the **Role** to "Storage Blob Data Contributor"
- 2. Ensure the **Assign access to** dropdown is set to "Azure AD user, group, or service principal".
- 3. Type the name of your Stream Analytics job in the search field.
- 4. Select your Stream Analytics job and click **Save**.
+ | Setting | Value |
+ | -- | -- |
+ | Role | Storage Blob Data Contributor |
+ | Assign access to | User, group, or service principal |
+ | Members | \<Name of your Stream Analytics job> |
- ![Grant container access](./media/stream-analytics-managed-identities-blob-output-preview/stream-analytics-container-access-portal.png)
+ ![Screenshot that shows Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
#### Account level access 1. Navigate to your storage account.
-2. Select **Access Control (IAM)** on the left-hand side.
+1. Select **Access control (IAM)**.
-3. Under the "Add a role assignment" section click **Add**.
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-4. In the role assignment pane:
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
- 1. Set the **Role** to "Storage Blob Data Contributor"
- 2. Ensure the **Assign access to** dropdown is set to "Azure AD user, group, or service principal".
- 3. Type the name of your Stream Analytics job in the search field.
- 4. Select your Stream Analytics job and click **Save**.
+ | Setting | Value |
+ | -- | -- |
+ | Role | Storage Blob Data Contributor |
+ | Assign access to | User, group, or service principal |
+ | Members | \<Name of your Stream Analytics job> |
- ![Grant account access](./media/stream-analytics-managed-identities-blob-output-preview/stream-analytics-account-access-portal.png)
+ ![Screenshot that shows Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
### Grant access via the command line
stream-analytics Copy Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/copy-job.md
Last updated 09/11/2019
# Copy, back up and move your Azure Stream Analytics jobs between regions
-You can copy or back up your deployed Azure Stream Analytics jobs using Visual Studio Code or Visual Studio. Copying a job to another region does not copy the last output time. Therefore, you cannot use [**When last stopped**](./start-job.md#start-options) option when starting the copied job.
+When you want to move, copy, or back up your deployed Azure Stream Analytics jobs, you can use the job export function in the Stream Analytics extension for Visual Studio Code or Visual Studio. Export your job's definition to a local file, then back it up there or resubmit it to another region.
+
+> [!NOTE]
+> * We strongly recommend using [**Stream Analytics tools for Visual Studio Code**](./quick-create-visual-studio-code.md) for the best local development experience. There are known feature gaps in Stream Analytics tools for Visual Studio 2019 (version 2.6.3000.0), and it won't be improved going forward.
+> * Copying a job to another region does not copy the last output time. Therefore, you cannot use [**When last stopped**](./start-job.md#start-options) option when starting the copied job.
## Before you begin

* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
stream-analytics Event Hubs Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/event-hubs-managed-identity.md
Last updated 07/07/2021+
# Use managed identities to access Event Hub from an Azure Stream Analytics job
First, you create a managed identity for your Azure Stream Analytics job.
For the Stream Analytics job to access your Event Hub using managed identity, the service principal you created must have special permissions to the Event Hub.
-1. Go to **Access Control (IAM)** in your Event Hub.
+1. Select **Access control (IAM)**.
-1. Select **+ Add** and **Add role assignment**.
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-1. On the **Add role assignment** page, enter the following options:
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
- |Parameter|Value|
- ||--|
- |Role|Azure Event Hubs Data Owner|
- |Assign access to|User, group, or service principal|
- |Select|Enter the name of your Stream Analytics job|
+ | Setting | Value |
+ | -- | -- |
+ | Role | Azure Event Hubs Data Owner |
+ | Assign access to | User, group, or service principal |
+ | Members | \<Name of your Stream Analytics job> |
- :::image type="content" source="media/event-hubs-managed-identity/add-role-assignment.png" alt-text="Add role assignment":::
-
-1. Select **Save** and wait a minute or so for changes to propagate.
+ ![Screenshot that shows Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
You can also grant this role at the Event Hub Namespace level, which will naturally propagate the permissions to all Event Hubs created under it. That is, all Event Hubs under a Namespace can be used as a managed-identity-authenticating resource in your Stream Analytics job.
synapse-analytics Microsoft Spark Utilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/microsoft-spark-utilities.md
For example:
mssparkutils.notebook.run("folder/Sample1", 90, {"input": 20 })
```
+After the run finishes, you'll see a snapshot link named '**View notebook run: *Notebook Name***' in the cell output. Select the link to see the snapshot for this specific run.
+
+![Screenshot of a snap link python](./media/microsoft-spark-utilities/spark-utilities-run-notebook-snap-link-sample-python.png)
+
### Exit a notebook

Exits a notebook with a value. You can run nested function calls in a notebook interactively or in a pipeline.
For example:
mssparkutils.notebook.run("folder/Sample1", 90, Map("input" -> 20))
```
+After the run finishes, you'll see a snapshot link named '**View notebook run: *Notebook Name***' in the cell output. Select the link to see the snapshot for this specific run.
+
+![Screenshot of a snap link scala](./media/microsoft-spark-utilities/spark-utilities-run-notebook-snap-link-sample.png)
+
+
### Exit a notebook

Exits a notebook with a value. You can run nested function calls in a notebook interactively or in a pipeline.
virtual-machines Disks Shared https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared.md
Azure shared disks are supported on:
- [Ubuntu 18.04 and above](https://discourse.ubuntu.com/t/ubuntu-high-availability-corosync-pacemaker-shared-disk-environments/14874)
- [RHEL 8.3 and above](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/deploying_red_hat_enterprise_linux_8_on_public_cloud_platforms/index?lb_target=production#azure-configuring-shared-block-storage-configuring-rhel-high-availability-on-azure)
- It may be possible to use RHEL 7 or an older version of RHEL 8 with shared disks, contact SharedDiskFeedback@microsoft.com
-- [Oracle Enterprise Linux](https://docs.oracle.com/en/operating-systems/oracle-linux/8/availability/hacluster-1.html)
+- [Oracle Enterprise Linux](https://docs.oracle.com/en/operating-systems/oracle-linux/8/availability/)
Linux clusters can use cluster managers such as [Pacemaker](https://wiki.clusterlabs.org/wiki/Pacemaker). Pacemaker builds on [Corosync](http://corosync.github.io/corosync/), enabling cluster communications for applications deployed in highly available environments. Some common clustered filesystems include [ocfs2](https://oss.oracle.com/projects/ocfs2/) and [gfs2](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/global_file_system_2/ch-overview-gfs2). You can use SCSI Persistent Reservation (SCSI PR) and/or STONITH Block Device (SBD) based clustering models for arbitrating access to the disk. When using SCSI PR, you can manipulate reservations and registrations using utilities such as [fence_scsi](http://manpages.ubuntu.com/manpages/eoan/man8/fence_scsi.8.html) and [sg_persist](https://linux.die.net/man/8/sg_persist).
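The SCSI-PR arbitration idea can be illustrated with a toy Python model. This is a deliberate simplification for intuition only: real reservations are enforced by the storage stack and managed with tools like `sg_persist` or `fence_scsi`, and the model below shows only the Write Exclusive reservation type, where a single reservation holder may write.

```python
class SharedDisk:
    """Toy model of SCSI Persistent Reservation arbitration (illustration only)."""
    def __init__(self):
        self.registered = set()   # nodes that have registered a key
        self.holder = None        # node currently holding the reservation

    def register(self, node):
        # every cluster node registers a key before competing for the disk
        self.registered.add(node)

    def reserve(self, node):
        # only a registered node may take the reservation, and only
        # if no other node already holds it
        if node in self.registered and self.holder is None:
            self.holder = node
            return True
        return False

    def write(self, node):
        # under Write Exclusive, only the reservation holder may write
        return node == self.holder

disk = SharedDisk()
disk.register("node1")
disk.register("node2")
disk.reserve("node1")
print(disk.write("node1"), disk.write("node2"))  # True False
```

In a real cluster, the cluster manager (for example Pacemaker with `fence_scsi`) performs the equivalent of `reserve` during failover, which is what prevents a partitioned node from corrupting the shared filesystem.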
virtual-machines Diagnostics Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux.md
Supported distributions and versions:
- OpenSUSE 13.1+ - SUSE Linux Enterprise Server 12 - Debian 9, 8, 7-- Red Hat Enterprise Linux (RHEL) 8, 7, 6.7+
+- Red Hat Enterprise Linux (RHEL) 7, 6.7+
### Prerequisites
virtual-machines Tutorial Manage Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/tutorial-manage-vm.md
Previously updated : 06/06/2019 Last updated : 03/29/2022
New-AzVm `
-AsJob ```
-The `-AsJob` parameter creates the VM as a background task, so the PowerShell prompts return to you. You can view details of background jobs with the `Get-Job` cmdlet.
+The `-AsJob` parameter creates the VM as a background task, so the PowerShell prompt returns to you. You can view details of background jobs with the [Get-Job](/powershell/module/microsoft.powershell.core/get-job) cmdlet.
## Understand VM sizes
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to network security rules.
-You can use service tags to define network access controls on [network security groups](./network-security-groups-overview.md#security-rules), [Azure Firewall](../firewall/service-tags.md), and [user-defined routes](./virtual-networks-udr-overview.md#service-tags-for-user-defined-routes). Use service tags in place of specific IP addresses when you create security rules and routes. By specifying the service tag name, such as **ApiManagement**, in the appropriate *source* or *destination* field of a security rule, you can allow or deny the traffic for the corresponding service. By specifying the service tag name in the address prefix of a route, you can route traffic intended for any of the prefixes encapsulated by the service tag to a desired next hop type.
+You can use service tags to define network access controls on [network security groups](./network-security-groups-overview.md#security-rules), [Azure Firewall](../firewall/service-tags.md), and user-defined routes. Use service tags in place of specific IP addresses when you create security rules and routes. By specifying the service tag name, such as **ApiManagement**, in the appropriate *source* or *destination* field of a security rule, you can allow or deny the traffic for the corresponding service. By specifying the service tag name in the address prefix of a route, you can route traffic intended for any of the prefixes encapsulated by the service tag to a desired next hop type.
+ > [!NOTE] > As of March 2022, using service tags in place of explicit address prefixes in [user-defined routes](./virtual-networks-udr-overview.md#user-defined) is out of preview and generally available.
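Conceptually, a rule that names a service tag resolves to the tag's current prefix list at evaluation time. The sketch below models that resolution; the `SERVICE_TAGS` mapping and its prefixes are hypothetical sample data, not the real published prefixes for **ApiManagement**.

```python
import ipaddress

# hypothetical snapshot of a service tag's prefixes (illustration only)
SERVICE_TAGS = {"ApiManagement": ["13.69.64.76/31", "13.69.66.144/28"]}

def rule_matches(rule_destination, packet_ip):
    """Match a packet against a rule whose destination is either a
    service tag name or an explicit CIDR prefix."""
    # a service tag expands to its prefix list; anything else is
    # treated as a literal prefix
    prefixes = SERVICE_TAGS.get(rule_destination, [rule_destination])
    ip = ipaddress.ip_address(packet_ip)
    return any(ip in ipaddress.ip_network(p) for p in prefixes)

print(rule_matches("ApiManagement", "13.69.64.77"))
```

Because Microsoft updates the tag's prefix list as service addresses change, the rule itself never needs editing; only the resolved prefix set changes.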
virtual-network Virtual Networks Udr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-udr-overview.md
You cannot specify **VNet peering** or **VirtualNetworkServiceEndpoint** as the
### Service Tags for user-defined routes
-You can now specify a [Service Tag](service-tags-overview.md) as the address prefix for a user-defined route instead of an explicit IP range. A Service Tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to user-defined routes and reducing the number of routes you need to create. You can currently create 25 or less routes with Service Tags in each route table. </br>
+You can now specify a [service tag](service-tags-overview.md) as the address prefix for a user-defined route instead of an explicit IP range. A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to user-defined routes and reducing the number of routes you need to create. You can currently create 25 or fewer routes with service tags in each route table. With this release, using service tags in routing scenarios for containers is also supported. </br>
#### Exact Match When there is an exact prefix match between a route with an explicit IP prefix and a route with a Service Tag, preference is given to the route with the explicit prefix. When multiple routes with Service Tags have matching IP prefixes, routes will be evaluated in the following order:
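The exact-match preference can be sketched as a small selection function. This is a simplification: routes are represented as dictionaries with an `explicit` flag, and the further ordering among competing service-tag routes mentioned above is omitted.

```python
def pick_route(routes, prefix):
    """Among routes whose address prefix exactly matches `prefix`,
    prefer one defined with an explicit IP prefix over one defined
    with a service tag."""
    matches = [r for r in routes if r["prefix"] == prefix]
    if not matches:
        return None
    # explicit-prefix routes sort ahead of service-tag routes
    matches.sort(key=lambda r: 0 if r["explicit"] else 1)
    return matches[0]

routes = [
    {"name": "tag-route", "prefix": "13.66.60.119/32", "explicit": False},
    {"name": "explicit-route", "prefix": "13.66.60.119/32", "explicit": True},
]
print(pick_route(routes, "13.66.60.119/32")["name"])  # explicit-route
```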
virtual-wan About Nva Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-nva-hub.md
Deploying Network Virtual Appliances into the Virtual WAN Hub allows customers t
* **Platform-provided lifecycle management**: Upgrades and patches are a part of the Azure Virtual WAN service. This takes away the complexity of lifecycle management from a customer deploying Virtual Appliance solutions. * **Integrated with platform features**: Transit connectivity with Microsoft gateways and Virtual Networks, Encrypted ExpressRoute (SD-WAN overlay running over an ExpressRoute circuit) and Virtual Hub route tables interact seamlessly.
+> [!IMPORTANT]
+> To ensure you get the best support for this integrated solution, please make sure you have similar levels of support entitlement with both Microsoft and your Network Virtual Appliance provider.
## <a name ="partner"></a> Partners
virtual-wan Global Hub Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/global-hub-profile.md
The global profile associated with a User VPN configuration points to a load bal
For example, you can associate a VPN configuration with two Virtual WAN hubs, one in West US and one in Southeast Asia. If a user connects to the global profile associated with the User VPN configuration, they'll connect to the closest Virtual WAN hub based on their location. > [!IMPORTANT]
-> If a Point-to-site VPN configuration used for a global profile is configured to authenticate users using the RADIUS protocol, make sure "Use Remote/On-premises RADIUS server" is turned on for all Point-to-site VPN Gateways using that configuration. Additionally, ensure your RADIUS server is configured to accept authentication requests from theRADIUS proxy IP addresses of **all** Point-to-site VPN Gateways using this VPN configuration.
+> If a Point-to-site VPN configuration used for a global profile is configured to authenticate users using the RADIUS protocol, make sure "Use Remote/On-premises RADIUS server" is turned on for all Point-to-site VPN Gateways using that configuration. Additionally, ensure your RADIUS server is configured to accept authentication requests from the RADIUS proxy IP addresses of **all** Point-to-site VPN Gateways using this VPN configuration.
To download the global profile:
virtual-wan Howto Connect Vnet Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-connect-vnet-hub.md
This article helps you connect your virtual network to your virtual hub. Repeat these steps for each VNet that you want to connect. > [!NOTE]
-> A virtual network can only be connected to one virtual hub at a time.
->
+> 1. A virtual network can only be connected to one virtual hub at a time.
+> 2. To connect a virtual network to a virtual hub, the remote virtual network must not have a gateway.
## Add a connection
vpn-gateway Vpn Gateway Troubleshoot Site To Site Cannot Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-troubleshoot-site-to-site-cannot-connect.md
If the Internet-facing IP address of the VPN device is included in the **Local n
`https://<YourVirtualNetworkGatewayIP>:8081/healthprobe`
+> [!NOTE]
+> For active-active gateways, use the following URL to check the second public IP: `https://<YourVirtualNetworkGatewayIP2>:8083/healthprobe`
+ 2. Click through the certificate warning. 3. If you receive a response, the VPN gateway is considered healthy. If you don't receive a response, the gateway might not be healthy or an NSG on the gateway subnet is causing the problem. The following text is a sample response:
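Building the probe URLs for both gateway instances can be scripted. The helper below only constructs the URLs from the port convention stated above (8081 for the first public IP, 8083 for the second on active-active gateways); it does not perform the HTTPS request, and the IPs in the example are documentation placeholders.

```python
def health_probe_urls(ip1, ip2=None):
    """Return the health-probe URL(s) for a VPN gateway.

    Port 8081 serves the first instance; on active-active gateways
    the second public IP answers on port 8083.
    """
    urls = [f"https://{ip1}:8081/healthprobe"]
    if ip2:
        urls.append(f"https://{ip2}:8083/healthprobe")
    return urls

print(health_probe_urls("203.0.113.10", "203.0.113.11"))
```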
The perfect forward secrecy feature can cause disconnection problems. If the VPN
## Next steps - [Configure a site-to-site connection to a virtual network](./tutorial-site-to-site-portal.md)-- [Configure an IPsec/IKE policy for site-to-site VPN connections](vpn-gateway-ipsecikepolicy-rm-powershell.md)
+- [Configure an IPsec/IKE policy for site-to-site VPN connections](vpn-gateway-ipsecikepolicy-rm-powershell.md)