Updates from: 07/22/2021 03:06:13
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Integer Transformations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/integer-transformations.md
Previously updated : 09/10/2018 Last updated : 07/21/2021
This article provides examples for using the integer claims transformations of the Identity Experience Framework schema in Azure Active Directory B2C (Azure AD B2C). For more information, see [ClaimsTransformations](claimstransformations.md).
+## AdjustNumber
+
+Increases or decreases a numeric claim and returns a new claim.
+
+| Item | TransformationClaimType | Data Type | Notes |
+| ---- | ----------------------- | --------- | ----- |
+| InputClaim | inputClaim | int | The claim type, which contains the number to increase or decrease. If the `inputClaim` claim value is null, the default of 0 is used. |
+| InputParameter | Operator | string | Possible values: `INCREMENT` (default), or `DECREMENT`.|
+| OutputClaim | outputClaim | int | The claim type that is produced after this claims transformation has been invoked. |
+
+Use this claim transformation to increase or decrease a numeric claim value.
+
+```xml
+<ClaimsTransformation Id="UpdateSteps" TransformationMethod="AdjustNumber">
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="steps" TransformationClaimType="inputClaim" />
+ </InputClaims>
+ <InputParameters>
+ <InputParameter Id="Operator" DataType="string" Value="INCREMENT" />
+ </InputParameters>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="steps" TransformationClaimType="outputClaim" />
+ </OutputClaims>
+</ClaimsTransformation>
+```
+
+### Example 1
+
+- Input claims:
+ - **inputClaim**: 1
+- Input parameters:
+ - **Operator**: INCREMENT
+- Output claims:
+ - **outputClaim**: 2
+
+### Example 2
+
+- Input claims:
+ - **inputClaim**: NULL
+- Input parameters:
+ - **Operator**: INCREMENT
+- Output claims:
+ - **outputClaim**: 1
+
+## AssertNumber
+
+Determines whether a numeric claim is greater than, less than, equal to, or not equal to a number.
+
+| Item | TransformationClaimType | Data Type | Notes |
+| ---- | ----------------------- | --------- | ----- |
+| InputClaim | inputClaim | int | The numeric claim to compare against `CompareToValue`. A null value throws an exception. |
+| InputParameter | CompareToValue | int | The number that the `inputClaim` claim is compared against. |
+| InputParameter | Operator | string | Possible values: `LESSTHAN`, `GREATERTHAN`, `GREATERTHANOREQUAL`, `LESSTHANOREQUAL`, `EQUAL`, `NOTEQUAL`. |
+| InputParameter | throwError | boolean | Specifies whether this assertion should throw an error if the comparison result is `true`. Possible values: `true` (default), or `false`. <br />&nbsp;<br />When set to `true` (Assertion mode), and the comparison result is `true`, an exception will be thrown. When set to `false` (Evaluation mode), the result is a new boolean claim type with a value of `true`, or `false`.|
+| OutputClaim | outputClaim | boolean | If `throwError` is set to `false`, this output claim contains `true` or `false`, according to the comparison result. |
+
+### Assertion mode
+
+When the `throwError` input parameter is `true` (the default), the **AssertNumber** claims transformation is always executed from a [validation technical profile](validation-technical-profile.md) that is called by a [self-asserted technical profile](self-asserted-technical-profile.md).
+
+The **AssertNumberError** self-asserted technical profile metadata controls the error message that the technical profile presents to the user. The error messages can be [localized](localization-string-ids.md#claims-transformations-error-messages).
+
+```xml
+<TechnicalProfile Id="SelfAsserted-LocalAccountSignin-Email">
+ <Metadata>
+ <Item Key="AssertNumberError">You've reached the maximum logon attempts</Item>
+ </Metadata>
+ ...
+</TechnicalProfile>
+```
+
+For more information about how to call claims transformations in assertion mode, see the [AssertStringClaimsAreEqual](string-transformations.md#assertstringclaimsareequal), [AssertBooleanClaimIsEqualToValue](boolean-transformations.md#assertbooleanclaimisequaltovalue), and [AssertDateTimeIsGreaterThan](date-transformations.md#assertdatetimeisgreaterthan) claims transformations.
+
+### Assertion mode example
+
+The following example asserts the number of attempts is over five. The claims transformation throws an error according to the comparison result.
+
+```xml
+<ClaimsTransformation Id="isOverLimit" TransformationMethod="AssertNumber">
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="attempts" TransformationClaimType="inputClaim" />
+ </InputClaims>
+ <InputParameters>
+ <InputParameter Id="Operator" DataType="string" Value="GREATERTHAN" />
+ <InputParameter Id="CompareToValue" DataType="int" Value="5" />
+ <InputParameter Id="throwError" DataType="boolean" Value="true" />
+ </InputParameters>
+</ClaimsTransformation>
+```
+
+- Input claims:
+ - **inputClaim**: 10
+- Input parameters:
+ - **Operator**: GREATERTHAN
+ - **CompareToValue**: 5
+ - **throwError**: true
+- Result: Error thrown
+
+### Evaluation mode example
+
+The following example evaluates whether the number of attempts is over five. The output claim contains a boolean value according to the comparison result. The claims transformation will not throw an error.
+
+```xml
+<ClaimsTransformation Id="isOverLimit" TransformationMethod="AssertNumber">
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="attempts" TransformationClaimType="inputClaim" />
+ </InputClaims>
+ <InputParameters>
+ <InputParameter Id="Operator" DataType="string" Value="GREATERTHAN" />
+ <InputParameter Id="CompareToValue" DataType="int" Value="5" />
+ <InputParameter Id="throwError" DataType="boolean" Value="false" />
+ </InputParameters>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="attemptsCountExceeded" TransformationClaimType="outputClaim" />
+ </OutputClaims>
+</ClaimsTransformation>
+```
+
+- Input claims:
+ - **inputClaim**: 10
+- Input parameters:
+ - **Operator**: GREATERTHAN
+ - **CompareToValue**: 5
+ - **throwError**: false
+- Output claims:
+ - **outputClaim**: true
+
## ConvertNumberToStringClaim

Converts a long data type into a string data type.

| Item | TransformationClaimType | Data Type | Notes |
| ---- | ----------------------- | --------- | ----- |
-| InputClaim | inputClaim | long | The ClaimType to convert to a string. |
-| OutputClaim | outputClaim | string | The ClaimType that is produced after this ClaimsTransformation has been invoked. |
+| InputClaim | inputClaim | long | The claim type to convert to a string. |
+| OutputClaim | outputClaim | string | The claim type that is produced after this claims transformation has been invoked. |
In this example, the `numericUserId` claim with a value type of long is converted to a `UserId` claim with a value type of string.
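
A minimal sketch of what this transformation could look like, following the same `ClaimsTransformation` pattern shown in the earlier examples (the `Id` value is illustrative):

```xml
<ClaimsTransformation Id="CreateUserId" TransformationMethod="ConvertNumberToStringClaim">
  <InputClaims>
    <!-- numericUserId holds a long value -->
    <InputClaim ClaimTypeReferenceId="numericUserId" TransformationClaimType="inputClaim" />
  </InputClaims>
  <OutputClaims>
    <!-- UserId receives the string representation of the input value -->
    <OutputClaim ClaimTypeReferenceId="UserId" TransformationClaimType="outputClaim" />
  </OutputClaims>
</ClaimsTransformation>
```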
active-directory-domain-services Secure Your Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/secure-your-domain.md
Previously updated : 06/22/2021 Last updated : 07/21/2021
To complete this article, you need the following resources:
1. On the left-hand side, select **Security settings**.
1. Click **Enable** or **Disable** for the following settings:
    - **TLS 1.2 only mode**
- - **NTLM authentication****
+ - **NTLM authentication**
    - **Password synchronization from on-premises**
    - **NTLM password synchronization from on-premises**
    - **RC4 encryption**
In addition to **Security settings**, Microsoft Azure Policy has a **Compliance*
![Screenshot of Compliance settings](media/secure-your-domain/policy-tls.png)
+## Audit NTLM failures
+
+While disabling NTLM password synchronization will improve security, many applications and services are not designed to work without it. For example, connecting to any resource by its IP address, such as DNS Server management or RDP, will fail with Access Denied. If you disable NTLM password synchronization and your application or service isn't working as expected, you can check for NTLM authentication failures by enabling security auditing for the **Logon/Logoff** > **Audit Logon** event category, where NTLM is specified as the **Authentication Package** in the event details. For more information, see [Enable security audits for Azure Active Directory Domain Services](security-audit-events.md).
+
## Use PowerShell to harden your domain

If needed, [install and configure Azure PowerShell](/powershell/azure/install-az-ps). Make sure that you sign in to your Azure subscription using the [Connect-AzAccount][Connect-AzAccount] cmdlet.
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/known-issues.md
Title: Known issues for Application Provisioning in Azure Active Directory
-description: Learn about known issues when working with automated Application Provisioning in Azure Active Directory.
+ Title: Known issues for application provisioning in Azure Active Directory
+description: Learn about known issues when you work with automated application provisioning in Azure Active Directory.
Last updated 07/07/2021
-# Known issues for Application Provisioning in Azure Active Directory
-Known issues to be aware of when working with app provisioning. You can provide feedback about the application provisioning service on UserVoice, see [Azure AD Application Provision UserVoice](https://aka.ms/appprovisioningfeaturerequest). We closely watch UserVoice so we can improve the service.
+# Known issues for application provisioning in Azure Active Directory
+This article discusses known issues to be aware of when you work with app provisioning. To provide feedback about the application provisioning service on UserVoice, see [Azure Active Directory (Azure AD) application provision UserVoice](https://aka.ms/appprovisioningfeaturerequest). We watch UserVoice closely so that we can improve the service.
> [!NOTE]
-> This isn't a comprehensive list of known issues. If you know of an issue that is not listed, provide feedback at the bottom of the page.
+> This article isn't a comprehensive list of known issues. If you know of an issue that isn't listed, provide feedback at the bottom of the page.
## Authorization
-**Unable to save**
+#### Unable to save
-The tenant URL, secret token, and notification email must be filled in to save. You can't provide just one of them.
+The tenant URL, secret token, and notification email must be filled in to save. You can't provide only one of them.
-**Unable to change provisioning mode back to manual**
-
-After you have configured provisioning for the first time, you'll notice that the provisioning mode has switched from manual to automatic. You can't change it back to manual. But you can turn off provisioning through the UI. Turning off provisioning in the UI effectively does the same as setting the dropdown to manual.
+#### Unable to change provisioning mode back to manual
+After you've configured provisioning for the first time, you'll notice that the provisioning mode has switched from manual to automatic. You can't change it back to manual. But you can turn off provisioning through the UI. Turning off provisioning in the UI effectively does the same as setting the dropdown to manual.
## Attribute mappings
-**Attribute SamAccountName or userType not available as a source attribute**
+#### Attribute SamAccountName or userType not available as a source attribute
-The attributes SamAccountName and userType aren't available as a source attribute by default. Extend your schema to add the attribute. You can add the attributes to the list of available source attributes by extending your schema. To learn more, see [Missing source attribute](user-provisioning-sync-attributes-for-mapping.md).
+The attributes **SamAccountName** and **userType** aren't available as source attributes by default. You can add them to the list of available source attributes by extending your schema. To learn more, see [Missing source attribute](user-provisioning-sync-attributes-for-mapping.md).
-**Source attribute dropdown missing for schema extension**
+#### Source attribute dropdown missing for schema extension
Extensions to your schema can sometimes be missing from the source attribute dropdown in the UI. Go into the advanced settings of your attribute mappings and manually add the attributes. To learn more, see [Customize attribute mappings](customize-application-attributes.md).
-**Null attribute can't be provisioned**
+#### Null attribute can't be provisioned
Azure AD currently can't provision null attributes. If an attribute is null on the user object, it will be skipped.
-**Max characters for attribute-mapping expressions**
+#### Maximum characters for attribute-mapping expressions
Attribute-mapping expressions can have a maximum of 10,000 characters.
-**Unsupported scoping filters**
+#### Unsupported scoping filters
-Directory extensions, appRoleAssignments, userType, and accountExpires are not supported as scoping filters.
+Directory extensions and the **appRoleAssignments**, **userType**, and **accountExpires** attributes aren't supported as scoping filters.
-**Multi-value directory extensions**
+#### Multivalue directory extensions
-Multi-value directory extensions cannot be used in attribute mappings or scoping filters.
+Multivalue directory extensions can't be used in attribute mappings or scoping filters.
## Service issues
-**Unsupported scenarios**
+#### Unsupported scenarios
- Provisioning passwords isn't supported.
- Provisioning nested groups isn't supported.
- Provisioning to B2C tenants isn't supported because of the size of the tenants.
-- Not all provisioning apps are available in all clouds. For example, Atlassian is not yet available in the Government Cloud. We are working with app developers to onboard their apps to all clouds.
+- Not all provisioning apps are available in all clouds. For example, Atlassian isn't yet available in the Government cloud. We're working with app developers to onboard their apps to all clouds.
-**Automatic provisioning is not available on my OIDC based application**
+#### Automatic provisioning isn't available on my OIDC-based application
-If you create an app registration, the corresponding service principal in enterprise apps will not be enabled for automatic user provisioning. You will need to either request the app be added to the gallery, if intended for use by multiple organizations, or create a second non-gallery app for provisioning.
+If you create an app registration, the corresponding service principal in enterprise apps won't be enabled for automatic user provisioning. You'll need to either request the app be added to the gallery, if intended for use by multiple organizations, or create a second non-gallery app for provisioning.
-**The provisioning interval is fixed**
+#### The provisioning interval is fixed
The [time](./application-provisioning-when-will-provisioning-finish-specific-user.md#how-long-will-it-take-to-provision-users) between provisioning cycles is currently not configurable.
-**Changes not moving from target app to Azure AD**
+#### Changes not moving from target app to Azure AD
The app provisioning service isn't aware of changes made in external apps. So, no action is taken to roll back. The app provisioning service relies on changes made in Azure AD.
-**Switching from sync all to sync assigned not working**
+#### Switching from Sync All to Sync Assigned not working
-After changing scope from 'Sync All' to 'Sync Assigned', please make sure to also perform a restart to ensure that the change takes effect. You can do the restart from the UI.
+After you change scope from **Sync All** to **Sync Assigned**, make sure to also perform a restart to ensure that the change takes effect. You can do the restart from the UI.
-**Provisioning cycle continues until completion**
+#### Provisioning cycle continues until completion
-When setting provisioning `enabled = off`, or hitting stop, the current provisioning cycle will continue running until completion. The service will stop executing any future cycles until you turn provisioning on again.
+When you set provisioning to `enabled = off` or select **Stop**, the current provisioning cycle continues running until completion. The service stops executing any future cycles until you turn provisioning on again.
-**Member of group not provisioned**
+#### Member of group not provisioned
-When a group is in scope and a member is out of scope, the group will be provisioned. The out of scope user won't be provisioned. If the member comes back into scope, the service won't immediately detect the change. Restarting provisioning will address the issue. We recommend periodically restarting the service to ensure that all users are properly provisioned.
+When a group is in scope and a member is out of scope, the group will be provisioned. The out-of-scope user won't be provisioned. If the member comes back into scope, the service won't immediately detect the change. Restarting provisioning addresses the issue. Periodically restart the service to ensure that all users are properly provisioned.
-**Manager is not provisioned**
+#### Manager isn't provisioned
-If a user and their manager are both in scope for provisioning, the service will provision the user and then update the manager. However if on day one the user is in scope and the manager is out of scope, we will provision the user without the manager reference. When the manager comes into scope, the manager reference will not be updated until you restart provisioning and cause the service to re evaluate all the users again.
+If a user and their manager are both in scope for provisioning, the service provisions the user and then updates the manager. If on day one the user is in scope and the manager is out of scope, we'll provision the user without the manager reference. When the manager comes into scope, the manager reference won't be updated until you restart provisioning and cause the service to reevaluate all the users again.
## On-premises application provisioning
-The following information is a current list of known limitations with the Azure AD ECMA Connector Host and on-prem application provisioning.
+The following information is a current list of known limitations with the Azure AD ECMA Connector Host and on-premises application provisioning.
### Application and directories
-The following applications and directories are not yet supported.
+The following applications and directories aren't yet supported.
+
+#### Active Directory Domain Services (user or group writeback from Azure AD by using the on-premises provisioning preview)
+ - When a user is managed by Azure AD Connect, the source of authority is on-premises Active Directory. So, user attributes can't be changed in Azure AD. This preview doesn't change the source of authority for users managed by Azure AD Connect.
+ - Attempting to use Azure AD Connect and the on-premises provisioning to provision groups or users into Active Directory Domain Services can lead to creation of a loop, where Azure AD Connect can overwrite a change that was made by the provisioning service in the cloud. Microsoft is working on a dedicated capability for group or user writeback. Upvote the UserVoice feedback on [this website](https://feedback.azure.com/forums/169401-azure-active-directory/suggestions/16887037-enable-user-writeback-to-on-premise-ad-from-azure) to track the status of the preview. Alternatively, you can use [Microsoft Identity Manager](/microsoft-identity-manager/microsoft-identity-manager-2016) for user or group writeback from Azure AD to Active Directory.
+
+#### Connectors other than SQL
-**AD DS - (user / group writeback from Azure AD, using the on-prem provisioning preview)**
- - When a user is managed by Azure AD Connect, the source of authority is on-prem Active Directory. Therefore, user attributes cannot be changed in Azure AD. This preview does not change the source of authority for users managed by Azure AD Connect.
- - Attempting to use Azure AD Connect and the on-prem provisioning to provision groups / users into AD DS can lead to creation of a loop, where Azure AD Connect can overwrite a change that was made by the provisioning service in the cloud. Microsoft is working on a dedicated capability for group / user writeback. Upvote the UserVoice feedback [here](https://feedback.azure.com/forums/169401-azure-active-directory/suggestions/16887037-enable-user-writeback-to-on-premise-ad-from-azure) to track the status of the preview. Alternatively, you can use [Microsoft Identity Manager](/microsoft-identity-manager/microsoft-identity-manager-2016) for user / group writeback from Azure AD to AD.
+ The Azure AD ECMA Connector Host is officially supported for the generic SQL connector. While it's possible to use other connectors such as the web services connector or custom ECMA connectors, it's *not yet supported*.
-**Connectors other than SQL**
- - The Azure AD ECMA Connector Host is officially supported for generic SQL (GSQL) connector. While it is possible to use other connectors such as the web services connector or custom ECMA connectors, it is **not yet supported**.
+#### Azure AD
-**Azure Active Directory**
- - On-prem provisioning allows you to take a user already in Azure AD and provision them into a third-party application. **It does not allow you to bring a user into the directory from a third-party application.** Customers will need to rely on our native HR integrations, Azure AD Connect, MIM, or Microsoft Graph to bring users into the directory.
+ By using on-premises provisioning, you can take a user already in Azure AD and provision them into a third-party application. *You can't bring a user into the directory from a third-party application.* Customers will need to rely on our native HR integrations, Azure AD Connect, Microsoft Identity Manager, or Microsoft Graph, to bring users into the directory.
### Attributes and objects
-The following attributes and objects are not supported:
- - Multi-valued attributes
+The following attributes and objects aren't supported:
+ - Multivalued attributes.
- Reference attributes (for example, manager).
- - Groups
+ - Groups.
- Complex anchors (for example, ObjectTypeName+UserName).
- - On-premises applications are sometimes not federated with Azure AD and require local passwords. The on-premises provisioning preview **does not support provisioning one-time passwords or synchronizing passwords** between Azure AD and third-party applications.
- - export_password' virtual attribute, SetPassword, and ChangePassword operations are not supported
+ - On-premises applications are sometimes not federated with Azure AD and require local passwords. The on-premises provisioning preview *doesn't support provisioning one-time passwords or synchronizing passwords* between Azure AD and third-party applications.
+ - The **export_password** virtual attribute, **SetPassword**, and **ChangePassword** operations aren't supported.
#### SSL certificates
- - The Azure AD ECMA Connector Host currently requires either SSL certificate to be trusted by Azure or the Provisioning Agent to be used. Certificate subject must match the host name the Azure AD ECMA Connector Host is installed on.
+ The Azure AD ECMA Connector Host currently requires either an SSL certificate to be trusted by Azure or the provisioning agent to be used. The certificate subject must match the host name the Azure AD ECMA Connector Host is installed on.
#### Anchor attributes
- - The Azure AD ECMA Connector Host currently does not support anchor attribute changes (renames) or target systems, which require multiple attributes to form an anchor.
+ The Azure AD ECMA Connector Host currently doesn't support anchor attribute changes (renames) or target systems that require multiple attributes to form an anchor.
#### Attribute discovery and mapping
- - The attributes that the target application supports are discovered and surfaced in the Azure portal in Attribute Mappings. Newly added attributes will continue to be discovered. However, if an attribute type has changed (for example, string to boolean), and the attribute is part of the mappings, the type will not change automatically in the Azure portal. Customers will need to go into advanced settings in mappings and manually update the attribute type.
+ The attributes that the target application supports are discovered and surfaced in the Azure portal in **Attribute Mappings**. Newly added attributes will continue to be discovered. If an attribute type has changed, for example, string to Boolean, and the attribute is part of the mappings, the type won't change automatically in the Azure portal. Customers will need to go into advanced settings in mappings and manually update the attribute type.
## Next steps
-- [How provisioning works](how-provisioning-works.md)
+[How provisioning works](how-provisioning-works.md)
active-directory On Premises Ecma Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-ecma-troubleshoot.md
Title: 'Troubleshooting issues with the ECMA Connector Host and Azure AD'
-description: Describes how to troubleshoot various issues you may encounter when installing and using the ECMCA connector host.
+description: Describes how to troubleshoot various issues you might encounter when you install and use the ECMA Connector Host.
-# Troubleshooting ECMA Connector Host issues
+# Troubleshoot ECMA Connector Host issues
>[!IMPORTANT]
-> The on-premises provisioning preview is currently in an invitation-only preview. You can request access to the capability [here](https://aka.ms/onpremprovisioningpublicpreviewaccess). We will open the preview to more customers and connectors over the next few months as we prepare for general availability.
+> The on-premises provisioning preview is currently in an invitation-only preview. To request access to the capability, use the [access request form](https://aka.ms/onpremprovisioningpublicpreviewaccess). We'll open the preview to more customers and connectors over the next few months as we prepare for general availability.
+## Troubleshoot test connection issues
+After you configure the ECMA host and provisioning agent, it's time to test connectivity from the Azure Active Directory (Azure AD) provisioning service to the provisioning agent, the ECMA host, and the application. To perform this end-to-end test, select **Test connection** in the application in the Azure portal. When the test connection fails, try the following troubleshooting steps:
-## Troubleshoot test connection issues.
-After configuring the ECMA Host and Provisioning Agent, it's time to test connectivity from the Azure AD Provisioning service to the Provisioning Agent > ECMA Host > Application. This end to end test can be performed by clicking test connection in the application in the Azure portal. When test connection fails, try the following troubleshooting steps:
-
- 1. Verify that the agent and ECMA host are running:
+ 1. Check that the agent and ECMA host are running:
1. On the server with the agent installed, open **Services** by going to **Start** > **Run** > **Services.msc**.
- 2. Under **Services**, make sure **Microsoft Azure AD Connect Agent Updater**, **Microsoft Azure AD Connect Provisioning Agent**, and **Microsoft ECMA2Host** services are present and their status is *Running*.
-![ECMA service running](./media/on-premises-ecma-troubleshoot/tshoot-1.png)
-
- 2. Navigate to the folder where the ECMA Host was installed > Troubleshooting > Scripts > TestECMA2HostConnection and run the script. This script will send a SCIM GET or POST request in order to validate that the ECMA Connector Host is operating and responding to requests.
- It should be run on the same computer as the ECMA Connector Host service itself.
- 3. Ensure that the agent is active by navigating to your application in the Azure portal, click on admin connectivity, click on the agent dropdown, and ensure your agent is active.
- 4. Check if the secret token provided is the same as the secret token on-prem (you will need to go on-prem and provide the secret token again and then copy it into the Azure portal).
- 5. Ensure that you have assigned one or more agents to the application in the Azure portal.
- 6. After assigning an agent, you need to wait 10-20 minutes for the registration to complete. The connectivity test will not work until the registration completes.
- 7. Ensure that you are using a valid certificate. Navigating the settings tab of the ECMA host allows you to generate a new certificate.
- 8. Restart the provisioning agent by navigating to the task bar on your VM by searching for the Microsoft Azure AD Connect provisioning agent. Right-click stop and then start.
- 9. When providing the tenant URL in the Azure portal, ensure that it follows the following pattern. You can replace localhost with your hostname, but it is not required. Replace "connectorName" with the name of the connector you specified in the ECMA host.
+ 1. Under **Services**, make sure the **Microsoft Azure AD Connect Agent Updater**, **Microsoft Azure AD Connect Provisioning Agent**, and **Microsoft ECMA2Host** services are present and their status is *Running*.
+
+ ![Screenshot that shows that the ECMA service is running.](./media/on-premises-ecma-troubleshoot/tshoot-1.png)
+
+ 1. Go to the folder where the ECMA host was installed by selecting **Troubleshooting** > **Scripts** > **TestECMA2HostConnection**. Run the script. This script sends a SCIM GET or POST request to validate that the ECMA Connector Host is operating and responding to requests. It should be run on the same computer as the ECMA Connector Host service itself.
+ 1. Ensure that the agent is active by going to your application in the Azure portal, selecting **admin connectivity**, selecting the agent dropdown list, and ensuring your agent is active.
+ 1. Check if the secret token provided is the same as the secret token on-premises. Go to on-premises, provide the secret token again, and then copy it into the Azure portal.
+ 1. Ensure that you've assigned one or more agents to the application in the Azure portal.
+ 1. After you assign an agent, you need to wait 10 to 20 minutes for the registration to complete. The connectivity test won't work until the registration completes.
+ 1. Ensure that you're using a valid certificate. Go to the **Settings** tab of the ECMA host to generate a new certificate.
+ 1. Restart the provisioning agent by going to the taskbar on your VM by searching for the Microsoft Azure AD Connect provisioning agent. Right-click **Stop**, and then select **Start**.
+ 1. When you provide the tenant URL in the Azure portal, ensure that it follows this pattern. You can replace `localhost` with your host name, but it isn't required. Replace `connectorName` with the name of the connector you specified in the ECMA host.
+
```
https://localhost:8585/ecma2host_connectorName/scim
```
-## Unable to configure ECMA host, view logs in event viewer, or start ECMA host service
+## Unable to configure the ECMA host, view logs in Event Viewer, or start the ECMA host service
+
+To resolve the following issues, run the ECMA host as an admin:
+
+* I get an error when I open the ECMA host wizard.
-#### The following issues can be resolved by running the ECMA host as an admin:
+ ![Screenshot that shows an ECMA wizard error.](./media/on-premises-ecma-troubleshoot/tshoot-2.png)
-* I get an error when opening the ECMA host wizard
- ![ECMA wizard error](./media/on-premises-ecma-troubleshoot/tshoot-2.png)
+* I can configure the ECMA host wizard, but I can't see the ECMA host logs. In this case, you need to open the host as an admin and set up a connector end to end. This step can be simplified by exporting an existing connector and importing it again.
-* I've been able to configure the ECMA host wizard, but am not able to see the ECMA host logs. In this case you will need to open the host as an admin and setup a connector end to end. This can be simplified by exporting an existing connector and importing it again.
+ ![Screenshot that shows host logs.](./media/on-premises-ecma-troubleshoot/tshoot-3.png)
- ![Host logs](./media/on-premises-ecma-troubleshoot/tshoot-3.png)
+* I can configure the ECMA host wizard, but I can't start the ECMA host service.
-* I've been able to configure the ECMA host wizard, but am not able to start the ECMA host service
- ![Host service](./media/on-premises-ecma-troubleshoot/tshoot-4.png)
+ ![Screenshot that shows the host service.](./media/on-premises-ecma-troubleshoot/tshoot-4.png)
-## Turning on verbose logging
+## Turn on verbose logging
-By default, the swithValue for the ECMA Connector Host is set to Error. This means it will only log events that are errors. To enable verbose logging for the ECMA host service and / or Wizard. Set the "switchValue" to Verbose in both locations as shown below.
+By default, `switchValue` for the ECMA Connector Host is set to `Error`. This setting means it will only log events that are errors. To enable verbose logging for the ECMA host service or wizard, set `switchValue` to `Verbose` in both locations as shown.
-File location for verbose service logging: c:\program files\Microsoft ECMA2Host\Service\Microsoft.ECMA2Host.Service.exe.config
+The file location for verbose service logging is C:\Program Files\Microsoft ECMA2Host\Service\Microsoft.ECMA2Host.Service.exe.config.
```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  ...
  <add initializeData="ECMA2Host" type="System.Diagnos
```
-File location for verbose wizard logging: C:\Program Files\Microsoft ECMA2Host\Wizard\Microsoft.ECMA2Host.ConfigWizard.exe.config
+The file location for verbose wizard logging is C:\Program Files\Microsoft ECMA2Host\Wizard\Microsoft.ECMA2Host.ConfigWizard.exe.config.
```xml
<source name="ConnectorsLog" switchValue="Verbose">
  <listeners>
    <add initializeData="ECMA2Host" type="System.Diagnostics.EventLogTraceListener, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="ECMA2HostListener" traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack" />
```
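
For orientation, here's a minimal sketch of how the `ConnectorsLog` source sits inside either configuration file, assuming the standard `System.Diagnostics` configuration schema (the surrounding elements are reconstructed from the fragments above; only `switchValue` needs to change):

```xml
<configuration>
  <system.diagnostics>
    <sources>
      <!-- Change switchValue from Error to Verbose to capture detailed connector traces -->
      <source name="ConnectorsLog" switchValue="Verbose">
        <listeners>
          <!-- Writes trace events to the ECMA2Host event log -->
          <add initializeData="ECMA2Host"
               type="System.Diagnostics.EventLogTraceListener, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
               name="ECMA2HostListener"
               traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack" />
        </listeners>
      </source>
    </sources>
  </system.diagnostics>
</configuration>
```

After you change the setting, restart the Microsoft ECMA2Host service (or reopen the wizard) so the new trace level takes effect.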
-## Target attribute missing
+## Target attribute is missing
The provisioning service automatically discovers attributes in your target application. If you see that a target attribute is missing in the target attribute list in the Azure portal, perform the following troubleshooting step:
- 1. Review the "Select Attributes" page of your ECMA host configuration to verify that the attribute has been selected to be exposed to the Azure portal.
- 2. Ensure that the ECMA host service is turned on.
- 3. Review the ECMA host logs to verify that a /schemas request was made and review the attributes in the response. This information will be valuable for support to troubleshoot the issue.
+ 1. Review the **Select Attributes** page of your ECMA host configuration to check that the attribute has been selected to be exposed to the Azure portal.
+ 1. Ensure that the ECMA host service is turned on.
+ 1. Review the ECMA host logs to check that a /schemas request was made, and review the attributes in the response. This information will be valuable for support to troubleshoot the issue.
-## Collect logs from event viewer as a zip file
-Navigate to the folder where the ECMA Host was installed > Troubleshooting > Scripts. Run the `CollectTroubleshootingInfo` script as an admin. It allows you to capture the logs in a zip file and export them.
+## Collect logs from Event Viewer as a zip file
-## Reviewing events in the event viewer
+Go to the folder where the ECMA host was installed by selecting **Troubleshooting** > **Scripts**. Run the `CollectTroubleshootingInfo` script as an admin. You can use it to capture the logs in a zip file and export them.
-Once the ECMA Connector host schema mapping has been configured, start the service so it will listen for incoming connections. Then, monitor for incoming requests. To do this, do the following:
+## Review events in Event Viewer
- 1. Click on the start menu, type **event viewer**, and click on Event Viewer.
- 2. In **Event Viewer**, expand **Applications and Services** Logs, and select **Microsoft ECMA2Host Logs**.
- 3. As changes are received by the connector host, events will be written to the application log.
+After the ECMA Connector Host schema mapping has been configured, start the service so it will listen for incoming connections. Then, monitor for incoming requests.
+ 1. Select the **Start** menu, enter **event viewer**, and select **Event Viewer**.
+ 1. In **Event Viewer**, expand **Applications and Services** logs, and select **Microsoft ECMA2Host Logs**.
+ 1. As changes are received by the connector host, events will be written to the application log.
+## Understand incoming SCIM requests
-## Understanding incoming SCIM requests
+Requests made by Azure AD to the provisioning agent and connector host use the SCIM protocol. Requests made from the host to apps use the protocol the app supports. The requests from the host to the agent to Azure AD rely on SCIM. You can learn more about the SCIM implementation in [Tutorial: Develop and plan provisioning for a SCIM endpoint in Azure Active Directory](use-scim-to-provision-users-and-groups.md).
-Requests made by Azure AD to the provisioning agent and connector host use the SCIM protocol. Requests made from the host to apps use the protocol the app support and the requests from the host to agent to Azure AD rely on SCIM. You can learn more about our SCIM implementation [here](use-scim-to-provision-users-and-groups.md).
-
-Be aware that at the beginning of each provisioning cycle, before performing on-demand provisioning, and when doing the test connection the Azure AD provisioning service generally makes a get user call for a [dummy user](use-scim-to-provision-users-and-groups.md#request-3) to ensure the target endpoint is available and returning SCIM-compliant responses.
+At the beginning of each provisioning cycle, before performing on-demand provisioning and when doing the test connection, the Azure AD provisioning service generally makes a get-user call for a [dummy user](use-scim-to-provision-users-and-groups.md#request-3) to ensure the target endpoint is available and returning SCIM-compliant responses.
## How do I troubleshoot the provisioning agent?
+You might experience the following error scenarios.
+ ### Agent failed to start You might receive an error message that states:
-**Service 'Microsoft Azure AD Connect Provisioning Agent' failed to start. Verify that you have sufficient privileges to start the system services.**
+"Service 'Microsoft Azure AD Connect Provisioning Agent' failed to start. Check that you have sufficient privileges to start the system services."
-This problem is typically caused by a group policy that prevented permissions from being applied to the local NT Service log-on account created by the installer (NT SERVICE\AADConnectProvisioningAgent). These permissions are required to start the service.
+This problem is typically caused by a group policy that prevented permissions from being applied to the local NT Service sign-in account created by the installer (NT SERVICE\AADConnectProvisioningAgent). These permissions are required to start the service.
-To resolve this problem, follow these steps.
+To resolve this problem:
1. Sign in to the server with an administrator account.
1. Open **Services** by either navigating to it or by going to **Start** > **Run** > **Services.msc**.
This test verifies that your agents can communicate with Azure over port 443. Op
You might get the following error message when you attempt to register the agent.
-![Agent times out](./media/on-premises-ecma-troubleshoot/tshoot-5.png)
+![Screenshot that shows that the agent timed out.](./media/on-premises-ecma-troubleshoot/tshoot-5.png)
This problem is usually caused by the agent being unable to connect to the Hybrid Identity Service and requires you to configure an HTTP proxy. To resolve this problem, configure an outbound proxy.
Replace the variables `[proxy-server]` and `[proxy-port]` with your proxy server
You might get an error message when you install the cloud provisioning agent.
-This problem is typically caused by the agent being unable to execute the PowerShell registration scripts due to local PowerShell execution policies.
+This problem is typically caused by the agent being unable to execute the PowerShell registration scripts because of local PowerShell execution policies.
To resolve this problem, change the PowerShell execution policies on the server. You need to have Machine and User policies set as *Undefined* or *RemoteSigned*. If they're set as *Unrestricted*, you'll see this error. For more information, see [PowerShell execution policies](/powershell/module/microsoft.powershell.core/about/about_execution_policies?view=powershell-6). ### Log files
-By default, the agent emits minimal error messages and stack trace information. You can find these trace logs in the folder **C:\ProgramData\Microsoft\Azure AD Connect Provisioning Agent\Trace**.
+By default, the agent emits minimal error messages and stack trace information. You can find trace logs in the folder C:\ProgramData\Microsoft\Azure AD Connect Provisioning Agent\Trace.
+
+To gather more information for troubleshooting agent-related problems:
-To gather additional details for troubleshooting agent-related problems, follow these steps.
+1. Install the AADCloudSyncTools PowerShell module as described in [AADCloudSyncTools PowerShell Module for Azure AD Connect cloud sync](../../active-directory/cloud-sync/reference-powershell.md#install-the-aadcloudsynctools-powershell-module).
+1. Use the `Export-AADCloudSyncToolsLogs` PowerShell cmdlet to capture the information. Use the following switches to fine-tune your data collection:
-1. Install the AADCloudSyncTools PowerShell module as described [here](../../active-directory/cloud-sync/reference-powershell.md#install-the-aadcloudsynctools-powershell-module).
-2. Use the `Export-AADCloudSyncToolsLogs` PowerShell cmdlet to capture the information. You can use the following switches to fine-tune your data collection.
- - SkipVerboseTrace to only export current logs without capturing verbose logs (default = false)
- - TracingDurationMins to specify a different capture duration (default = 3 mins)
- OutputPath to specify a different output path (default = User's Documents)
+ - **SkipVerboseTrace** to only export current logs without capturing verbose logs (default = false).
+ - **TracingDurationMins** to specify a different capture duration (default = 3 mins).
+ - **OutputPath** to specify a different output path (default = user's documents).
-Azure AD allows you to monitor the provisioning service in the cloud as well as collect logs on-premises. The provisioning service emits logs for each user that was evaluated as part of the synchronization process. Those logs can be consumed through the [Azure portal UI, APIs, and log analytics](../reports-monitoring/concept-provisioning-logs.md). In addition, the ECMA host generates logs on-premises, showing each provisioning request received and the response sent to Azure AD.
+By using Azure AD, you can monitor the provisioning service in the cloud and collect logs on-premises. The provisioning service emits logs for each user that was evaluated as part of the synchronization process. Those logs can be consumed through the [Azure portal UI, APIs, and log analytics](../reports-monitoring/concept-provisioning-logs.md). The ECMA host also generates logs on-premises. It shows each provisioning request that was received and the response that was sent to Azure AD.
### Agent installation fails
-* The error `System.ComponentModel.Win32Exception: The specified service already exists` indicates that the previous ECMA Host was unsuccessfully uninstalled. Please uninstall the host application. Navigate to program files and remove the ECMA Host folder. You may want to store the configuration file for backup.
-* The following error indicates a pre-req has not been fulfilled. Ensure that you have .NET 4.7.1 installed.
+* The error `System.ComponentModel.Win32Exception: The specified service already exists` indicates that the previous ECMA host was unsuccessfully uninstalled. Uninstall the host application. Go to program files, and remove the ECMA host folder. You might want to store the configuration file for backup.
+* The following error indicates a prerequisite wasn't fulfilled. Ensure that you have .NET 4.7.1 installed.
```
Method Name : <>c__DisplayClass0_1 :
...
```
-## Next Steps
+## Next steps
- [Azure AD ECMA Connector Host installation](on-premises-ecma-install.md)
- [Azure AD ECMA Connector Host configuration](on-premises-ecma-configure.md)
-- [Generic SQL Connector](on-premises-sql-connector-configure.md)
-- [Tutorial: ECMA Connector Host Generic SQL Connector](tutorial-ecma-sql-connector.md)
+- [Generic SQL connector](on-premises-sql-connector-configure.md)
+- [Tutorial: ECMA Connector Host generic SQL connector](tutorial-ecma-sql-connector.md)
active-directory On Premises Migrate Microsoft Identity Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-migrate-microsoft-identity-manager.md
Title: 'Export a Microsoft Identity Manager connector for use with Azure AD ECMA Connector Host'
-description: Describes how to create and export a connector from MIM Sync to be used with Azure AD ECMA Connector Host.
+ Title: 'Export a Microsoft Identity Manager connector for use with the Azure AD ECMA Connector Host'
+description: Describes how to create and export a connector from MIM Sync to be used with the Azure AD ECMA Connector Host.
-# Export a Microsoft Identity Manager connector for use with Azure AD ECMA Connector Host
+# Export a Microsoft Identity Manager connector for use with the Azure AD ECMA Connector Host
>[!IMPORTANT]
-> The on-premises provisioning preview is currently in an invitation-only preview. You can request access to the capability [here](https://aka.ms/onpremprovisioningpublicpreviewaccess). We will open the preview to more customers and connectors over the next few months as we prepare for general availability.
+> The on-premises provisioning preview is currently in an invitation-only preview. To request access to the capability, use the [access request form](https://aka.ms/onpremprovisioningpublicpreviewaccess). We'll open the preview to more customers and connectors over the next few months as we prepare for general availability.
-You can import into Azure AD ECMA Connector Host a configuration for a specific connector from a FIM Sync or MIM Sync installation. Note that the MIM Sync installation is only used for configuration, not for the ongoing synchronization from Azure AD.
+You can import into the Azure Active Directory (Azure AD) ECMA Connector Host a configuration for a specific connector from a Forefront Identity Manager Synchronization Service or Microsoft Identity Manager Synchronization Service (MIM Sync) installation. The MIM Sync installation is only used for configuration, not for the ongoing synchronization from Azure AD.
>[!IMPORTANT]
->Currently, only the Generic SQL (GSQL) connector is support for use with the Azure AD ECMA Connector Host.
--
-## Creating and exporting a connector configuration in MIM Sync
-If you already have MIM Sync with your ECMA connector already configured, then skip to step 10.
-
- 1. Prepare a Windows Server 2016 server, which is distinct from the server that will be used for running the Azure AD ECMA Connector Host. This host server should either have a SQL Server 2016 database co-located, or have network connectivity to a SQL Server 2016 database. One way to set up this server is by deploying an Azure Virtual Machine with the image **SQL Server 2016 SP1 Standard on Windows Server 2016**. Note that this server does not need Internet connectivity, other than remote desktop access for setup purposes.
- 2. Create an account for use during the MIM Sync installation. This can be a local account on that Windows Server. To create a local account, launch control panel, open user accounts, and add a user account **mimsync**.
- 3. Add the account created in the previous step to the local Administrators group.
- 4. Give the account created earlier the ability to run a service. Launch Local Security Policy, click on Local Policies, User Rights Assignment, and **Log on as a service**. Add the account mentioned earlier.
- 5. Install MIM Sync on this host. If you do not have MIM Sync binaries, then you can install an evaluation by downloading the ZIP file from [https://www.microsoft.com/en-us/download/details.aspx?id=48244](https://www.microsoft.com/en-us/download/details.aspx?id=48244), mounting the ISO image, and copying the folder **Synchronization Service** to the Windows Server host. Then run the setup program contained in that folder. Note that evaluation software is time-limited and will expire, and is not intended for production use.
- 6. Once the installation of MIM Sync is complete, log out and log back in.
- 7. Install your connector on that same server as MIM Sync. (For illustration purposes, this test lab guide will illustrate using one of the Microsoft-supplied connectors for download from [https://www.microsoft.com/en-us/download/details.aspx?id=51495](https://www.microsoft.com/en-us/download/details.aspx?id=51495) ).
- 8. Launch the Synchronization Service UI. Click on **Management Agents**. Click **Create**, and specify the connector management agent. Be sure to select a connector management agent that is ECMA-based.
- 9. Give the connector a name, and configure the parameters needed to import and export data to the connector. Be sure to configure that the connector can import and export single-valued string attributes of a user or person object type.
- 10. On the MIM Sync server computer, launch the Synchronization Service UI, if not already running. Click on **Management Agents**.
- 11. Select the connector, and click **Export Management Agent**. Save the XML file, as well as the DLL and related software for your connector, to the Windows Server which will be holding the ECMA Connector host.
+>Currently, only the generic SQL connector is supported for use with the Azure AD ECMA Connector Host.
+
+## Create and export a connector configuration in MIM Sync
+If you already have MIM Sync with your ECMA connector configured, skip to step 10.
+
+ 1. Prepare a Windows Server 2016 server, which is distinct from the server that will be used for running the Azure AD ECMA Connector Host. This host server should either have a SQL Server 2016 database colocated or have network connectivity to a SQL Server 2016 database. One way to set up this server is by deploying an Azure virtual machine with the image **SQL Server 2016 SP1 Standard on Windows Server 2016**. This server doesn't need internet connectivity other than remote desktop access for setup purposes.
+ 1. Create an account for use during the MIM Sync installation. It can be a local account on that Windows Server instance. To create a local account, open **Control Panel** > **User Accounts**, and add the user account **mimsync**.
+ 1. Add the account created in the previous step to the local Administrators group.
+ 1. Give the account created earlier the ability to run a service. Start **Local Security Policy** and select **Local Policies** > **User Rights Assignment** > **Log on as a service**. Add the account mentioned earlier.
+ 1. Install MIM Sync on this host. If you don't have MIM Sync binaries, you can install an evaluation by downloading the zip file from the [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=48244), mounting the ISO image, and copying the folder **Synchronization Service** to the Windows Server host. Then run the setup program contained in that folder. Evaluation software is time limited and will expire. It isn't intended for production use.
+ 1. After the installation of MIM Sync is complete, sign out and sign back in.
+ 1. Install your connector on the same server as MIM Sync. For illustration purposes, this test lab guide will illustrate using one of the Microsoft-supplied connectors for download from the [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=51495).
+ 1. Start the Synchronization Service UI. Select **Management Agents**. Select **Create**, and specify the connector management agent. Be sure to select a connector management agent that's ECMA based.
+ 1. Give the connector a name, and configure the parameters needed to import and export data to the connector. Be sure to configure that the connector can import and export single-valued string attributes of a user or person object type.
+ 1. On the MIM Sync server computer, start the Synchronization Service UI, if it isn't already running. Select **Management Agents**.
+ 1. Select the connector, and select **Export Management Agent**. Save the XML file, and the DLL and related software for your connector, to the Windows server that will be holding the ECMA Connector Host.
At this point, the MIM Sync server is no longer needed.
- 1. Sign into the Windows Server as the account which the Azure AD ECMA Connector Host will run as.
- 2. Change to the directory c:\program files\Microsoft ECMA2host\Service\ECMA and ensure there are one or more DLLs already present in that directory. (Those DLLs correspond to Microsoft-delivered connectors).
- 3. Copy the MA DLL for your connector, and any of its prerequisite DLLs, to that same ECMA subdirectory of the Service directory.
- 4. Change to the directory C:\program files\Microsoft ECMA2Host\Wizard and run the program Microsoft.ECMA2Host.ConfigWizard.exe to set up the ECMA Connector Host configuration.
- 5. A new window will appear with a list of connectors. By default, no connectors will be present. Click **;New connector**.
- 6. Specify the management agent xml file that was exported from MIM earlier. Continue with the configuration and schema mapping instructions from the section Configuring a connector above.
---
-## Next Steps
+ 1. Sign in to the Windows server as the account that the Azure AD ECMA Connector Host will run as.
+ 1. Change to the directory C:\Program Files\Microsoft ECMA2host\Service\ECMA. Ensure there are one or more DLLs already present in that directory. Those DLLs correspond to Microsoft-delivered connectors.
+ 1. Copy the MA DLL for your connector, and any of its prerequisite DLLs, to that same ECMA subdirectory of the Service directory.
+ 1. Change to the directory C:\Program Files\Microsoft ECMA2Host\Wizard. Run the program Microsoft.ECMA2Host.ConfigWizard.exe to set up the ECMA Connector Host configuration.
+ 1. A new window appears with a list of connectors. By default, no connectors will be present. Select **New connector**.
+ 1. Specify the management agent XML file that was exported from MIM Sync earlier. Continue with the configuration and schema-mapping instructions from the section "Configure a connector."
+## Next steps
- [App provisioning](user-provisioning.md)
- [Azure AD ECMA Connector Host prerequisites](on-premises-ecma-prerequisites.md)
- [Azure AD ECMA Connector Host installation](on-premises-ecma-install.md)
- [Azure AD ECMA Connector Host configuration](on-premises-ecma-configure.md)
-- [Generic SQL Connector](on-premises-sql-connector-configure.md)
+- [Generic SQL connector](on-premises-sql-connector-configure.md)
active-directory On Premises Scim Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-scim-provisioning.md
Title: Azure AD on-premises app provisioning to SCIM-enabled apps
-description: This article describes how to on-premises app provisioning to SCIM-enabled apps.
+description: This article describes how to use the Azure AD provisioning service to provision users into an on-premises app that's SCIM enabled.
# Azure AD on-premises application provisioning to SCIM-enabled apps >[!IMPORTANT]
-> The on-premises provisioning preview is currently in an invitation-only preview. You can request access to the capability [here](https://aka.ms/onpremprovisioningpublicpreviewaccess). We will open the preview to more customers and connectors over the next few months as we prepare for general availability.
+> The on-premises provisioning preview is currently in an invitation-only preview. To request access to the capability, use the [access request form](https://aka.ms/onpremprovisioningpublicpreviewaccess). We'll open the preview to more customers and connectors over the next few months as we prepare for general availability.
-The Azure AD provisioning service supports a [SCIM 2.0](https://techcommunity.microsoft.com/t5/identity-standards-blog/provisioning-with-scim-getting-started/ba-p/880010) client that can be used to automatically provision users into cloud or on-premises applications. This document outlines how you can use the Azure AD provisioning service to provision users into an on-premises application that is SCIM enabled. If you're looking to provision users into non-SCIM on-premises applications that use SQL as a data store, please see the documentation [here](tutorial-ecma-sql-connector.md). If you're looking to provisioning users into cloud apps such as DropBox, Atlassian, etc. review the app specific [tutorials](../../active-directory/saas-apps/tutorial-list.md).
+The Azure Active Directory (Azure AD) provisioning service supports a [SCIM 2.0](https://techcommunity.microsoft.com/t5/identity-standards-blog/provisioning-with-scim-getting-started/ba-p/880010) client that can be used to automatically provision users into cloud or on-premises applications. This article outlines how you can use the Azure AD provisioning service to provision users into an on-premises application that's SCIM enabled. If you want to provision users into non-SCIM on-premises applications that use SQL as a data store, see the [Azure AD ECMA Connector Host Generic SQL Connector tutorial](tutorial-ecma-sql-connector.md). If you want to provision users into cloud apps such as DropBox and Atlassian, review the app-specific [tutorials](../../active-directory/saas-apps/tutorial-list.md).
-![architecture](./media/on-premises-scim-provisioning/scim-4.png)
+![Diagram that shows SCIM architecture.](./media/on-premises-scim-provisioning/scim-4.png)
+## Prerequisites
+- An Azure AD tenant with Azure AD Premium P1 or Premium P2 (or EMS E3 or E5).
+ [!INCLUDE [active-directory-p1-license.md](../../../includes/active-directory-p1-license.md)]
+- Administrator role for installing the agent. This task is a one-time effort and should be an Azure account that's either a hybrid administrator or a global administrator.
+- Administrator role for configuring the application in the cloud (application administrator, cloud application administrator, global administrator, or a custom role with permissions).
-## Pre-requisites
-- An Azure AD tenant with Azure AD Premium P1 or Premium P2 (or EMS E3 or E5).
- [!INCLUDE [active-directory-p1-license.md](../../../includes/active-directory-p1-license.md)]
-- Administrator role for installing the agent. This is a one time effort and should be an Azure account that is either a hybrid admin or global admin.
-- Administrator role for configuring the application in the cloud (Application admin, Cloud application admin, Global Administrator, Custom role with perms)
+## On-premises app provisioning to SCIM-enabled apps
+To provision users to SCIM-enabled apps:
-## Steps for on-premises app provisioning to SCIM-enabled apps
-Use the steps below to provision to SCIM-enabled apps.
-
- 1. Add the "On-premises SCIM app" from the [gallery](../../active-directory/manage-apps/add-application-portal.md).
- 2. Navigate to your app > Provisioning > Download the provisioning agent.
- 3. Click on on-premises connectivity and download the provisioning agent.
- 4. Copy the agent onto the virtual machine or server that your SCIM endpoint is hosted on.
- 5. Open the provisioning agent installer, agree to the terms of service, and click install.
- 6. Open the provisioning agent wizard and select on-premises provisioning when prompted for the extension that you would like to enable.
- 7. Provide credentials for an Azure AD Administrator when prompted to authorize (Hybrid administrator or Global administrator required).
- 8. Click confirm to confirm the installation was successful.
- 9. Navigate back to your application > on-premises connectivity.
- 10. Select the agent that you installed, from the dropdown list, and click assign agent.
- 11. Wait 10 minutes or restart the Azure AD Connect Provisioning agent service on your server / VM.
- 12. Provide URL for your SCIM endpoint in the tenant URL field (e.g. Https://localhost:8585/scim).
- ![assign agent](./media/on-premises-scim-provisioning/scim-2.png)
- 13. Click test connection and save the credentials.
- 14. Configure any [attribute mappings](customize-application-attributes.md) or [scoping](define-conditional-rules-for-provisioning-user-accounts.md) rules required for your application.
- 15. Add users to scope by [assigning users and groups](../../active-directory/manage-apps/add-application-portal-assign-users.md) to the application.
- 16. Test provisioning a few users [on-demand](provision-on-demand.md).
- 17. Add additional users into scope by assigning them to your application.
- 18. Navigate to the provisioning blade and hit start provisioning.
- 19. Monitor using the [provisioning logs](../../active-directory/reports-monitoring/concept-provisioning-logs.md).
+ 1. Add the **On-premises SCIM app** from the [gallery](../../active-directory/manage-apps/add-application-portal.md).
+ 1. Go to your app and select **Provisioning** > **Download the provisioning agent**.
+ 1. Select **On-Premises Connectivity**, and download the provisioning agent.
+ 1. Copy the agent onto the virtual machine or server that your SCIM endpoint is hosted on.
+ 1. Open the provisioning agent installer, agree to the terms of service, and select **Install**.
+ 1. Open the provisioning agent wizard, and select **On-premises provisioning** when prompted for the extension you want to enable.
+ 1. Provide credentials for an Azure AD administrator when you're prompted to authorize. Hybrid administrator or global administrator is required.
+ 1. Select **Confirm** to confirm the installation was successful.
+ 1. Go back to your application, and select **On-Premises Connectivity**.
+ 1. Select the agent that you installed from the dropdown list, and select **Assign Agent(s)**.
+ 1. Wait 10 minutes or restart the Azure AD Connect Provisioning agent service on your server or VM.
+ 1. Provide the URL for your SCIM endpoint in the **Tenant URL** box. An example is https://localhost:8585/scim.
+ ![Screenshot that shows assigning an agent.](./media/on-premises-scim-provisioning/scim-2.png)
+ 1. Select **Test Connection**, and save the credentials.
+ 1. Configure any [attribute mappings](customize-application-attributes.md) or [scoping](define-conditional-rules-for-provisioning-user-accounts.md) rules required for your application.
+ 1. Add users to scope by [assigning users and groups](../../active-directory/manage-apps/add-application-portal-assign-users.md) to the application.
+ 1. Test provisioning a few users [on demand](provision-on-demand.md).
+ 1. Add more users into scope by assigning them to your application.
+ 1. Go to the **Provisioning** pane, and select **Start provisioning**.
+ 1. Monitor using the [provisioning logs](../../active-directory/reports-monitoring/concept-provisioning-logs.md).
-## Things to be aware of
+## Additional requirements
* Ensure your [SCIM](https://techcommunity.microsoft.com/t5/identity-standards-blog/provisioning-with-scim-getting-started/ba-p/880010) implementation meets the [Azure AD SCIM requirements](use-scim-to-provision-users-and-groups.md).
- * Azure AD offers open-source [reference code](https://github.com/AzureAD/SCIMReferenceCode/wiki) that developers can use to bootstrap their SCIM implementation (the code is as-is)
+
+ Azure AD offers open-source [reference code](https://github.com/AzureAD/SCIMReferenceCode/wiki) that developers can use to bootstrap their SCIM implementation. The code is as is.
* Support the /schemaDiscovery endpoint to reduce configuration required in the Azure portal.
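
If you want to spot-check your endpoint before connecting it to Azure AD, you can send it a SCIM user-creation request yourself. The following Python snippet is only a minimal sketch: the URL, bearer token, and attribute set are placeholders, and the requests that the Azure AD provisioning service actually sends include more attributes and its own authentication, so adjust it for your implementation.

```python
import requests

# Minimal sketch of a SCIM 2.0 user-creation request (RFC 7644 shape).
# The URL and token below are placeholders; substitute your own endpoint values.
scim_base_url = "https://localhost:8585/scim"  # example tenant URL from this article
payload = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "test.user@contoso.com",
    "active": True,
    "name": {"givenName": "Test", "familyName": "User"},
}

response = requests.post(
    f"{scim_base_url}/Users",
    json=payload,
    headers={"Authorization": "Bearer <your-endpoint-token>"},  # placeholder auth
    verify=False,  # only if the endpoint uses a self-signed certificate
)
print(response.status_code)  # a compliant endpoint returns 201 and echoes the created user
print(response.text)
```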
-## Next Steps
+## Next steps
- [App provisioning](user-provisioning.md)
- [Azure AD ECMA Connector Host installation](on-premises-ecma-install.md)
- [Azure AD ECMA Connector Host configuration](on-premises-ecma-configure.md)
-- [Generic SQL Connector](on-premises-sql-connector-configure.md)
-- [Tutorial: ECMA Connector Host Generic SQL Connector](tutorial-ecma-sql-connector.md)
+- [Generic SQL connector](on-premises-sql-connector-configure.md)
+- [Tutorial: ECMA Connector Host generic SQL connector](tutorial-ecma-sql-connector.md)
active-directory On Premises Sql Connector Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-sql-connector-configure.md
# Azure AD ECMA Connector Host generic SQL connector configuration

>[!IMPORTANT]
-> The on-premises provisioning preview is currently in an invitation-only preview. You can request access to the capability [here](https://aka.ms/onpremprovisioningpublicpreviewaccess). We will open the preview to more customers and connectors over the next few months as we prepare for general availability.
+> The on-premises provisioning preview is currently in an invitation-only preview. To request access to the capability, use the [access request form](https://aka.ms/onpremprovisioningpublicpreviewaccess). We'll open the preview to more customers and connectors over the next few months as we prepare for general availability.
-
-This document describes how to create a new SQL connector with the Azure AD ECMA Connector Host and how to configure it. You will need to do this once you have successfully installed Azure AD ECMA Connector Host.
+This article describes how to create a new SQL connector with the Azure Active Directory (Azure AD) ECMA Connector Host and how to configure it. You'll need to do this task after you've successfully installed the Azure AD ECMA Connector Host.
>[!NOTE]
-> This document covers only the configuration of the Generic SQL connector. For step-by-step example of setting up the Generic SQL connector, see [Tutorial: ECMA Connector Host Generic SQL Connector](tutorial-ecma-sql-connector.md)
+> This article covers only the configuration of the generic SQL connector. For a step-by-step example of how to set up the generic SQL connector, see [Tutorial: ECMA Connector Host generic SQL connector](tutorial-ecma-sql-connector.md).
-Installing and configuring the Azure AD ECMA Connector Host is a process. Use the flow below to guide you through the process.
+ This flow guides you through the process of installing and configuring the Azure AD ECMA Connector Host.
- ![Installation flow](./media/on-premises-sql-connector-configure/flow-1.png)
+ ![Diagram that shows the installation flow.](./media/on-premises-sql-connector-configure/flow-1.png)
-For more installation and configuration information see:
+For more installation and configuration information, see:
- [Prerequisites for the Azure AD ECMA Connector Host](on-premises-ecma-prerequisites.md)
- [Installation of the Azure AD ECMA Connector Host](on-premises-ecma-install.md)
- [Configure the Azure AD ECMA Connector Host and the provisioning agent](on-premises-ecma-configure.md)
-Depending on the options you select, some of the wizard screens may or may not be available and the information may be slightly different. For purposes of this configuration, the user object type is used. Use the information below to guide you in your configuration.
+Depending on the options you select, some of the wizard screens might not be available and the information might be slightly different. For purposes of this configuration, the user object type is used. Use the following information to guide you in your configuration.
-**Supported systems**
-* Microsoft SQL Server & SQL Azure
+#### Supported systems
+* Microsoft SQL Server and Azure SQL
* IBM DB2 10.x
* IBM DB2 9.x
-* Oracle 10 & 11g
+* Oracle 10 and 11g
* Oracle 12c and 18c
* MySQL 5.x+

## Create a generic SQL connector
-To create a generic SQL connector use the following steps:
+To create a generic SQL connector:
- 1. Click on the ECMA Connector Host shortcut on the desktop.
- 2. Select **New Connector**.
- ![Choose new connector](.\media\on-premises-sql-connector-configure\sql-1.png)
+ 1. Select the ECMA Connector Host shortcut on the desktop.
+ 1. Select **New Connector**.
+
+ ![Screenshot that shows Choose new connector.](.\media\on-premises-sql-connector-configure\sql-1.png)
- 3. On the **Properties** page, fill in the boxes and click next. Use the table below the image for guidance on the individual boxes.
- ![Enter properties](.\media\on-premises-sql-connector-configure\sql-2.png)
+ 1. On the **Properties** page, fill in the boxes and select **Next**. Use the table that follows the image for guidance on the individual boxes.
+
+ ![Screenshot that shows Enter properties.](.\media\on-premises-sql-connector-configure\sql-2.png)
|Property|Description|
|--|--|
- |Name|The name for this connector|
+ |Name|The name for this connector.|
|Autosync timer (minutes)|Minimum allowed is 120 minutes.|
- |Secret Token|123456 [This must be a string of 10-20 ASCII letters and/or digits.]|
- |Description|The description of the connector|
- |Extension DLL|For a generic sql connector, select Microsoft.IAM.Connector.GenericSql.dll.|
- 4. On the **Connectivity** page, fill in the boxes and click next. Use the table below the image for guidance on the individual boxes.
- ![Enter connectivity](.\media\on-premises-sql-connector-configure\sql-3.png)
+ |Secret Token|123456 (The token must be a string of 10 to 20 ASCII letters and/or digits.)|
+ |Description|The description of the connector.|
+ |Extension DLL|For a generic SQL connector, select **Microsoft.IAM.Connector.GenericSql.dll**.|
+ 1. On the **Connectivity** page, fill in the boxes and select **Next**. Use the table that follows the image for guidance on the individual boxes.
+
+ ![Screenshot that shows Enter connectivity.](.\media\on-premises-sql-connector-configure\sql-3.png)
|Property|Description|
|--|--|
- |DSN File|The Data Source Name file used to connect to the SQL server|
- |User Name|The username of an individual with rights to the SQL server. This must be in the form of hostname\sqladminaccount for standalone servers, or domain\sqladminaccount for domain member servers.|
- |Password|The password of the username provided above.|
- |DN is Anchor|Unless the your environment is known to require these settings, leave DN is Anchor and Export Type:Object Replace deselected.|
- |Export TypeObjectReplace||
- 5. On the **Schema 1** page, fill in the boxes and click next. Use the table below the image for guidance on the individual boxes.
- ![Enter schema 1](.\media\on-premises-sql-connector-configure\sql-4.png)
+ |DSN File|The Data Source Name file used to connect to the SQL Server instance.|
+ |User Name|The username of an individual with rights to the SQL Server instance. It must be in the form of hostname\sqladminaccount for standalone servers or domain\sqladminaccount for domain member servers.|
+ |Password|The password of the username just provided.|
+ |DN is Anchor|Unless your environment is known to require these settings, don't select the **DN is Anchor** and **Export Type:Object Replace** checkboxes.|
+ |Export Type:Object Replace||
+ 1. On the **Schema 1** page, fill in the boxes and select **Next**. Use the table that follows the image for guidance on the individual boxes.
+
+ ![Screenshot that shows the Schema 1 page.](.\media\on-premises-sql-connector-configure\sql-4.png)
|Property|Description|
|--|--|
|Object type detection method|The method used to detect the object type the connector will be provisioning.|
- |Fixed value list/Table/View/SP|This should contain User.|
+ |Fixed value list/Table/View/SP|This box should contain **User**.|
|Column Name for Table/View/SP||
|Stored Procedure Parameters||
|Provide SQL query for detecting object types||
- 6. On the **Schema 2** page, fill in the boxes and click next. Use the table below the image for guidance on the individual boxes. This schema screen maybe slightly different or have additional information depending on the object types that were selected in the previous step.
- ![Enter schema 2](.\media\on-premises-sql-connector-configure\sql-5.png)
+ 1. On the **Schema 2** page, fill in the boxes and select **Next**. Use the table that follows the image for guidance on the individual boxes. This schema screen might be slightly different or have additional information depending on the object types you selected in the previous step.
+
+ ![Screenshot that shows the Schema 2 page.](.\media\on-premises-sql-connector-configure\sql-5.png)
|Property|Description|
|--|--|
- |User:Attribute Detection|This should be set to Table.|
- |User:Table/View/SP|This should contain Employees.|
- |User:Name of Multi-Values Table/Views||
- |User:Stored Procedure Parameters||
- |User:Provide SQL query for detecting object types||
- 7. On the **Schema 3** page, fill in the boxes and click next. Use the table below the image for guidance on the individual boxes. The attributes that you see will depend on the information provided in the previous step.
- ![Enter schema 3](.\media\on-premises-sql-connector-configure\sql-6.png)
+ |User:Attribute Detection|This property should be set to **Table**.|
+ |User:Table/View/SP|This box should contain **Employees**.|
+ |User:Name of Multi-Valued Table/Views||
+ |User:Stored Procedure Parameters||
+ |User:Provide SQL query for detecting attributes||
+ 1. On the **Schema 3** page, fill in the boxes and select **Next**. Use the table that follows the image for guidance on the individual boxes. The attributes you see depend on the information you provided in the previous step.
+
+ ![Screenshot that shows the Schema 3 page.](.\media\on-premises-sql-connector-configure\sql-6.png)
|Property|Description|
|--|--|
|Select DN attribute for User||
- 8. On the **Schema 4** page, review the attributes DataType and the Direction of flow for the connector. You can adjust them if needed and click Next.
- ![Enter schema 4](.\media\on-premises-sql-connector-configure\sql-7.png)
- 9. On the **Global** page, fill in the boxes and click next. Use the table below the image for guidance on the individual boxes.
- ![Enter global information](.\media\on-premises-sql-connector-configure\sql-8.png)
+ 1. On the **Schema 4** page, review the **DataType** attribute and the direction of flow for the connector. You can adjust them if needed and select **Next**.
+
+ ![Screenshot that shows the schema 4 page.](.\media\on-premises-sql-connector-configure\sql-7.png)
+ 1. On the **Global** page, fill in the boxes and select **Next**. Use the table that follows the image for guidance on the individual boxes.
+
+ ![Screenshot that shows the Global page.](.\media\on-premises-sql-connector-configure\sql-8.png)
|Property|Description|
|--|--|
|Extension Name||
|Set Password SP Name||
|Set Password SP Parameters||
- 10. On the **Select partition** page, ensure that the correct partitions are selected and click Next.
- ![Enter partition information](.\media\on-premises-sql-connector-configure\sql-9.png)
+ 1. On the **Select partition** page, ensure that the correct partitions are selected and select **Next**.
+
+ ![Screenshot that shows the Select partition page.](.\media\on-premises-sql-connector-configure\sql-9.png)
- 11. On the **Run Profiles** page, select the run profiles that you wish to use and click Next.
- ![Enter run profiles](.\media\on-premises-sql-connector-configure\sql-10.png)
+ 1. On the **Run Profiles** page, select the run profiles that you want to use and select **Next**.
+
+ ![Screenshot that shows the Run Profiles page.](.\media\on-premises-sql-connector-configure\sql-10.png)
|Property|Description|
|--|--|
- |Export|Run profile that will export data to SQL. This run profile is required.|
+ |Export|Run profile that will export data to SQL. This run profile is required.|
|Full import|Run profile that will import all data from SQL sources specified earlier.|
|Delta import|Run profile that will import only changes from SQL since the last full or delta import.|
- 12. On the **Run Profiles** page, fill in the boxes and click next. Use the table below the image for guidance on the individual boxes.
- ![Enter Export information](.\media\on-premises-sql-connector-configure\sql-11.png)
+ 1. On the **Run Profiles** page, fill in the boxes and select **Next**. Use the table that follows the image for guidance on the individual boxes.
+
+ ![Screenshot that shows Enter Export information.](.\media\on-premises-sql-connector-configure\sql-11.png)
|Property|Description|
|--|--|
|End Index Parameter Name||
|Stored Procedure Parameters||
- 13. On the **Object Types** page, fill in the boxes and click next. Use the table below the image for guidance on the individual boxes.
- ![Enter object types](.\media\on-premises-sql-connector-configure\sql-12.png)
+ 1. On the **Object Types** page, fill in the boxes and select **Next**. Use the table that follows the image for guidance on the individual boxes.
+
+ ![Screenshot that shows the Object Types page.](.\media\on-premises-sql-connector-configure\sql-12.png)
|Property|Description|
|--|--|
- |Target Object|The object that you are configuring.|
- |Anchor|The attribute that will be used as the objects anchor. This attribute should be unique in the target system. The Azure AD provisioning service will query the ECMA host using this attribute after the initial cycle. This anchor value should be the same as the anchor value in schema 3.|
- |Query attribute|Used by the ECMA host to query the in-memory cache. This attribute should be unique.|
- |DN|The attribute that is used for the target objects distinguished name. The autogenerate option should be selected in most cases. If deselected, ensure that the DN attribute is mapped to an attribute in Azure AD that stores the DN in this format: CN = anchorValue, Object = objectType|
+ |Target object|The object that you're configuring.|
+ |Anchor|The attribute that will be used as the object's anchor. This attribute should be unique in the target system. The Azure AD provisioning service will query the ECMA host by using this attribute after the initial cycle. This anchor value should be the same as the anchor value in Schema 3.|
+ |Query Attribute|Used by the ECMA host to query the in-memory cache. This attribute should be unique.|
+ |DN|The attribute that's used for the target object's distinguished name. The **Autogenerated** checkbox should be selected in most cases. If it isn't selected, ensure that the DN attribute is mapped to an attribute in Azure AD that stores the DN in this format: CN = anchorValue, Object = objectType.|
- 14. The ECMA host discovers the attributes supported by the target system. You can choose which of those attributes you would like to expose to Azure AD. These attributes can then be configured in the Azure portal for provisioning. On the **Select Attributes** page, select attributes from the drop-down to add.
- ![Enter attributes](.\media\on-premises-sql-connector-configure\sql-13.png)
-
-15. On the **Deprovisioning** page, review the deprovisioning information and make adjustments as necessary. Attributes selected in the previous page will not be available to select in the deprovisioning page. Click Finish.
- ![Enter deprovisioning information](.\media\on-premises-sql-connector-configure\sql-14.png)
+ 1. The ECMA host discovers the attributes supported by the target system. You can choose which of those attributes you want to expose to Azure AD. These attributes can then be configured in the Azure portal for provisioning. On the **Select Attributes** page, select attributes from the dropdown list to add.
+
+ ![Screenshot that shows the Select Attributes page.](.\media\on-premises-sql-connector-configure\sql-13.png)
+1. On the **Deprovisioning** page, review the deprovisioning information and make adjustments as necessary. Attributes selected on the previous page won't be available to select on the **Deprovisioning** page. Select **Finish**.
+ ![Screenshot that shows the Deprovisioning page.](.\media\on-premises-sql-connector-configure\sql-14.png)
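
The DN format called out in the object types table earlier (CN = anchorValue, Object = objectType) can be easier to picture with concrete values. Here's a minimal sketch that builds such a DN; the anchor value shown is hypothetical and not from this article.

```python
# Hypothetical values, for illustration only.
anchor_value = "jdoe@contoso.com"  # whatever your anchor attribute holds for a given object
object_type = "User"               # the target object type configured above

dn = f"CN = {anchor_value}, Object = {object_type}"
print(dn)  # CN = jdoe@contoso.com, Object = User
```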
-## Next Steps
+## Next steps
- [App provisioning](user-provisioning.md)
- [Azure AD ECMA Connector Host installation](on-premises-ecma-install.md)
- [Azure AD ECMA Connector Host configuration](on-premises-ecma-configure.md)
-- [Tutorial: ECMA Connector Host Generic SQL Connector](tutorial-ecma-sql-connector.md)
+- [Tutorial: ECMA Connector Host generic SQL connector](tutorial-ecma-sql-connector.md)
active-directory Tutorial Ecma Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/tutorial-ecma-sql-connector.md
Title: Azure AD ECMA Connector Host Generic SQL Connector tutorial
-description: This tutorial describes how to use the On-premises application provisioning generic SQL connector.
+ Title: Azure AD ECMA Connector Host generic SQL connector tutorial
+description: This tutorial describes how to use the on-premises application provisioning generic SQL connector.
-# Azure AD ECMA Connector Host Generic SQL Connector tutorial
+# Azure AD ECMA Connector Host generic SQL connector tutorial
>[!IMPORTANT]
-> The on-premises provisioning preview is currently in an invitation-only preview. You can request access to the capability [here](https://aka.ms/onpremprovisioningpublicpreviewaccess). We will open the preview to more customers and connectors over the next few months as we prepare for general availability.
-
-This tutorial describes the steps you need to perform to automatically provision and deprovision users from Azure AD into a SQL DB. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
-
-This tutorial covers how to setup and use the generic SQL connector with the Azure AD ECMA Connector Host.
-
-## Step 1 - Prepare the sample database
-On a server running SQL Server, run the SQL script found in [Appendix A](#appendix-a). This script creates a sample database with the name CONTOSO. This is the database that we will be provisioning users in to.
--
-## Step 2 - Create the DSN connection file
-The Generic SQL Connector is a DSN file to connect to the SQL server. First we need to create a file with the ODBC connection information.
-
-1. Start the ODBC management utility on your server:
- ![ODBC management](./media/tutorial-ecma-sql-connector/odbc.png)
-2. Select the tab **File DSN**. Click **Add...**.
- ![Add file dsn](./media/tutorial-ecma-sql-connector/dsn-2.png)
-3. Select SQL Server Native Client 11.0 and click **Next**.
- ![Choose native client](./media/tutorial-ecma-sql-connector/dsn-3.png)
-4. Give the file a name, such as **GenericSQL** and click **Next**.
- ![Name the connector](./media/tutorial-ecma-sql-connector/dsn-4.png)
-5. Click **Finish**.
- ![Finish](./media/tutorial-ecma-sql-connector/dsn-5.png)
-6. Now configure the connection. Enter **APP1** for the name of the server and click **Next**.
- ![Enter server name](./media/tutorial-ecma-sql-connector/dsn-6.png)
-7. Keep Windows Authentication and click **Next**.
- ![Windows authentication](./media/tutorial-ecma-sql-connector/dsn-7.png)
-8. Provide the name of the sample database, **CONTOSO**.
- ![Enter database name](./media/tutorial-ecma-sql-connector/dsn-8.png)
-9. Keep everything default on this screen. Click **Finish**.
- ![Click finish](./media/tutorial-ecma-sql-connector/dsn-9.png)
-10. To verify everything is working as expected, click **Test Data Source**.
- ![Test data source](./media/tutorial-ecma-sql-connector/dsn-10.png)
-11. Make sure the test is successful.
- ![Success](./media/tutorial-ecma-sql-connector/dsn-11.png)
-12. Click **OK**. Click **OK**. Close ODBC Data Source Administrator.
-
-## Step 3 - Download and install the Azure AD Connect Provisioning Agent Package
+> The on-premises provisioning preview is currently in an invitation-only preview. To request access to the capability, use the [access request form](https://aka.ms/onpremprovisioningpublicpreviewaccess). We'll open the preview to more customers and connectors over the next few months as we prepare for general availability.
+
+This tutorial describes the steps you need to perform to automatically provision and deprovision users from Azure Active Directory (Azure AD) into a SQL database. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+
+This tutorial covers how to set up and use the generic SQL connector with the Azure AD ECMA Connector Host.
+
+## Prepare the sample database
+On a server running SQL Server, run the SQL script found in [Appendix A](#appendix-a). This script creates a sample database with the name CONTOSO. This is the database that you'll be provisioning users into.
++
+## Create the DSN connection file
+The generic SQL connector uses a DSN file to connect to the SQL server. First, you need to create a file with the ODBC connection information.
+
+1. Start the ODBC management utility on your server.
+
+ ![Screenshot that shows ODBC management.](./media/tutorial-ecma-sql-connector/odbc.png)
+1. Select the **File DSN** tab, and select **Add**.
+
+ ![Screenshot that shows the File DSN tab.](./media/tutorial-ecma-sql-connector/dsn-2.png)
+1. Select **SQL Server Native Client 11.0** and select **Next**.
+
+ ![Screenshot that shows choosing a native client.](./media/tutorial-ecma-sql-connector/dsn-3.png)
+1. Give the file a name, such as **GenericSQL**, and select **Next**.
+
+ ![Screenshot that shows naming the connector.](./media/tutorial-ecma-sql-connector/dsn-4.png)
+1. Select **Finish**.
+
+ ![Screenshot that shows Finish.](./media/tutorial-ecma-sql-connector/dsn-5.png)
+1. Now configure the connection. Enter **APP1** for the name of the server and select **Next**.
+
+ ![Screenshot that shows entering a server name.](./media/tutorial-ecma-sql-connector/dsn-6.png)
+1. Keep Windows authentication and select **Next**.
+
+ ![Screenshot that shows Windows authentication.](./media/tutorial-ecma-sql-connector/dsn-7.png)
+1. Enter the name of the sample database, which is **CONTOSO**.
+
+ ![Screenshot that shows entering a database name.](./media/tutorial-ecma-sql-connector/dsn-8.png)
+1. Keep everything default on this screen, and select **Finish**.
+
+ ![Screenshot that shows selecting Finish.](./media/tutorial-ecma-sql-connector/dsn-9.png)
+1. To check everything is working as expected, select **Test Data Source**.
+
+ ![Screenshot that shows Test Data Source.](./media/tutorial-ecma-sql-connector/dsn-10.png)
+1. Make sure the test is successful.
+
+ ![Screenshot that shows success.](./media/tutorial-ecma-sql-connector/dsn-11.png)
+1. Select **OK** twice. Close the ODBC Data Source Administrator.
+
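
Before moving on, you can optionally confirm that the file DSN actually reaches the CONTOSO database. The following is only a sketch: it assumes the `pyodbc` package and the ODBC driver referenced by the DSN are installed, that the DSN uses Windows authentication as configured above, and that you substitute the real path to the .dsn file you saved.

```python
import pyodbc  # assumes the pyodbc package and the ODBC driver referenced by the DSN are installed

# Open a connection through the file DSN created above (the path is a placeholder).
conn = pyodbc.connect(r"FILEDSN=C:\path\to\GenericSQL.dsn;Trusted_Connection=yes;")

# Ask the server which database the DSN landed in; this should print CONTOSO.
row = conn.cursor().execute("SELECT DB_NAME()").fetchone()
print(row[0])
conn.close()
```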
+## Download and install the Azure AD Connect Provisioning Agent Package
1. Sign in to the server you'll use with enterprise admin permissions.
- 2. Sign in to the Azure portal, and then go to **Azure Active Directory**.
- 3. In the left menu, select **Azure AD Connect**.
- 4. Select **Manage cloud sync** > **Review all agents**.
- 5. Download the Azure AD Connect provisioning agent package from the Azure portal.
- 6. Accept the terms and click download.
- 7. Run the Azure AD Connect provisioning installer AADConnectProvisioningAgentSetup.msi.
- 8. On the **Microsoft Azure AD Connect Provisioning Agent Package** screen, accept the licensing terms and select **Install**.
- ![Microsoft Azure AD Connect Provisioning Agent Package screen](media/on-premises-ecma-install/install-1.png)</br>
- 9. After this operation finishes, the configuration wizard starts. Click **Next**.
- ![Welcome screen](media/on-premises-ecma-install/install-2.png)</br>
- 10. On the **Select Extension** screen, select **On-premises application provisioning (Azure AD to application)** and click **Next**.
- ![Select extension](media/on-premises-ecma-install/install-3.png)</br>
- 12. Use your global administrator account and sign in to Azure AD.
- ![Azure signin](media/on-premises-ecma-install/install-4.png)</br>
- 13. On the **Agent Configuration** screen, click **Confirm**.
- ![Confirm installation](media/on-premises-ecma-install/install-5.png)</br>
- 14. Once the installation is complete, you should see a message at the bottom of the wizard. Click **Finish**.
- ![Finish button](media/on-premises-ecma-install/install-6.png)</br>
- 15. Click **Close**.
-
-## Step 4 - Configure the Azure AD ECMA Connector Host
-1. On the desktop, click the ECMA shortcut.
-2. Once the ECMA Connector Host Configuration starts, leave the default port 8585 and click **Generate**. This will generate a certificate. The auto-generated certificate will be self-signed / part of the trusted root and the SAN matches the hostname.
- ![Configure your settings](.\media\on-premises-ecma-configure\configure-1.png)
-3. Click **Save**.
-
-## Step 5 - Create a generic SQL connector
- 1. Click on the ECMA Connector Host shortcut on the desktop.
- 2. Select **New Connector**.
- ![Choose new connector](.\media\on-premises-sql-connector-configure\sql-1.png)
-
- 3. On the **Properties** page, fill in the boxes with the values specified in the table below and click **Next**.
- ![Enter properties](.\media\tutorial-ecma-sql-connector\conn-1.png)
+ 1. Sign in to the Azure portal, and then go to **Azure Active Directory**.
+ 1. On the menu on the left, select **Azure AD Connect**.
+ 1. Select **Manage cloud sync** > **Review all agents**.
+ 1. Download the Azure AD Connect Provisioning Agent Package from the Azure portal.
+ 1. Accept the terms and select **Download**.
+ 1. Run the Azure AD Connect provisioning installer AADConnectProvisioningAgentSetup.msi.
+ 1. On the **Microsoft Azure AD Connect Provisioning Agent Package** screen, select **Install**.
+
+ ![Screenshot that shows the Microsoft Azure AD Connect Provisioning Agent Package screen.](media/on-premises-ecma-install/install-1.png)</br>
+ 1. After this operation finishes, the configuration wizard starts. Select **Next**.
+
+ ![Screenshot that shows the Welcome screen.](media/on-premises-ecma-install/install-2.png)</br>
+ 1. On the **Select Extension** screen, select **On-premises application provisioning (Azure AD to application)** and select **Next**.
+
+ ![Screenshot that shows the Select Extension screen.](media/on-premises-ecma-install/install-3.png)</br>
+ 1. Use your global administrator account and sign in to Azure AD.
+
+ ![Screenshot that shows the Azure sign-in screen.](media/on-premises-ecma-install/install-4.png)</br>
+ 1. On the **Agent configuration** screen, select **Confirm**.
+
+ ![Screenshot that shows confirming the installation.](media/on-premises-ecma-install/install-5.png)</br>
+ 1. After the installation is complete, you should see a message at the bottom of the wizard. Select **Exit**.
+
+ ![Screenshot that shows the Exit button.](media/on-premises-ecma-install/install-6.png)</br>
+
+## Configure the Azure AD ECMA Connector Host
+1. On the desktop, select the ECMA shortcut.
+1. After the ECMA Connector Host Configuration starts, leave the default port **8585** and select **Generate** to generate a certificate. The autogenerated certificate will be self-signed as part of the trusted root. The SAN matches the host name.
+
+ ![Screenshot that shows configuring your settings.](.\media\on-premises-ecma-configure\configure-1.png)
+1. Select **Save**.
+
+## Create a generic SQL connector
+ 1. Select the ECMA Connector Host shortcut on the desktop.
+ 1. Select **New Connector**.
+
+ ![Screenshot that shows choosing New Connector.](.\media\on-premises-sql-connector-configure\sql-1.png)
+
+ 1. On the **Properties** page, fill in the boxes with the values specified in the table that follows the image and select **Next**.
+
+ ![Screenshot that shows entering properties.](.\media\tutorial-ecma-sql-connector\conn-1.png)
|Property|Value|
|--|--|
|Name|SQL|
|Autosync timer (minutes)|120|
- |Secret Token|Enter your own key here. It should be 12 characters minimum.|
- |Extension DLL|For a generic sql connector, select Microsoft.IAM.Connector.GenericSql.dll.|
- 4. On the **Connectivity** page, fill in the boxes with the values specified in the table below and click **Next**.
- ![Enter connectivity](.\media\tutorial-ecma-sql-connector\conn-2.png)
+ |Secret Token|Enter your own key here. It should be 12 characters minimum.|
+ |Extension DLL|For a generic SQL connector, select **Microsoft.IAM.Connector.GenericSql.dll**.|
+ 1. On the **Connectivity** page, fill in the boxes with the values specified in the table that follows the image and select **Next**.
+
+ ![Screenshot that shows the Connectivity page.](.\media\tutorial-ecma-sql-connector\conn-2.png)
|Property|Value|
|--|--|
- |DSN File|Navigate to the file created at the beginning of the tutorial in Step 2.|
+ |DSN File|Go to the file created at the beginning of the tutorial in "Create the DSN connection file."|
|User Name|contoso\administrator|
- |Password|the administrators password.|
- 5. On the **Schema 1** page, fill in the boxes with the values specified in the table below and click **Next**.
- ![Enter schema 1](.\media\tutorial-ecma-sql-connector\conn-3.png)
+ |Password|Enter the administrator's password.|
+ 1. On the **Schema 1** page, fill in the boxes with the values specified in the table that follows the image and select **Next**.
+
+ ![Screenshot that shows the Schema 1 page.](.\media\tutorial-ecma-sql-connector\conn-3.png)
|Property|Value|
|--|--|
|Object type detection method|Fixed Value|
|Fixed value list/Table/View/SP|User|
- 6. On the **Schema 2** page,fill in the boxes with the values specified in the table below and click **Next**.
- ![Enter schema 2](.\media\tutorial-ecma-sql-connector\conn-4.png)
+ 1. On the **Schema 2** page, fill in the boxes with the values specified in the table that follows the image and select **Next**.
+
+ ![Screenshot that shows the Schema 2 page.](.\media\tutorial-ecma-sql-connector\conn-4.png)
|Property|Value|
|--|--|
|User:Attribute Detection|Table|
|User:Table/View/SP|Employees|
- 7. On the **Schema 3** page, fill in the boxes with the values specified in the table below and click **Next**.
- ![Enter schema 3](.\media\tutorial-ecma-sql-connector\conn-5.png)
+ 1. On the **Schema 3** page, fill in the boxes with the values specified in the table that follows the image and select **Next**.
+
+ ![Screenshot that shows the Schema 3 page.](.\media\tutorial-ecma-sql-connector\conn-5.png)
|Property|Description|
|--|--|
|Select Anchor for :User|User:ContosoLogin|
|Select DN attribute for User|AzureID|
- 8. On the **Schema 4** page, leave the defaults and click **Next**.
- ![Enter schema 4](.\media\tutorial-ecma-sql-connector\conn-6.png)
- 9. On the **Global** page, fill in the boxes and click next. Use the table below the image for guidance on the individual boxes.
- ![Enter global information](.\media\tutorial-ecma-sql-connector\conn-7.png)
+ 1. On the **Schema 4** page, leave the defaults and select **Next**.
+
+ ![Screenshot that shows the Schema 4 page.](.\media\tutorial-ecma-sql-connector\conn-6.png)
+ 1. On the **Global** page, fill in the boxes and select **Next**. Use the table that follows the image for guidance on the individual boxes.
+
+ ![Screenshot that shows the Global page.](.\media\tutorial-ecma-sql-connector\conn-7.png)
|Property|Description|
|--|--|
|Data Source Date Time Format|yyyy-MM-dd HH:mm:ss|
- 10. On the **Select partition** page, click **Next**.
- ![Enter partition information](.\media\tutorial-ecma-sql-connector\conn-8.png)
+ 1. On the **Partitions** page, select **Next**.
+
+ ![Screenshot that shows the Partitions page.](.\media\tutorial-ecma-sql-connector\conn-8.png)
- 11. On the **Run Profiles** page, keep **Export** and add **Full Import**. Click **Next**.
- ![Enter run profiles](.\media\tutorial-ecma-sql-connector\conn-9.png)
+ 1. On the **Run Profiles** page, keep the **Export** checkbox selected. Select the **Full import** checkbox and select **Next**.
+
+ ![Screenshot that shows the Run Profiles page.](.\media\tutorial-ecma-sql-connector\conn-9.png)
- 12. On the **Export** page, fill in the boxes and click next. Use the table below the image for guidance on the individual boxes.
- ![Enter Export information](.\media\tutorial-ecma-sql-connector\conn-10.png)
+ 1. On the **Export** page, fill in the boxes and select **Next**. Use the table that follows the image for guidance on the individual boxes.
+
+ ![Screenshot that shows the Export page.](.\media\tutorial-ecma-sql-connector\conn-10.png)
|Property|Description|
|--|--|
|Operation Method|Table|
|Table/View/SP|Employees|
- 12. On the **Full Import** page, fill in the boxes and click **Next**. Use the table below the image for guidance on the individual boxes.
- ![Enter Full import information](.\media\tutorial-ecma-sql-connector\conn-11.png)
+ 1. On the **Full Import** page, fill in the boxes and select **Next**. Use the table that follows the image for guidance on the individual boxes.
+
+ ![Screenshot that shows the Full Import page.](.\media\tutorial-ecma-sql-connector\conn-11.png)
|Property|Description|
|--|--|
|Operation Method|Table|
|Table/View/SP|Employees|
- 13. On the **Object Types** page, fill in the boxes and click **Next**. Use the table below the image for guidance on the individual boxes.
+ 1. On the **Object Types** page, fill in the boxes and select **Next**. Use the table that follows the image for guidance on the individual boxes.
- **Anchor** - this attribute should be unique in the target system. The Azure AD provisioning service will query the ECMA host using this attribute after the initial cycle. This anchor value should be the same as the anchor value in schema 3.
-
- **Query attribute** - used by the ECMA host to query the in-memory cache. This attribute should be unique.
-
- **DN** - The autogenerate option should be selected in most cases. If deselected, ensure that the DN attribute is mapped to an attribute in Azure AD that stores the DN in this format: CN = anchorValue, Object = objectType
+ - **Anchor**: This attribute should be unique in the target system. The Azure AD provisioning service will query the ECMA host by using this attribute after the initial cycle. This anchor value should be the same as the anchor value in schema 3.
+ - **Query Attribute**: Used by the ECMA host to query the in-memory cache. This attribute should be unique.
+ - **DN**: The **Autogenerated** option should be selected in most cases. If it isn't selected, ensure that the DN attribute is mapped to an attribute in Azure AD that stores the DN in this format: CN = anchorValue, Object = objectType.
- ![Enter object types](.\media\tutorial-ecma-sql-connector\conn-12.png)
+ ![Screenshot that shows the Object Types page.](.\media\tutorial-ecma-sql-connector\conn-12.png)
|Property|Description|
|--|--|
- |Target Object|User|
+ |Target object|User|
|Anchor|ContosoLogin|
- |Query attribute|AzureID|
+ |Query Attribute|AzureID|
|DN|AzureID|
|Autogenerated|Checked|
- 14. On the **Select Attributes** page, add all of the attributes in the drop-down and click **Next**.
- ![Enter attributes](.\media\tutorial-ecma-sql-connector\conn-13.png)
-
- The set attribute dropdown will show any attribute that has been discovered in the target system and has **not been** chosen in the previous select attributes page.
- 15. On the **Deprovisioning** page, under **Disable flow**, select **Delete**. Click **Finish**.
- ![Enter deprovisioning information](.\media\tutorial-ecma-sql-connector\conn-14.png)
-
-## Step 6 - Ensure ECMA2Host service is running
-1. On the server the running the Azure AD ECMA Connector Host, click Start.
-2. Type run and enter services.msc in the box
-3. In the services, ensure that **Microsoft ECMA2Host** is present and running. If not, click **Start**.
- ![Service is running](.\media\on-premises-ecma-configure\configure-2.png)
-
-## Step 7 - Add Enterprise application
-1. Sign-in to the Azure portal as an application administrator
-2. In the portal, navigate to Azure Active Directory, **Enterprise Applications**.
-3. Click on **New Application**.
- ![Add new application](.\media\on-premises-ecma-configure\configure-4.png)
-4. Search the gallery for **On-premises ECMA app** and click **Create**.
-
-## Step 8 - Configure the application and test
-1. Once it has been created, click he **Provisioning page**.
-2. Click **get started**.
- ![get started](.\media\on-premises-ecma-configure\configure-6.png)
-3. On the **Provisioning page**, change the mode to **Automatic**
- ![Mode to automatic](.\media\on-premises-ecma-configure\configure-7.png)
-4. In the on-premises connectivity section, select the agent that you just deployed and click **assign agent(s)**.
+ 1. On the **Select Attributes** page, add all the attributes in the dropdown list and select **Next**.
+
+ ![Screenshot that shows the Select Attributes page.](.\media\tutorial-ecma-sql-connector\conn-13.png)
+
+ The **Attribute** dropdown list shows any attribute that was discovered in the target system and *wasn't* chosen on the previous **Select Attributes** page.
+ 1. On the **Deprovisioning** page, under **Disable flow**, select **Delete**. Select **Finish**.
+
+ ![Screenshot that shows the Deprovisioning page.](.\media\tutorial-ecma-sql-connector\conn-14.png)
+
+## Ensure ECMA2Host service is running
+1. On the server running the Azure AD ECMA Connector Host, select **Start**.
+1. Enter **run** and enter **services.msc** in the box.
+1. In the **Services** list, ensure that **Microsoft ECMA2Host** is present and running. If not, select **Start**.
+
+ ![Screenshot that shows the service is running.](.\media\on-premises-ecma-configure\configure-2.png)
+
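
If you want to double-check that the host is reachable before continuing, a quick port test works too. This sketch assumes you kept the default port 8585 chosen earlier and run it on the same server.

```python
import socket

# The ECMA Connector Host listens on the port configured in its settings (8585 by default).
try:
    with socket.create_connection(("localhost", 8585), timeout=5):
        print("The ECMA Connector Host is accepting connections on port 8585.")
except OSError as error:
    print(f"Nothing answered on port 8585 yet: {error}")
```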
+## Add an enterprise application
+1. Sign in to the Azure portal as an application administrator.
+1. In the portal, go to **Azure Active Directory** > **Enterprise applications**.
+1. Select **New application**.
+
+ ![Screenshot that shows adding a new application.](.\media\on-premises-ecma-configure\configure-4.png)
+1. Search the gallery for **On-premises ECMA app** and select **Create**.
+
+## Configure the application and test
+1. After it has been created, select the **Provisioning** page.
+1. Select **Get started**.
+
+ ![Screenshot that shows get started.](.\media\on-premises-ecma-configure\configure-1.png)
+1. On the **Provisioning** page, change the mode to **Automatic**.
+
+ ![Screenshot that shows changing the mode to Automatic.](.\media\on-premises-ecma-configure\configure-7.png)
+1. In the **On-Premises Connectivity** section, select the agent that you just deployed and select **Assign Agent(s)**.
>[!NOTE]
- >After adding the agent, you need to wait 10 minutes for the registration to complete. The connectivity test will not work until the registration completes.
+ >After you add the agent, wait 10 minutes for the registration to complete. The connectivity test won't work until the registration completes.
>
- >Alternatively, you can force the agent registration to complete by restarting the provisioning agent on your server. Navigating to your server > search for services in the windows search bar > identify the Azure AD Connect Provisioning Agent Service > right click on the service and restart.
+ >Alternatively, you can force the agent registration to complete by restarting the provisioning agent on your server. Go to your server, search for **services** in the Windows search bar, identify the **Azure AD Connect Provisioning Agent Service**, right-click the service, and restart.
- ![Restart an agent](.\media\on-premises-ecma-configure\configure-8.png)
-5. After 10 minutes, under the **Admin credentials** section, enter the following URL, replacing "connectorName" portion with the name of the connector on the ECMA Host. You may also replace localhost with the host name.
+ ![Screenshot that shows restarting an agent.](.\media\on-premises-ecma-configure\configure-8.png)
+1. After 10 minutes, under the **Admin credentials** section, enter the following URL. Replace the `connectorName` portion with the name of the connector on the ECMA host. You can also replace `localhost` with the host name. An example of the resulting URL, using the connector name from this tutorial, appears after these steps.
|Property|Value|
|--|--|
|Tenant URL|https://localhost:8585/ecma2host_connectorName/scim|
-6. Enter the secret token value that you defined when creating the connector.
-7. Click Test Connection and wait one minute.
- ![Assign an agent](.\media\on-premises-ecma-configure\configure-5.png)
-8. Once connection test is successful, click **save**.</br>
- ![Test an agent](.\media\on-premises-ecma-configure\configure-9.png)
-
-## Step 9 - Assign users to application
-Now that you have the Azure AD ECMA Connector Host talking with Azure AD you can move on to configuring who is in scope for provisioning.
-
-1. In the Azure portal select **Enterprise Applications**
-2. Click on the **on-premises provisioning** application
-3. On the left, under **Manage** click on **Users and groups**
-4. Click **Add user/group**
- ![Add user](.\media\tutorial-ecma-sql-connector\app-2.png)
-5. Under **Users** click **None selected**
- ![None selected](.\media\tutorial-ecma-sql-connector\app-3.png)
-6. Select users from the right and click **Select**.</br>
- ![Select users](.\media\tutorial-ecma-sql-connector\app-4.png)
-7. Now click **Assign**.
- ![Assign users](.\media\tutorial-ecma-sql-connector\app-5.png)
-
-## Step 10 - Configure attribute mappings
-Now we need to map attributes between the on-premises application and our SQL server.
+1. Enter the **Secret Token** value that you defined when you created the connector.
+1. Select **Test Connection**, and wait one minute.
+
+ ![Screenshot that shows assigning an agent.](.\media\on-premises-ecma-configure\configure-5.png)
+1. After the connection test is successful, select **Save**.</br>
+
+ ![Screenshot that shows testing an agent.](.\media\on-premises-ecma-configure\configure-9.png)
+
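
As an example of the tenant URL format mentioned in the steps above: this tutorial named the connector **SQL** on the connector's **Properties** page, so the URL would be built as in the following sketch. Substitute your own connector name, and the host name if you're not using localhost.

```python
# Build the ECMA Connector Host tenant URL from the connector name.
connector_name = "SQL"  # the Name entered on the connector's Properties page in this tutorial
tenant_url = f"https://localhost:8585/ecma2host_{connector_name}/scim"
print(tenant_url)  # https://localhost:8585/ecma2host_SQL/scim
```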
+## Assign users to an application
+Now that you have the Azure AD ECMA Connector Host talking with Azure AD, you can move on to configuring who's in scope for provisioning.
+
+1. In the Azure portal, select **Enterprise applications**.
+1. Select the **On-premises provisioning** application.
+1. On the left, under **Manage**, select **Users and groups**.
+1. Select **Add user/group**.
+
+ ![Screenshot that shows adding a user.](.\media\tutorial-ecma-sql-connector\app-2.png)
+1. Under **Users**, select **None Selected**.
+
+ ![Screenshot that shows None Selected.](.\media\tutorial-ecma-sql-connector\app-3.png)
+1. Select users from the right and select the **Select** button.</br>
+
+ ![Screenshot that shows Select users.](.\media\tutorial-ecma-sql-connector\app-4.png)
+1. Now select **Assign**.
+
+ ![Screenshot that shows Assign users.](.\media\tutorial-ecma-sql-connector\app-5.png)
+
+## Configure attribute mappings
+Now you need to map attributes between the on-premises application and your SQL server.
#### Configure attribute mapping
- 1. In the Azure AD portal, under **Enterprise applications**, click he **Provisioning page**.
- 2. Click **get started**.
- 3. Expand **Mappings** and click **Provision Azure Active Directory Users**
- ![provision a user](.\media\on-premises-ecma-configure\configure-10.png)
- 5. Click **Add new mapping**
- ![Add a mapping](.\media\on-premises-ecma-configure\configure-11.png)
- 6. Specify the source and target attributes and and add all of the mappings in the table below.
-
- |Mapping Type|Source attribute|Target attribute|
+ 1. In the Azure AD portal, under **Enterprise applications**, select the **Provisioning** page.
+ 1. Select **Get started**.
+ 1. Expand **Mappings** and select **Provision Azure Active Directory Users**.
+
+ ![Screenshot that shows provisioning a user.](.\media\on-premises-ecma-configure\configure-10.png)
+ 1. Select **Add New Mapping**.
+
+ ![Screenshot that shows Add New Mapping.](.\media\on-premises-ecma-configure\configure-11.png)
+ 1. Specify the source and target attributes, and add all the mappings in the following table.
+
+ |Mapping type|Source attribute|Target attribute|
|--|--|--|
|Direct|userPrincipalName|urn:ietf:params:scim:schemas:extension:ECMA2Host:2.0:User:ContosoLogin|
|Direct|objectID|urn:ietf:params:scim:schemas:extension:ECMA2Host:2.0:User:AzureID|
|Direct|surName|urn:ietf:params:scim:schemas:extension:ECMA2Host:2.0:User:LastName|
|Direct|mailNickname|urn:ietf:params:scim:schemas:extension:ECMA2Host:2.0:User:textID|
- 7. Click **Save**
- ![Save the mapping](.\media\tutorial-ecma-sql-connector\app-6.png)
+ 1. Select **Save**.
+
+ ![Screenshot that shows saving the mapping.](.\media\tutorial-ecma-sql-connector\app-6.png)
-## Step 11 - Test provisioning
-Now that our attributes are mapped we can test on-demand provisioning with one of our users.
+## Test provisioning
+Now that your attributes are mapped, you can test on-demand provisioning with one of your users.
+
+ 1. In the Azure portal, select **Enterprise applications**.
+ 1. Select the **On-premises provisioning** application.
+ 1. On the left, select **Provisioning**.
+ 1. Select **Provision on demand**.
+ 1. Search for one of your test users, and select **Provision**.
- 1. In the Azure portal select **Enterprise Applications**
- 2. Click on the **on-premises provisioning** application
- 3. On the left, click **Provisioning**.
- 4. Click **Provision on-demand**
- 5. Search for one of your test users and click **Provision**
- ![Test provisioning](.\media\on-premises-ecma-configure\configure-13.png)
+ ![Screenshot that shows testing provisioning.](.\media\on-premises-ecma-configure\configure-13.png)
-### Step 12 - Start provisioning users
- 1. Once on-demand provisioning is successful, change back to the provisioning configuration page. Ensure that the scope is set to only assigned users and group, turn **provisioning On**, and click **Save**.
- ![Start provisioning](.\media\on-premises-ecma-configure\configure-14.png)
- 2. Wait several minutes for provisioning to start (it may take up to 40 minutes). You can learn more about the provisioning service performance here. After the provisioning job has been completed, as described in the next section, you can change the provisioning status to Off, and click Save. This will stop the provisioning service from running in the future.
+## Start provisioning users
+ 1. After on-demand provisioning is successful, change back to the provisioning configuration page. Ensure that the scope is set to only assigned users and groups, turn provisioning **On**, and select **Save**.
+
+ ![Screenshot that shows Start provisioning.](.\media\on-premises-ecma-configure\configure-14.png)
+ 1. Wait several minutes for provisioning to start. It might take up to 40 minutes. After the provisioning job has been completed, as described in the next section, you can change the provisioning status to **Off**, and select **Save**. This action stops the provisioning service from running in the future.
-### Step 13 - Verify users have been successfully provisioned
+## Check that users were successfully provisioned
After waiting, check the SQL database to ensure users are being provisioned.
- ![Verify users are provisioned](.\media\on-premises-ecma-configure\configure-15.png)
+
+ ![Screenshot that shows checking that users are provisioned.](.\media\on-premises-ecma-configure\configure-15.png)
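
If you prefer to check from a script, a quick query against the Employees table used throughout this tutorial works as well. This is only a sketch: it assumes the `pyodbc` package and the ODBC driver referenced by the DSN are installed, and it reuses the file DSN created earlier (the path is a placeholder).

```python
import pyodbc  # assumes the pyodbc package and the ODBC driver referenced by the DSN are installed

# Connect through the file DSN created earlier (the path is a placeholder) and list provisioned rows.
conn = pyodbc.connect(r"FILEDSN=C:\path\to\GenericSQL.dsn;Trusted_Connection=yes;")
for row in conn.cursor().execute("SELECT * FROM Employees"):
    print(row)
conn.close()
```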
## Appendix A
-**SQL script to create the sample database**
+Use the following SQL script to create the sample database.
```SQL
Creating the Database
GO
```
-## Next Steps
+## Next steps
- [Troubleshoot on-premises application provisioning](on-premises-ecma-troubleshoot.md)
- [Review known limitations](known-issues.md)
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/user-provisioning.md
# What is app provisioning in Azure Active Directory?
-In Azure Active Directory (Azure AD), the term **app provisioning** refers to automatically creating user identities and roles for applications.
+In Azure Active Directory (Azure AD), the term *app provisioning* refers to automatically creating user identities and roles for applications.
-![provisioning scenarios](../governance/media/what-is-provisioning/provisioning.png)
+![Diagram that shows provisioning scenarios.](../governance/media/what-is-provisioning/provisioning.png)
-Azure AD to SaaS application provisioning refers to automatically creating user identities and roles in the cloud ([SaaS](https://azure.microsoft.com/overview/what-is-saas/)) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Common scenarios include provisioning an Azure AD user into applications like [Dropbox](../../active-directory/saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../../active-directory/saas-apps/servicenow-provisioning-tutorial.md), and more.
+Azure AD to software as a service (SaaS) application provisioning refers to automatically creating user identities and roles in the cloud ([SaaS](https://azure.microsoft.com/overview/what-is-saas/)) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Common scenarios include provisioning an Azure AD user into applications like [Dropbox](../../active-directory/saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../../active-directory/saas-apps/servicenow-provisioning-tutorial.md), and more.
-Azure AD supports provisioning users into SaaS applications as well as applications hosted on-premises or an IaaS solution such as a virtual machine. You may have a legacy application that relies on an LDAP user store or a SQL DB. The Azure AD provisioning service allows you to create, update, and delete users into on-premises applications without having to open up firewalls or dealing with TCP ports.
+Azure AD supports provisioning users into SaaS applications and applications hosted on-premises or an infrastructure as a service (IaaS) solution such as a virtual machine. You might have a legacy application that relies on an LDAP user store or a SQL database. By using the Azure AD provisioning service, you can create, update, and delete users into on-premises applications without having to open up firewalls or deal with TCP ports.
-Using lightweight agents, you can provision users into on-premises application and govern access. When used in conjunction with the application proxy, Azure AD can allow you to manage access to your on-premises application, providing automatic user provisioning (with the provisioning service) as well as single sign-on (with app proxy).
+Using lightweight agents, you can provision users into on-premises applications and govern access. When Azure AD is used with the application proxy, you can manage access to your on-premises application and provide automatic user provisioning (with the provisioning service) and single sign-on (with app proxy).
App provisioning lets you:
- **Automate provisioning**: Automatically create new accounts in the right systems for new people when they join your team or organization.
-- **Automate deprovisioning:** Automatically deactivate accounts in the right systems when people leave the team or organization.
-- **Synchronize data between systems:** Ensure that the identities in your apps and systems are kept up to date based on changes in the directory or your human resources system.
-- **Provision groups:** Provision groups to applications that support them.
-- **Govern access:** Monitor and audit who has been provisioned into your applications.
-- **Seamlessly deploy in brown field scenarios:** Match existing identities between systems and allow for easy integration, even when users already exist in the target system.
-- **Use rich customization:** Take advantage of customizable attribute mappings that define what user data should flow from the source system to the target system.
-- **Get alerts for critical events:** The provisioning service provides alerts for critical events, and allows for Log Analytics integration where you can define custom alerts to suite your business needs.
+- **Automate deprovisioning**: Automatically deactivate accounts in the right systems when people leave the team or organization.
+- **Synchronize data between systems**: Ensure that the identities in your apps and systems are kept up to date based on changes in the directory or your human resources system.
+- **Provision groups**: Provision groups to applications that support them.
+- **Govern access**: Monitor and audit who has been provisioned into your applications.
+- **Seamlessly deploy in brown field scenarios**: Match existing identities between systems and allow for easy integration, even when users already exist in the target system.
+- **Use rich customization**: Take advantage of customizable attribute mappings that define what user data should flow from the source system to the target system.
+- **Get alerts for critical events**: The provisioning service provides alerts for critical events and allows for Log Analytics integration where you can define custom alerts to suit your business needs.
-## What is System for Cross-domain Identity Management (SCIM)?
+## What is SCIM?
-To help automate provisioning and deprovisioning, apps expose proprietary user and group APIs. However, anyone whoΓÇÖs tried to manage users in more than one app will tell you that every app tries to perform the same simple actions, such as creating or updating users, adding users to groups, or deprovisioning users. Yet, all these simple actions are implemented just a little bit differently, using different endpoint paths, different methods to specify user information, and a different schema to represent each element of information.
+To help automate provisioning and deprovisioning, apps expose proprietary user and group APIs. But anyone who's tried to manage users in more than one app will tell you that every app tries to perform the same actions, such as creating or updating users, adding users to groups, or deprovisioning users. Yet, all these actions are implemented slightly differently by using different endpoint paths, different methods to specify user information, and a different schema to represent each element of information.
-To address these challenges, the SCIM specification provides a common user schema to help users move into, out of, and around apps. SCIM is becoming the de facto standard for provisioning and, when used in conjunction with federation standards like SAML or OpenID Connect, provides administrators an end-to-end standards-based solution for access management.
+To address these challenges, the System for Cross-domain Identity Management (SCIM) specification provides a common user schema to help users move into, out of, and around apps. SCIM is becoming the de facto standard for provisioning and, when used with federation standards like Security Assertions Markup Language (SAML) or OpenID Connect (OIDC), provides administrators an end-to-end standards-based solution for access management.
-For detailed guidance on developing a SCIM endpoint to automate the provisioning and deprovisioning of users and groups to an application, see [Build a SCIM endpoint and configure user provisioning](use-scim-to-provision-users-and-groups.md). For pre-integrated applications in the gallery (Slack, Azure Databricks, Snowflake, etc.), you can skip the developer documentation and use the tutorials provided [here](../../active-directory/saas-apps/tutorial-list.md).
+For detailed guidance on developing a SCIM endpoint to automate the provisioning and deprovisioning of users and groups to an application, see [Build a SCIM endpoint and configure user provisioning](use-scim-to-provision-users-and-groups.md). For pre-integrated applications in the gallery, such as Slack, Azure Databricks, and Snowflake, you can skip the developer documentation and use the tutorials provided in [Tutorials for integrating SaaS applications with Azure Active Directory](../../active-directory/saas-apps/tutorial-list.md).
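
To make the shared schema concrete, here's a minimal sketch of the kind of request a SCIM client sends to create a user. The endpoint URL and bearer token are placeholders, and the attribute names come from the SCIM core user schema; this isn't tied to any specific application.

```csharp
// Minimal sketch of a SCIM 2.0 "create user" request.
// The endpoint URL and bearer token are placeholders; attribute names follow
// the SCIM core user schema (urn:ietf:params:scim:schemas:core:2.0:User).
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ScimCreateUserSample
{
    static async Task Main()
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "<access-token>");

        // A SCIM-compliant service accepts the same shape regardless of the app behind it.
        var user = @"{
            ""schemas"": [""urn:ietf:params:scim:schemas:core:2.0:User""],
            ""userName"": ""bsimon@contoso.com"",
            ""name"": { ""givenName"": ""B."", ""familyName"": ""Simon"" },
            ""active"": true
        }";

        var content = new StringContent(user, Encoding.UTF8, "application/scim+json");
        var response = await client.PostAsync("https://scim.example.com/scim/v2/Users", content);
        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
    }
}
```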
## Manual vs. automatic provisioning

Applications in the Azure AD gallery support one of two provisioning modes:
-* **Manual** provisioning means there is no automatic Azure AD provisioning connector for the app yet. User accounts must be created manually, for example by adding users directly into the app's administrative portal, or uploading a spreadsheet with user account detail. Consult the documentation provided by the app, or contact the app developer to determine what mechanisms are available.
+* **Manual** provisioning means there's no automatic Azure AD provisioning connector for the app yet. User accounts must be created manually. Examples are adding users directly into the app's administrative portal or uploading a spreadsheet with user account detail. Consult the documentation provided by the app, or contact the app developer to determine what mechanisms are available.
+* **Automatic** means that an Azure AD provisioning connector has been developed for this application. Follow the setup tutorial specific to setting up provisioning for the application. App tutorials can be found in [Tutorials for integrating SaaS applications with Azure Active Directory](../../active-directory/saas-apps/tutorial-list.md).
-* **Automatic** means that an Azure AD provisioning connector has been developed for this application. You should follow the setup tutorial specific to setting up provisioning for the application. App tutorials can be found at [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](../../active-directory/saas-apps/tutorial-list.md).
-
-The provisioning mode supported by an application is also visible on the **Provisioning** tab once you've added the application to your **Enterprise apps**.
+The provisioning mode supported by an application is also visible on the **Provisioning** tab after you've added the application to your enterprise apps.
## Benefits of automatic provisioning
-As the number of applications used in modern organizations continues to grow, IT admins are tasked with access management at scale. Standards such as Security Assertions Markup Language (SAML) or Open ID Connect (OIDC) allow admins to quickly set up single sign-on (SSO), but access also requires users to be provisioned into the app. To many admins, provisioning means manually creating every user account or uploading CSV files each week, but these processes are time-consuming, expensive, and error-prone. Solutions such as SAML just-in-time (JIT) have been adopted to automate provisioning, but enterprises also need a solution to deprovision users when they leave the organization or no longer require access to certain apps based on role change.
+As the number of applications used in modern organizations continues to grow, IT admins are tasked with access management at scale. Standards such as SAML or OIDC allow admins to quickly set up single sign-on (SSO), but access also requires users to be provisioned into the app. To many admins, provisioning means manually creating every user account or uploading CSV files each week. These processes are time-consuming, expensive, and error prone. Solutions such as SAML just-in-time (JIT) have been adopted to automate provisioning. Enterprises also need a solution to deprovision users when they leave the organization or no longer require access to certain apps based on role change.
Some common motivations for using automatic provisioning include:
Some common motivations for using automatic provisioning include:
- Easily importing a large number of users into a particular SaaS application or system.
- Having a single set of policies to determine who is provisioned and who can sign in to an app.
-Azure AD user provisioning can help address these challenges. To learn more about how customers have been using Azure AD user provisioning, you can read the [ASOS case study](https://aka.ms/asoscasestudy). The video below provides an overview of user provisioning in Azure AD:
+Azure AD user provisioning can help address these challenges. To learn more about how customers have been using Azure AD user provisioning, read the [ASOS case study](https://aka.ms/asoscasestudy). The following video provides an overview of user provisioning in Azure AD.
> [!VIDEO https://www.youtube.com/embed/_ZjARPpI6NI]
Azure AD user provisioning can help address these challenges. To learn more abou
Azure AD features pre-integrated support for many popular SaaS apps and human resources systems, and generic support for apps that implement specific parts of the [SCIM 2.0 standard](https://techcommunity.microsoft.com/t5/Identity-Standards-Blog/Provisioning-with-SCIM-getting-started/ba-p/880010).
-* **Pre-integrated applications (gallery SaaS apps)**. You can find all applications for which Azure AD supports a pre-integrated provisioning connector in the [list of application tutorials for user provisioning](../saas-apps/tutorial-list.md). The pre-integrated applications listed in the gallery generally use SCIM 2.0-based user management APIs for provisioning.
+* **Pre-integrated applications (gallery SaaS apps)**: You can find all applications for which Azure AD supports a pre-integrated provisioning connector in [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md). The pre-integrated applications listed in the gallery generally use SCIM 2.0-based user management APIs for provisioning.
- ![Salesforce logo](./media/user-provisioning/gallery-app-logos.png)
+ ![Image that shows logos for DropBox, Salesforce, and others.](./media/user-provisioning/gallery-app-logos.png)
- If you want to request a new application for provisioning, you can [request that your application be integrated with our app gallery](../develop/v2-howto-app-gallery-listing.md). For a user provisioning request, we require the application to have a SCIM-compliant endpoint. Please request that the application vendor follow the SCIM standard so we can onboard the app to our platform quickly.
+ If you want to request a new application for provisioning, you can [request that your application be integrated with our app gallery](../develop/v2-howto-app-gallery-listing.md). For a user provisioning request, we require the application to have a SCIM-compliant endpoint. Request that the application vendor follow the SCIM standard so we can onboard the app to our platform quickly.
-* **Applications that support SCIM 2.0**. For information on how to generically connect applications that implement SCIM 2.0-based user management APIs, see [Build a SCIM endpoint and configure user provisioning](use-scim-to-provision-users-and-groups.md).
+* **Applications that support SCIM 2.0**: For information on how to generically connect applications that implement SCIM 2.0-based user management APIs, see [Build a SCIM endpoint and configure user provisioning](use-scim-to-provision-users-and-groups.md).
## How do I set up automatic provisioning to an application?
-For pre-integrated applications listed in the gallery, step-by-step guidance is available for setting up automatic provisioning. See the [list of tutorials for integrated gallery apps](../saas-apps/tutorial-list.md). The following video demonstrates how to set up automatic user provisioning for SalesForce.
+For pre-integrated applications listed in the gallery, step-by-step guidance is available for setting up automatic provisioning. See [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md). The following video demonstrates how to set up automatic user provisioning for SalesForce.
> [!VIDEO https://www.youtube.com/embed/pKzyts6kfrw]
-For other applications that support SCIM 2.0, follow the steps in the article [Build a SCIM endpoint and configure user provisioning](use-scim-to-provision-users-and-groups.md).
+For other applications that support SCIM 2.0, follow the steps in [Build a SCIM endpoint and configure user provisioning](use-scim-to-provision-users-and-groups.md).
## Next steps
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-optional-claims.md
Previously updated : 1/06/2021 Last updated : 7/19/2021
The set of optional claims available by default for applications to use are list
| Name | Description | Token Type | User Type | Notes |
|------|-------------|------------|-----------|-------|
+| `acct` | Users account status in tenant | JWT, SAML | | If the user is a member of the tenant, the value is `0`. If they are a guest, the value is `1`. |
| `auth_time` | Time when the user last authenticated. See OpenID Connect spec.| JWT | | |
-| `tenant_region_scope` | Region of the resource tenant | JWT | | |
+| `ctry` | User's country/region | JWT | | Azure AD returns the `ctry` optional claim if it's present and the value of the field is a standard two-letter country/region code, such as FR, JP, SZ, and so on. |
+| `email` | The addressable email for this user, if the user has one. | JWT, SAML | MSA, Azure AD | This value is included by default if the user is a guest in the tenant. For managed users (the users inside the tenant), it must be requested through this optional claim or, on v2.0 only, with the OpenID scope. For managed users, the email address must be set in the [Office admin portal](https://portal.office.com/adminportal/home#/users).|
+| `fwd` | IP address.| JWT | | Adds the original IPv4 address of the requesting client (when inside a VNET) |
+| `groups`| Optional formatting for group claims |JWT, SAML| |Used with the GroupMembershipClaims setting in the [application manifest](reference-app-manifest.md), which must be set as well. For details see [Group claims](#configuring-groups-optional-claims) below. For more information about group claims, see [How to configure group claims](../hybrid/how-to-connect-fed-group-claims.md)
+| `idtyp` | Token type | JWT access tokens | Special: only in app-only access tokens | Value is `app` when the token is an app-only token. This is the most accurate way for an API to determine if a token is an app token or an app+user token.|
+| `login_hint` | Login hint | JWT | MSA, Azure AD | An opaque, reliable login hint claim. This claim is the best value to use for the `login_hint` OAuth parameter in all flows to get SSO. It can be passed between applications to help them silently SSO as well - application A can sign in a user, read the `login_hint` claim, and then send the claim and the current tenant context to application B in the query string or fragment when the user clicks on a link that takes them to application B. To avoid race conditions and reliability issues, the `login_hint` claim *doesn't* include the current tenant for the user, and defaults to the user's home tenant when used. If you are operating in a guest scenario, where the user is from another tenant, then you must still provide a tenant identifier in the sign-in request, and pass the same to apps you partner with. This claim is intended for use with your SDK's existing `login_hint` functionality, however it is exposed. |
| `sid` | Session ID, used for per-session user sign-out. | JWT | Personal and Azure AD accounts. | |
+| `tenant_ctry` | Resource tenant's country | JWT | | Same as `ctry` except set at a tenant level by an admin. Must also be a standard two-letter value. |
+| `tenant_region_scope` | Region of the resource tenant | JWT | | |
+| `upn` | UserPrincipalName | JWT, SAML | | An identifier for the user that can be used with the username_hint parameter. Not a durable identifier for the user and should not be used to uniquely identify user information (for example, as a database key). Instead, use the user object ID (`oid`) as a database key. Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) should not be shown their User Principal Name (UPN). Instead, use the following ID token claims for displaying sign-in state to the user: `preferred_username` or `unique_name` for v1 tokens and `preferred_username` for v2 tokens. Although this claim is automatically included, you can specify it as an optional claim to attach additional properties to modify its behavior in the guest user case. You should use the `login_hint` claim for `login_hint` use - human-readable identifiers like UPN are unreliable.|
| `verified_primary_email` | Sourced from the user's PrimaryAuthoritativeEmail | JWT | | |
| `verified_secondary_email` | Sourced from the user's SecondaryAuthoritativeEmail | JWT | | |
| `vnet` | VNET specifier information. | JWT | | |
-| `fwd` | IP address.| JWT | | Adds the original IPv4 address of the requesting client (when inside a VNET) |
-| `ctry` | User's country/region | JWT | | Azure AD returns the `ctry` optional claim if it's present and the value of the field is a standard two-letter country/region code, such as FR, JP, SZ, and so on. |
-| `tenant_ctry` | Resource tenant's country | JWT | | Same as `ctry` except set at a tenant level by an admin. Must also be a standard two-letter value. |
| `xms_pdl` | Preferred data location | JWT | | For Multi-Geo tenants, the preferred data location is the three-letter code showing the geographic region the user is in. For more info, see the [Azure AD Connect documentation about preferred data location](../hybrid/how-to-connect-sync-feature-preferreddatalocation.md).<br/>For example: `APC` for Asia Pacific. |
| `xms_pl` | User preferred language | JWT | | The user's preferred language, if set. Sourced from their home tenant, in guest access scenarios. Formatted LL-CC ("en-us"). |
| `xms_tpl` | Tenant preferred language | JWT | | The resource tenant's preferred language, if set. Formatted LL ("en"). |
| `ztdid` | Zero-touch Deployment ID | JWT | | The device identity used for [Windows AutoPilot](/windows/deployment/windows-autopilot/windows-10-autopilot) |
-| `email` | The addressable email for this user, if the user has one. | JWT, SAML | MSA, Azure AD | This value is included by default if the user is a guest in the tenant. For managed users (the users inside the tenant), it must be requested through this optional claim or, on v2.0 only, with the OpenID scope. For managed users, the email address must be set in the [Office admin portal](https://portal.office.com/adminportal/home#/users).|
-| `acct` | Users account status in tenant | JWT, SAML | | If the user is a member of the tenant, the value is `0`. If they are a guest, the value is `1`. |
-| `groups`| Optional formatting for group claims |JWT, SAML| |Used with the GroupMembershipClaims setting in the [application manifest](reference-app-manifest.md), which must be set as well. For details see [Group claims](#configuring-groups-optional-claims) below. For more information about group claims, see [How to configure group claims](../hybrid/how-to-connect-fed-group-claims.md)
-| `upn` | UserPrincipalName | JWT, SAML | | An identifer for the user that can be used with the username_hint parameter. Not a durable identifier for the user and should not be used to uniquely identity user information (for example, as a database key). Instead, use the user object ID (`oid`) as a database key. Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) should not be shown their User Principal Name (UPN). Instead, use the following ID token claims for displaying sign-in state to the user: `preferred_username` or `unique_name` for v1 tokens and `preferred_username` for v2 tokens. Although this claim is automatically included, you can specify it as an optional claim to attach additional properties to modify its behavior in the guest user case. |
-| `idtyp` | Token type | JWT access tokens | Special: only in app-only access tokens | Value is `app` when the token is an app-only token. This is the most accurate way for an API to determine if a token is an app token or an app+user token.|
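
For example, the `idtyp` claim called out in the table above lends itself to a simple check in a protected API. The following is a minimal sketch; it assumes an ASP.NET Core controller where the validated token's claims are available on `HttpContext.User`, and that the `idtyp` optional claim has been configured for the API.

```csharp
// Minimal sketch: distinguish app-only tokens from app+user tokens in a protected API.
// Assumes an ASP.NET Core controller and that the "idtyp" optional claim is configured
// on the API's app registration; otherwise the claim won't be present.
bool isAppOnlyToken = HttpContext.User.FindFirst("idtyp")?.Value == "app";

if (isAppOnlyToken)
{
    // Daemon or service caller: authorize against application permissions (app roles).
}
else
{
    // A user is present: authorize against delegated permissions (scp) and the user's own rights.
}
```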
## v2.0-specific optional claims set
active-directory App Resilience Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/app-resilience-continuous-access-evaluation.md
Previously updated : 10/06/2020 Last updated : 07/09/2021 # Customer intent: As an application developer, I want to learn how to use Continuous Access Evaluation for building resiliency through long-lived, refreshable tokens that can be revoked based on critical events and policy evaluation.
Your app would check for:
- an "error" parameter with the value "insufficient_claims" - a "claims" parameter
-When these conditions are met, the app can extract and decode the claims challenge.
+When these conditions are met, the app can extract and decode the claims challenge by using the MSAL.NET `WwwAuthenticateParameters` class.
```csharp
if (APIresponse.IsSuccessStatusCode)
else
if (APIresponse.StatusCode == System.Net.HttpStatusCode.Unauthorized && APIresponse.Headers.WwwAuthenticate.Any()) {
- AuthenticationHeaderValue bearer = APIresponse.Headers.WwwAuthenticate.First
- (v => v.Scheme == "Bearer");
- IEnumerable<string> parameters = bearer.Parameter.Split(',').Select(v => v.Trim()).ToList();
- var error = GetParameter(parameters, "error");
-
- if (null != error && "insufficient_claims" == error)
- {
- var claimChallengeParameter = GetParameter(parameters, "claims");
- if (null != claimChallengeParameter)
- {
- var claimChallengebase64Bytes = System.Convert.FromBase64String(claimChallengeParameter);
- var claimChallenge = System.Text.Encoding.UTF8.GetString(claimChallengebase64Bytes);
- var newAccessToken = await GetAccessTokenWithClaimChallenge(scopes, claimChallenge);
+ string claimChallenge = WwwAuthenticateParameters.GetClaimChallengeFromResponseHeaders(APIresponse.Headers);
```

Your app would then use the claims challenge to acquire a new access token for the resource.
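With MSAL.NET, one way to do that is to pass the decoded challenge to the token request through `WithClaims`. The following is a minimal sketch, assuming `app` is an already-built client application, `scopes` are the scopes used for the original request, and `claimChallenge` is the decoded value obtained above.

```csharp
// Minimal sketch: retry token acquisition with the claims challenge applied.
// Assumes "app" (IPublicClientApplication), "scopes", and "claimChallenge" already exist.
// Requires the Microsoft.Identity.Client (MSAL.NET) and System.Linq namespaces.
var accounts = await app.GetAccountsAsync();
AuthenticationResult result;
try
{
    result = await app.AcquireTokenSilent(scopes, accounts.FirstOrDefault())
        .WithClaims(claimChallenge)   // claims challenge from the 401 response
        .ExecuteAsync();
}
catch (MsalUiRequiredException)
{
    // Fall back to an interactive request if the challenge can't be satisfied silently.
    result = await app.AcquireTokenInteractive(scopes)
        .WithClaims(claimChallenge)
        .ExecuteAsync();
}
```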
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
Previously updated : 06/30/2021 Last updated : 07/19/2021
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| Parameter | Required/optional | Description |
|-----------|-------------------|-------------|
-| `tenant` | required | The `{tenant}` value in the path of the request can be used to control who can sign into the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more detail, see [protocol basics](active-directory-v2-protocols.md#endpoints). |
+| `tenant` | required | The `{tenant}` value in the path of the request can be used to control who can sign into the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more detail, see [protocol basics](active-directory-v2-protocols.md#endpoints). Critically, for guest scenarios where you sign a user from one tenant into another tenant, you *must* provide the tenant identifier to correctly sign them into the resource tenant.|
| `client_id` | required | The **Application (client) ID** that the [Azure portal – App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. |
| `response_type` | required | Must include `code` for the authorization code flow. Can also include `id_token` or `token` if using the [hybrid flow](#request-an-id-token-as-well-hybrid-flow). |
| `redirect_uri` | required | The redirect_uri of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect_uris you registered in the portal, except it must be url encoded. For native & mobile apps, you should use one of the recommended values - `https://login.microsoftonline.com/common/oauth2/nativeclient` (for apps using embedded browsers) or `http://localhost` (for apps that use system browsers). |
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| `response_mode` | recommended | Specifies the method that should be used to send the resulting token back to your app. Can be one of the following:<br/><br/>- `query`<br/>- `fragment`<br/>- `form_post`<br/><br/>`query` provides the code as a query string parameter on your redirect URI. If you're requesting an ID token using the implicit flow, you can't use `query` as specified in the [OpenID spec](https://openid.net/specs/oauth-v2-multiple-response-types-1_0.html#Combinations). If you're requesting just the code, you can use `query`, `fragment`, or `form_post`. `form_post` executes a POST containing the code to your redirect URI. | | `state` | recommended | A value included in the request that will also be returned in the token response. It can be a string of any content that you wish. A randomly generated unique value is typically used for [preventing cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The value can also encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. | | `prompt` | optional | Indicates the type of user interaction that is required. The only valid values at this time are `login`, `none`, `consent`, and `select_account`.<br/><br/>- `prompt=login` will force the user to enter their credentials on that request, negating single-sign on.<br/>- `prompt=none` is the opposite - it will ensure that the user isn't presented with any interactive prompt whatsoever. If the request can't be completed silently via single-sign on, the Microsoft identity platform will return an `interaction_required` error.<br/>- `prompt=consent` will trigger the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app.<br/>- `prompt=select_account` will interrupt single sign-on providing account selection experience listing all the accounts either in session or any remembered account or an option to choose to use a different account altogether.<br/> |
-| `login_hint` | optional | Can be used to pre-fill the username/email address field of the sign-in page for the user, if you know their username ahead of time. Often apps will use this parameter during re-authentication, having already extracted the username from a previous sign-in using the `preferred_username` claim. |
+| `login_hint` | Optional | You can use this parameter to pre-fill the username and email address field of the sign-in page for the user, if you know the username ahead of time. Often, apps use this parameter during reauthentication, after already extracting the `login_hint` [optional claim](active-directory-optional-claims.md) from an earlier sign-in. |
| `domain_hint` | optional | If included, it will skip the email-based discovery process that user goes through on the sign-in page, leading to a slightly more streamlined user experience - for example, sending them to their federated identity provider. Often apps will use this parameter during re-authentication, by extracting the `tid` from a previous sign-in. | | `code_challenge` | recommended / required | Used to secure authorization code grants via Proof Key for Code Exchange (PKCE). Required if `code_challenge_method` is included. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is now recommended for all application types - both public and confidential clients - and required by the Microsoft identity platform for [single page apps using the authorization code flow](reference-third-party-cookies-spas.md). | | `code_challenge_method` | recommended / required | The method used to encode the `code_verifier` for the `code_challenge` parameter. This *SHOULD* be `S256`, but the spec allows the use of `plain` if for some reason the client cannot support SHA256. <br/><br/>If excluded, `code_challenge` is assumed to be plaintext if `code_challenge` is included. The Microsoft identity platform supports both `plain` and `S256`. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is required for [single page apps using the authorization code flow](reference-third-party-cookies-spas.md).|
active-directory V2 Oauth2 Implicit Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-implicit-grant-flow.md
Previously updated : 06/25/2021 Last updated : 07/19/2021
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| Parameter | Type | Description |
|-----------|------|-------------|
-| `tenant` | required |The `{tenant}` value in the path of the request can be used to control who can sign into the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more detail, see [protocol basics](active-directory-v2-protocols.md#endpoints). |
+| `tenant` | required | The `{tenant}` value in the path of the request can be used to control who can sign into the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more detail, see [protocol basics](active-directory-v2-protocols.md#endpoints). Critically, for guest scenarios where you sign a user from one tenant into another tenant, you *must* provide the tenant identifier to correctly sign them into the resource tenant.|
| `client_id` | required | The Application (client) ID that the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page assigned to your app. | | `response_type` | required |Must include `id_token` for OpenID Connect sign-in. It may also include the response_type `token`. Using `token` here will allow your app to receive an access token immediately from the authorize endpoint without having to make a second request to the authorize endpoint. If you use the `token` response_type, the `scope` parameter must contain a scope indicating which resource to issue the token for (for example, user.read on Microsoft Graph). It can also contain `code` in place of `token` to provide an authorization code, for use in the [authorization code flow](v2-oauth2-auth-code-flow.md). This id_token+code response is sometimes called the hybrid flow. | | `redirect_uri` | recommended |The redirect_uri of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect_uris you registered in the portal, except it must be url encoded. |
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| `state` | recommended |A value included in the request that will also be returned in the token response. It can be a string of any content that you wish. A randomly generated unique value is typically used for [preventing cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The state is also used to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. | | `nonce` | required |A value included in the request, generated by the app, that will be included in the resulting id_token as a claim. The app can then verify this value to mitigate token replay attacks. The value is typically a randomized, unique string that can be used to identify the origin of the request. Only required when an id_token is requested. | | `prompt` | optional |Indicates the type of user interaction that is required. The only valid values at this time are 'login', 'none', 'select_account', and 'consent'. `prompt=login` will force the user to enter their credentials on that request, negating single-sign on. `prompt=none` is the opposite - it will ensure that the user isn't presented with any interactive prompt whatsoever. If the request can't be completed silently via single-sign on, the Microsoft identity platform will return an error. `prompt=select_account` sends the user to an account picker where all of the accounts remembered in the session will appear. `prompt=consent` will trigger the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app. |
-| `login_hint` |optional |Can be used to pre-fill the username/email address field of the sign in page for the user, if you know their username ahead of time. Often apps will use this parameter during reauthentication, having already extracted the username from a previous sign-in using the `preferred_username` claim.|
+| `login_hint` | Optional | You can use this parameter to pre-fill the username and email address field of the sign-in page for the user, if you know the username ahead of time. Often, apps use this parameter during reauthentication, after already extracting the `login_hint` [optional claim](active-directory-optional-claims.md) from an earlier sign-in. |
| `domain_hint` | optional |If included, it will skip the email-based discovery process that user goes through on the sign in page, leading to a slightly more streamlined user experience. This parameter is commonly used for Line of Business apps that operate in a single tenant, where they will provide a domain name within a given tenant, forwarding the user to the federation provider for that tenant. Note that this hint prevents guests from signing into this application, and limits the use of cloud credentials like FIDO. | At this point, the user will be asked to enter their credentials and complete the authentication. The Microsoft identity platform will also ensure that the user has consented to the permissions indicated in the `scope` query parameter. If the user has consented to **none** of those permissions, it will ask the user to consent to the required permissions. For more info, see [permissions, consent, and multi-tenant apps](v2-permissions-and-consent.md).
active-directory V2 Protocols Oidc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-protocols-oidc.md
Previously updated : 06/23/2021 Last updated : 07/19/2021
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| Parameter | Condition | Description |
|-----------|-----------|-------------|
-| `tenant` | Required | You can use the `{tenant}` value in the path of the request to control who can sign in to the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more information, see [protocol basics](active-directory-v2-protocols.md#endpoints). |
+| `tenant` | Required | You can use the `{tenant}` value in the path of the request to control who can sign in to the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more information, see [protocol basics](active-directory-v2-protocols.md#endpoints). Critically, for guest scenarios where you sign a user from one tenant into another tenant, you *must* provide the tenant identifier to correctly sign them into the resource tenant.|
| `client_id` | Required | The **Application (client) ID** that the [Azure portal – App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. |
| `response_type` | Required | Must include `id_token` for OpenID Connect sign-in. It might also include other `response_type` values, such as `code`. |
| `redirect_uri` | Recommended | The redirect URI of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect URIs you registered in the portal, except that it must be URL encoded. If not present, the endpoint will pick one registered redirect_uri at random to send the user back to. |
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| `response_mode` | Recommended | Specifies the method that should be used to send the resulting authorization code back to your app. Can be `form_post` or `fragment`. For web applications, we recommend using `response_mode=form_post`, to ensure the most secure transfer of tokens to your application. | | `state` | Recommended | A value included in the request that also will be returned in the token response. It can be a string of any content you want. A randomly generated unique value typically is used to [prevent cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The state also is used to encode information about the user's state in the app before the authentication request occurred, such as the page or view the user was on. | | `prompt` | Optional | Indicates the type of user interaction that is required. The only valid values at this time are `login`, `none`, `consent`, and `select_account`. The `prompt=login` claim forces the user to enter their credentials on that request, which negates single sign-on. The `prompt=none` parameter is the opposite, and should be paired with a `login_hint` to indicate which user must be signed in. These parameters ensure that the user isn't presented with any interactive prompt at all. If the request can't be completed silently via single sign-on (because no user is signed in, the hinted user isn't signed in, or there are multiple users signed in and no hint is provided), the Microsoft identity platform returns an error. The `prompt=consent` claim triggers the OAuth consent dialog after the user signs in. The dialog asks the user to grant permissions to the app. Finally, `select_account` shows the user an account selector, negating silent SSO but allowing the user to pick which account they intend to sign in with, without requiring credential entry. You cannot use `login_hint` and `select_account` together.|
-| `login_hint` | Optional | You can use this parameter to pre-fill the username and email address field of the sign-in page for the user, if you know the username ahead of time. Often, apps use this parameter during reauthentication, after already extracting the username from an earlier sign-in by using the `preferred_username` claim. |
+| `login_hint` | Optional | You can use this parameter to pre-fill the username and email address field of the sign-in page for the user, if you know the username ahead of time. Often, apps use this parameter during reauthentication, after already extracting the `login_hint` [optional claim](active-directory-optional-claims.md) from an earlier sign-in. |
| `domain_hint` | Optional | The realm of the user in a federated directory. This skips the email-based discovery process that the user goes through on the sign-in page, for a slightly more streamlined user experience. For tenants that are federated through an on-premises directory like AD FS, this often results in a seamless sign-in because of the existing login session. | At this point, the user is prompted to enter their credentials and complete the authentication. The Microsoft identity platform verifies that the user has consented to the permissions indicated in the `scope` query parameter. If the user hasn't consented to any of those permissions, the Microsoft identity platform prompts the user to consent to the required permissions. You can read more about [permissions, consent, and multi-tenant apps](v2-permissions-and-consent.md).
active-directory Security Operations User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/security-operations-user-accounts.md
The log files you use for investigation and monitoring are:
* [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)
-* Risky Users log
+* [Risky Users log](../../identity-protection/howto-identity-protection-investigate-risk.md)
-* UserRiskEvents log
+* [UserRiskEvents log](../../identity-protection/howto-identity-protection-investigate-risk.md)
From the Azure portal you can view the Azure AD Audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
As you design and operationalize a log monitoring and alerting strategy, conside
| Azure AD Threat Intelligence user risk detection| High| Azure AD Risk Detection logs| UX: Azure AD threat intelligence <br><br>API: See [riskDetection resource type - Microsoft Graph beta](/graph/api/resources/riskdetection?view=graph-rest-beta)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | | Anonymous IP address sign-in risk detection| Varies| Azure AD Risk Detection logs| UX: Anonymous IP address <br><br>API: See [riskDetection resource type - Microsoft Graph beta](/graph/api/resources/riskdetection?view=graph-rest-beta)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | | Atypical travel sign-in risk detection| Varies| Azure AD Risk Detection logs| UX: Atypical travel <br><br>API: See [riskDetection resource type - Microsoft Graph beta](/graph/api/resources/riskdetection?view=graph-rest-beta)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) |
+| Anomalous Token| Varies| Azure AD Risk Detection logs| UX: Anomalous Token <br><br>API: See [riskDetection resource type - Microsoft Graph beta](/graph/api/resources/riskdetection?view=graph-rest-beta)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) |
| Malware linked IP address sign-in risk detection| Varies| Azure AD Risk Detection logs| UX: Malware linked IP address <br><br>API: See [riskDetection resource type - Microsoft Graph beta](/graph/api/resources/riskdetection?view=graph-rest-beta)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | | Suspicious browser sign-in risk detection| Varies| Azure AD Risk Detection logs| UX: Suspicious browser <br><br>API: See [riskDetection resource type - Microsoft Graph beta](/graph/api/resources/riskdetection?view=graph-rest-beta)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | | Unfamiliar sign-in properties sign-in risk detection| Varies| Azure AD Risk Detection logs| UX: Unfamiliar sign-in properties <br><br>API: See [riskDetection resource type - Microsoft Graph beta](/graph/api/resources/riskdetection?view=graph-rest-beta)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) |
active-directory Entitlement Management Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-assignments.md
$policy = $accesspackage.AccessPackageAssignmentPolicies[0]
$req = New-MgEntitlementManagementAccessPackageAssignmentRequest -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -TargetId "a43ee6df-3cc5-491a-ad9d-ea964ef8e464" ```
+You can also assign multiple users to an access package in PowerShell with the `New-MgEntitlementManagementAccessPackageAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.6.1 or later. This cmdlet takes as parameters
+* the access package ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet,
+* the access package assignment policy ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackageAssignmentPolicy` cmdlet,
+* the object IDs of the target users, either as an array of strings, or as a list of user members returned from the `Get-MgGroupMember` cmdlet.
+
+For example, if you want to ensure all the users who are currently members of a group also have assignments to an access package, you can use this cmdlet to create requests for those users who don't currently have assignments. Note that this cmdlet will only create assignments; it does not remove assignments.
+
+```powershell
+Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All,Directory.Read.All"
+Select-MgProfile -Name "beta"
+$members = Get-MgGroupMember -GroupId "a34abd69-6bf8-4abd-ab6b-78218b77dc15"
+$accesspackage = Get-MgEntitlementManagementAccessPackage -DisplayNameEq "Marketing Campaign" -ExpandProperty "accessPackageAssignmentPolicies"
+$policy = $accesspackage.AccessPackageAssignmentPolicies[0]
+$req = New-MgEntitlementManagementAccessPackageAssignment -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -RequiredGroupMember $members
+```
## Remove an assignment

**Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-version-history.md
Please follow this link to read more about [auto upgrade](how-to-connect-install
## 2.0.3.0 >[!NOTE]
->This is a major release of Azure AD Connect. Please refer to the Azure Active Directory V2.0 article for more details.
+>This is a major release of Azure AD Connect. Please refer to the [Azure Active Directory V2.0 article](whatis-azure-ad-connect-v2.md) for more details.
### Release status

7/20/2021: Released for download only, not available for auto upgrade
You can use these cmdlets to retrieve the TLS 1.2 enablement status, or set it a
- We have updated the Generic LDAP connector and the Generic SQL Connector to the latest versions. Read more about these connectors here:
  - [Generic LDAP Connector reference documentation](https://docs.microsoft.com/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericldap)
  - [Generic SQL Connector reference documentation](https://docs.microsoft.com/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericsql)
-- In the M365 Admin Center, we now report the AADConnect client version whenever there is export activity to Azure AD. This ensures that the M365 Admin Center always has the most up to date AADConnect client version, and that it can detect when you're using and outdated version
-- Provides a batch import execution script which can be called from Windows scheduled job so that the customers can automate the batch import operations with scheduling.
- - Credentials are provided as an encrypted file using Windows Data Protection API (DPAPI).
- - Credential files can be use only at the same machine and user account where it's created.
-- The Azure AD Kerberos Feature supported for the MSAL library. To use the AAD Kerberos Feature, the customer needs to register an on-premises service principal name into the Azure AD. Provides importing of an on-premises service principal object into the Azure AD.
+- In the M365 Admin Center, we now report the AADConnect client version whenever there is export activity to Azure AD. This ensures that the M365 Admin Center always has the most up to date AADConnect client version, and that it can detect when you're using an outdated version.
### Bug fixes
-- We fixed an accessibility bug where the screen reader is announcing incorrect role of the 'Learn More' link.
-- We fixed a bug where sync rules with large precedence values (i.e. 387163089) cause upgrade to fail. We updated sproc 'mms_UpdateSyncRulePrecedence' to cast the precedence number as an integer prior to incrementing the value.
-- Fixed a bug where group writeback permissions are not set on the sync account if a group writeback configuration is imported. We now set the group writeback permissions if group writeback is enabled on the imported configuration.
+- We fixed an accessibility bug where the screen reader is announcing an incorrect role of the 'Learn More' link.
+- We fixed a bug where sync rules with large precedence values (i.e. 387163089) cause an upgrade to fail. We updated the sproc 'mms_UpdateSyncRulePrecedence' to cast the precedence number as an integer prior to incrementing the value.
+- We fixed a bug where group writeback permissions are not set on the sync account if a group writeback configuration is imported. We now set the group writeback permissions if group writeback is enabled on the imported configuration.
- We updated the Azure AD Connect Health agent version to 3.1.110.0 to fix an installation failure. - We are seeing an issue with non-default attributes from exported configurations where directory extension attributes are configured. When importing these configurations to a new server/installation, the attribute inclusion list is overridden by the directory extension configuration step, so after import only default and directory extension attributes are selected in the sync service manager (non-default attributes are not included in the installation, so the user must manually reenable them from the sync service manager if they want their imported sync rules to work). We now refresh the AAD Connector before configuring directory extension to keep existing attributes from the attribute inclusion list. - We fixed an accessibility issues where the page header's font weight is set as "Light". Font weight is now set to "Bold" for the page title, which applies to the header of all pages.
You can use these cmdlets to retrieve the TLS 1.2 enablement status, or set it a
- We fixed a bug where AADConnect cannot read Application Proxy items using Microsoft Graph due to a permissions issue with calling Microsoft Graph directly based on the AAD Connect client ID. To fix this, we removed the dependency on Microsoft Graph and instead use AAD PowerShell to work with the App Proxy Application objects.
- We removed the writeback member limit from the 'Out to AD - Group SOAInAAD Exchange' sync rule.
- We fixed a bug where, when changing connector account permissions, if an object comes in scope that has not changed since the last delta import, a delta import will not import it. We now display a warning alerting the user of the issue.
-- We fixed an accessibility issue where the screen reader is not reading radio button position, i.e. 1 of 2. We added added positional text to the radio button accessibility text field.
+- We fixed an accessibility issue where the screen reader is not reading radio button position. We added positional text to the radio button accessibility text field.
- We updated the Pass-Thru Authentication Agent bundle. The older bundle did not have the correct reply URL for HIP's first party application in US Gov.
- We fixed a bug where there is a 'stopped-extension-dll-exception' on AAD connector export after clean installing AADConnect version 1.6.X.X, which defaults to using DirSyncWebServices API V2, using an existing database. Previously, setting the export version to v2 was only done during upgrade; we changed this so that it is also set on clean install.
- The "ADSyncPrep.psm1" module is no longer used and is removed from the installation.
You can use these cmdlets to retrieve the TLS 1.2 enablement status, or set it a
> - Azure Commercial > - Azure China cloud > - Azure US Government cloud
-> It will not be made available in the Azure German cloud
+> - This release will not be made available in the Azure German cloud
### Release status

3/31/2021: Released for download only, not available for auto upgrade
You can use these cmdlets to retrieve the TLS 1.2 enablement status, or set it a
- Get-ADSyncAADConnectorExportApiVersion - to get export AWS API version - Changes made to synchronization rules are now tracked to assist troubleshooting changes in the service. The cmdlet "Get-ADSyncRuleAudit" will retrieve tracked changes.
+ - Updated the Add-ADSyncADDSConnectorAccount cmdlet in the [ADSyncConfig PowerShell module](./how-to-connect-configure-ad-ds-connector-account.md#using-the-adsyncconfig-powershell-module) to allow a user in ADSyncAdmin group to change the AD DS Connector account.
### Bug fixes - Updated disabled foreground color to satisfy luminosity requirements on a white background. Added additional conditions for navigation tree to set foreground text color to white when a disabled page is selected to satisfy luminosity requirements.
active-directory Tshoot Connect Recover From Localdb 10Gb Limit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/tshoot-connect-recover-from-localdb-10gb-limit.md
# Azure AD Connect: How to recover from LocalDB 10-GB limit
-Azure AD Connect requires a SQL Server database to store identity data. You can either use the default SQL Server 2012 Express LocalDB installed with Azure AD Connect or use your own full SQL. SQL Server Express imposes a 10-GB size limit. When using LocalDB and this limit is reached, Azure AD Connect Synchronization Service can no longer start or synchronize properly. This article provides the recovery steps.
+Azure AD Connect requires a SQL Server database to store identity data. You can either use the default SQL Server 2019 Express LocalDB installed with Azure AD Connect or use your own full SQL. SQL Server Express imposes a 10-GB size limit. When using LocalDB and this limit is reached, Azure AD Connect Synchronization Service can no longer start or synchronize properly. This article provides the recovery steps.
## Symptoms
There are two common symptoms:
active-directory Delegate App Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/delegate-app-roles.md
In some cases, enterprise applications created from the application gallery incl
### To assign an owner to an enterprise application
1. Sign in to [your Azure AD organization](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) with an account that is eligible for the Application Administrator or Cloud Application Administrator role for the organization.
-1. On the [App registrations page](https://portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/AllApps/menuId/) for the organization, select an app to open the Overview page for the app.
+1. On the [Enterprise applications page](https://portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/AllApps/menuId/) for the organization, select an app to open the Overview page for the app.
1. Select **Owners** to see the list of the owners for the app.
1. Select **Add** to select one or more owners to add to the app.
active-directory Sentry Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sentry-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://sentry.io/organizations/<ORGANIZATION_SLUG>/`

> [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Sentry Client support team](mailto:support@sentry.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-on URL values. For more information about finding these values, see the [Sentry documentation](https://docs.sentry.io/product/accounts/sso/azure-sso/#installation). You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy icon to copy the **App Federation Metadata Url** value, and then save it on your computer.
- ![The Certificate download link](common/copy-metadataurl.png)
+ ![The Certificate download link](common/copy-metadataurl.png)
+
### Create an Azure AD test user
-In this section, you'll create a test user in the Azure portal called B.Simon.
+In this section, you'll create a test user called B.Simon in the Azure portal.
1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
1. Select **New user** at the top of the screen.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Sentry SSO
-To configure single sign-on on **Sentry** side, you need to send the **App Federation Metadata Url** to [Sentry support team](mailto:support@sentry.io). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **Sentry** side, go to **Org Settings** > **Auth** (or go to `https://sentry.io/settings/<YOUR_ORG_SLUG>/auth/`) and select **Configure** for Active Directory. Paste the App Federation Metadata URL from your Azure SAML configuration.
### Create Sentry test user
-In this section, a user called Britta Simon is created in Sentry. Sentry supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Sentry, a new one is created after authentication.
+In this section, a user called B.Simon is created in Sentry. Sentry supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Sentry, a new one is created after authentication.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-1. Click on **Test this application** in Azure portal. This will redirect to Sentry Sign on URL where you can initiate the login flow.
+1. In the Azure portal, select **Test this application**. You're redirected to the Sentry sign-on URL, where you can initiate the sign-in flow.
-1. Go to Sentry Sign-on URL directly and initiate the login flow from there.
+1. Go to Sentry sign-on URL directly and initiate the sign-in flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Sentry for which you set up the SSO
+* In the Azure portal, select **Test this application**. You should be automatically signed in to the Sentry application for which you set up the SSO.
-You can also use Microsoft Access Panel to test the application in any mode. When you click the Sentry tile in the Access Panel, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Sentry for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### Either mode:
+
+You can use the My Apps portal to test the application in any mode. When you click the Sentry tile in the My Apps portal, if configured in SP mode, you are redirected to the application sign-on page to initiate the sign-in flow. If configured in IDP mode, you should be automatically signed in to the Sentry application for which you set up the SSO. For more information about the My Apps portal, see [Sign in and start apps from the My Apps portal](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Sentry you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Sentry, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Zip Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zip-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Zip for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Zip.
+
+ Last updated : 07/16/2021
+# Tutorial: Configure Zip for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Zip and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Zip](https://ziphq.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Zip
+> * Remove users in Zip when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Zip
+> * Provision groups and group memberships in Zip
+> * [Single sign-on](zip-tutorial.md) to Zip (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A [Zip](https://ziphq.com/) tenant.
+* A user account in Zip with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Zip](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Zip to support provisioning with Azure AD
+
+To configure Zip to support provisioning with Azure AD, contact the Zip support team at <support@ziphq.com>. They will provide the tenant URL and secret token needed to set up automatic user provisioning to Zip, as mentioned in Step 5.
+
+## Step 3. Add Zip from the Azure AD application gallery
+
+Add Zip from the Azure AD application gallery to start managing provisioning to Zip. If you have previously set up Zip for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to Zip, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++
+## Step 5. Configure automatic user provisioning to Zip
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Zip based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Zip in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **Zip**.
+
+ ![The Zip link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. In the **Admin Credentials** section, input your Zip **Tenant URL** and **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to Zip. If the connection fails, ensure your Zip account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Zip**.
+
+1. Review the user attributes that are synchronized from Azure AD to Zip in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Zip for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Zip API supports filtering users based on that attribute. Select the **Save** button to commit any changes. An illustrative example of the resulting SCIM user payload follows the table below.
+
+ |Attribute|Type|Supported for filtering|
+ ||||
+ |userName|String|&check;
+ |active|Boolean|
+ |emails[type eq "work"].value|String|
+ |preferredLanguage|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |addresses[type eq "work"].formatted|String|
+ |addresses[type eq "work"].streetAddress|String|
+ |addresses[type eq "work"].locality|String|
+ |addresses[type eq "work"].region|String|
+ |addresses[type eq "work"].postalCode|String|
+ |addresses[type eq "work"].country|String|
+ |phoneNumbers[type eq "work"].value|String|
+ |externalId|String|
+
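+ The following sketch is illustrative only and is not taken from the Zip API documentation: it shows how the mapped attributes above might appear in a SCIM 2.0 user payload sent by the Azure AD provisioning service. The values and the example filter string are hypothetical placeholders.
+
+ ```python
+ # Illustrative SCIM 2.0 user resource built from the mapped attributes above.
+ # Values are placeholders; Zip's actual endpoint behavior may differ.
+ scim_user = {
+     "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
+     "userName": "b.simon@contoso.com",          # matching attribute
+     "active": True,
+     "emails": [{"type": "work", "value": "b.simon@contoso.com"}],
+     "preferredLanguage": "en-US",
+     "name": {"givenName": "B", "familyName": "Simon"},
+     "addresses": [{
+         "type": "work",
+         "formatted": "1 Main St, Redmond, WA 98052, US",
+         "streetAddress": "1 Main St",
+         "locality": "Redmond",
+         "region": "WA",
+         "postalCode": "98052",
+         "country": "US",
+     }],
+     "phoneNumbers": [{"type": "work", "value": "+1 555 0100"}],
+     "externalId": "b.simon",
+ }
+
+ # Because userName is the matching attribute, update operations look up the user
+ # with a SCIM filter such as: userName eq "b.simon@contoso.com"
+ print(scim_user["userName"])
+ ```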
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Groups to Zip**.
+
+1. Review the group attributes that are synchronized from Azure AD to Zip in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Zip for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ ||||
+ |displayName|String|&check;
+ |members|Reference|
+ |externalId|String|
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Zip, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Zip by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Introduction To Verifiable Credentials Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/introduction-to-verifiable-credentials-architecture.md
+
+ Title: Azure Active Directory architecture overview (preview)
+description: Learn foundational information to plan and design your solution
+
+ Last updated : 07/20/2021
+# Azure AD Verifiable Credentials architecture overview (preview)
+
+> [!IMPORTANT]
+> Azure Active Directory Verifiable Credentials is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+It's important to plan your verifiable credential solution so that in addition to issuing and/or validating credentials, you have a complete view of the architectural and business impacts of your solution. If you haven't reviewed them already, we recommend you review [Introduction to Azure Active Directory Verifiable Credentials](decentralized-identifier-overview.md) and the [FAQs](verifiable-credentials-faq.md), and then complete the [Getting Started](get-started-verifiable-credentials.md) tutorial.
+
+This architectural overview introduces the capabilities and components of the Azure Active Directory Verifiable Credentials service. For more detailed information on issuance and validation, see:
+
+* Plan your Azure AD Verifiable Credentials issuance solution
+
+* Plan your Azure AD Verifiable Credentials validation solution
++
+## Approaches to identity
+
+Today, most organizations use centralized identity systems to provide employees credentials. They also use various methods to bring customers, partners, vendors, and relying parties into the organization's trust boundaries. These methods include federation, creating and managing guest accounts with systems like Azure AD B2B, and creating explicit trusts with relying parties. Most business relationships have a digital component, so enabling some form of trust between organizations requires significant effort.
+
+### Centralized identity systems
+
+Centralized approaches still work well in many cases, such as when applications, services, and devices rely on the trust mechanisms used within a domain or trust boundary.
+
+In centralized identity systems, the identity provider (IDP) controls the lifecycle and usage of credentials.
+
+![Example of a centralized identity system](./media/introduction-to-verifiable-credentials-architecture/centralized-identity-architecture.png)
++
+However, there are scenarios where a decentralized architecture using verifiable credentials can provide value by augmenting key scenarios such as:
+
+* secure onboarding of employees' and others' identities, including remote scenarios.
+
+* access to resources inside the organizational trust boundary based on specific criteria.
+
+* accessing resources outside the trust boundary, such as accessing partners' resources, with a portable credential issued by the organization.
+
+
+
+### Decentralized identity systems
+
+In decentralized identity systems, control of the lifecycle and usage of the credentials is shared between the issuer, the holder, and the relying party consuming the credential.
+
+Consider the scenario in the diagram below where Proseware, an e-commerce website, wants to offer Woodgrove employees corporate discounts.
+
+ ![Example of a decentralized identity system](media/introduction-to-verifiable-credentials-architecture/decentralized-architecture.png)
++
+Terminology for verifiable credentials (VCs) might be confusing if you're not familiar with VCs. The following definitions are from the [Verifiable Credentials Data Model 1.0](https://www.w3.org/TR/vc-data-model/) terminology section. After each, we relate them to entities in the preceding diagram.
+
+ "An ***issuer*** is a role an entity can perform by asserting claims about one or more subjects, creating a verifiable credential from these claims, and transmitting the verifiable credential to a holder."
+
+* In the preceding diagram, Woodgrove is the issuer of verifiable credentials to its employees.
+
+ "A ***holder*** is a role an entity might perform by possessing one or more verifiable credentials and generating presentations from them. A holder is usually, but not always, a subject of the verifiable credentials they are holding. Holders store their credentials in credential repositories."
+
+* In the preceding diagram, Alice is a Woodgrove employee. Alice obtained a verifiable credential from the Woodgrove issuer and is the holder of that credential.
+
+ "A ***verifier*** is a role an entity performs by receiving one or more verifiable credentials, optionally inside a verifiable presentation for processing. Other specifications might refer to this concept as a relying party."
+
+* In the preceding diagram, Proseware is a verifier of credentials issued by Woodgrove.
+
+"A ***credential*** is a set of one or more claims made by an issuer. A verifiable credential is a tamper-evident credential that has authorship that can be cryptographically verified. Verifiable credentials can be used to build verifiable presentations, which can also be cryptographically verified. The claims in a credential can be about different subjects."
+
+ "A ***decentralized identifier*** is a portable URL-based identifier, also known as a DID, associated with an entity. These identifiers are often used in a verifiable credential and are associated with subjects, issuers, and verifiers."
+
+* In the preceding diagram, the public keys of the actors' DIDs are shown stored in the decentralized ledger (ION), in their decentralized identifier documents.
+
+ "A ***decentralized identifier document***, also referred to as a ***DID document***, is a document that is accessible using a verifiable data registry and contains information related to a specific decentralized identifier, such as the associated repository and public key information."
+
+* In the scenario above, both the issuer and verifier have a DID and a DID document. The DID document contains the public key and the list of DNS web domains associated with the DID (also known as linked domains).
+
+* Woodgrove (issuer) signs its employees' VCs with its private key; similarly, Proseware (verifier) signs requests to present a VC using its key, which is also associated with its DID.
+
+ "A ***distributed ledger*** is a non-centralized system for recording events. These systems establish sufficient confidence for participants to rely upon the data recorded by others to make operational decisions. They typically use distributed databases where different nodes use a consensus protocol to confirm the ordering of cryptographically signed transactions. The linking of digitally signed transactions over time often makes the history of the ledger effectively immutable."
+
+* The Microsoft solution uses the ***Identity Overlay Network (ION)*** to provide decentralized public key infrastructure (PKI) capability.
+
+
+
+### Combining centralized and decentralized identity architectures
+
+When you examine a verifiable credential solution, it's helpful to understand how decentralized identity architectures can be combined with centralized identity architectures to provide a solution that reduces risk and offers significant operational benefits.
+
+## The user journey
+
+This architectural overview follows the journey of a job candidate and employee, who applies for and accepts employment with an organization. It then follows the employee and organization through changes where verifiable credentials can augment centralized processes.
+
+
+
+### Actors in these use cases
+
+* **Alice**, a person applying for and accepting employment with Woodgrove, Inc.
+
+* **Woodgrove**, Inc, a fictitious company.
+
+* **Adatum**, Woodgrove's fictitious identity verification partner.
+
+* **Proseware**, Woodgrove's fictitious partner organization.
+
+Woodgrove uses both centralized and decentralized identity architectures.
+
+### Steps in the user journey
+
+* Alice applying for, accepting, and onboarding to a position with Woodgrove, Inc.
+
+* Accessing digital resources within Woodgrove's trust boundary.
+
+* Accessing digital resources outside of Woodgrove's trust boundary without extending Woodgrove's or partners' trust boundaries.
+
+As Woodgrove continues to operate its business, it must continually manage identities. The use cases in this guidance describe how Alice can use self-service functions to obtain and maintain their identifiers and how Woodgrove can add, modify, and end business-to-business relationships with varied trust requirements.
+
+These use cases demonstrate how centralized identities and decentralized identities can be combined to provide a more robust and efficient identity and trust strategy and lifecycle.
++
+## User journey: Onboarding to Woodgrove
+
+![User's onboarding journey to Woodgrove](media/introduction-to-verifiable-credentials-architecture/onboarding-journey.png)
+
+ **Awareness**: Alice is interested in working for Woodgrove, Inc. and visits Woodgrove's career website.
+
+**Activation**: The Woodgrove site presents Alice with a method to prove their identity by prompting them with a QR code or a deep link to visit its trusted identity proofing partner, Adatum.
+
+**Request and upload**: Adatum requests proof of identity from Alice. Alice takes a selfie and a driver's license picture and uploads them to Adatum.
+
+**Issuance**: Once Adatum verifies Alice's identity, Adatum issues Alice a verifiable credential (VC) attesting to their identity.
+
+**Presentation**: Alice (the holder and subject of the credential) can then access the Woodgrove career portal to complete the application process. When Alice uses the VC to access the portal, Woodgrove takes the roles of verifier and the relying party, trusting the attestation from Adatum.
++
+### Distributing initial credentials
+
+Alice accepts employment with Woodgrove. As part of the onboarding process, an Azure Active Directory (AD) account is created for Alice to use inside of the Woodgrove trust boundary. Alice's manager must figure out how to enable Alice, who works remotely, to receive initial sign-in information in a secure way. In the past, the IT department might have provided those credentials to their manager, who would print them and hand them to Alice. This doesn't work with remote employees.
+
+VCs can add value to centralized systems by augmenting the credential distribution process. Instead of needing the manager to provide credentials, Alice can use their VC as proof of identity to receive their initial username and credentials for centralized systems access. Alice presents the proof of identity they added to their wallet as part of the onboarding process.
+
+
+
+In the onboarding use case, the trust relationship roles are distributed between the issuer, the verifier, and the holder.
+
+* The issuer is responsible for validating the claims that are part of the VC they issue. Adatum validates Alice's identity to issue the VC. VCs are issued without consideration of a verifier or relying party.
+
+* The holder possesses the VC and must initiate use of the VC for verification. Only Alice can present the VCs she holds.
+
+* The verifier accepts the claims in the VC from issuers they trust and validates the VC using the decentralized ledger capability described in the verifiable credentials data model. Woodgrove trusts Adatum's claims about Alice's identity.
+
+
+
+By combining centralized and decentralized identity architectures for onboarding, privileged information about Alice necessary for identity verification, such as a government ID number, need not be stored by Woodgrove, because they trust that Alice's VC issued by the identity verification partner (Adatum) confirms their identity. Duplication of effort is minimized, and a programmatic and predictable approach to initial onboarding tasks can be implemented.
+
+
+
+## User journey: Accessing resources inside the trust boundary
+
+![Accessing resources inside of the trust boundary](media/introduction-to-verifiable-credentials-architecture/inside-trust-boundary.png)
+
+As an employee, Alice is operating inside of the trust boundary of Woodgrove. Woodgrove acts as the identity provider (IDP) and maintains complete control of the identity and the configuration of the apps Alice uses to interact within the Woodgrove trust boundary. To use resources in the Azure AD trust boundary, Alice provides potentially multiple forms of proof of identification to log on to Woodgrove's trust boundary and access the resources inside of Woodgrove's technology environment. This is a typical scenario that is well served using a centralized identity architecture.
+
+* Woodgrove manages the trust boundary, and using good security practices provides the least-privileged level of access to Alice based on the job performed. To maintain a strong security posture, and potentially for compliance reasons, Woodgrove must also be able to track employees' permissions and access to resources and must be able to revoke permissions when the employment is terminated.
+
+* Alice only uses the credential that Woodgrove maintains to access Woodgrove resources. Alice has no need to track when the credential is used since the credential is managed by Woodgrove and only used with Woodgrove resources. The identity is only valid inside of the Woodgrove trust boundary when access to Woodgrove resources is necessary, so Alice has no need to possess the credential.
+
+### Using VCs inside the trust boundary
+
+Individual employees have changing identity needs, and VCs can augment centralized systems to manage those changes.
+
+* While employed by Woodgrove, Alice might need additional access to resources based on meeting specific requirements. For example, when Alice completes privacy training, she can be issued a new employee VC with that claim, and that VC can be used to access restricted resources.
+
+* VCs can be used inside of the trust boundary for account recovery. For example, if the employee has lost their phone and computer, they can regain access by getting a new VC from the identity verification service trusted by Woodgrove, and then use that VC to get new credentials.
+
+ ## User journey: Accessing external resources
+
+Woodgrove negotiates a product purchase discount with Proseware. All Woodgrove employees are eligible for the discount. Woodgrove wants to provide Alice a way to access Proseware's website and receive the discount on products purchased. If Woodgrove uses a centralized identity architecture, there are two approaches to providing Alice the discount:
+
+* Alice could provide personal information to create an account with Proseware, and then Proseware would have to verify Alice's employment with Woodgrove.
+
+* Woodgrove could expand their trust boundary to include Proseware as a relying party and Alice could use the extended trust boundary to receive the discount.
+
+With decentralized identifiers, Woodgrove can provide Alice with a verifiable credential (VC) that Alice can use to access Proseware's website and other external resources.
+
+![Accessing resources outside of the trust boundary](media/introduction-to-verifiable-credentials-architecture/external-resources.png)
+
+
+
+By providing Alice the VC, Woodgrove is attesting that Alice is an employee. Woodgrove is a trusted VC issuer in Proseware's validation solution. This trust in Woodgrove's issuance process allows Proseware to electronically accept the VC as proof that Alice is a Woodgrove employee and provide Alice the discount. As part of validation of the VC Alice presents, Proseware checks the validity of the VC by using the distributed ledger. In this solution:
+
+* Woodgrove enables Alice to provide Proseware proof of employment without Woodgrove having to extend its trust boundary.
+
+* Proseware doesn't need to expand their trust boundary to validate Alice is an employee of Woodgrove. Proseware can use the VC that Woodgrove provides instead. Because the trust boundary isn't expanded, managing the trust relationship is easier and Proseware can easily end the relationship by not accepting the VCs anymore.
+
+* Alice doesn't need to provide Proseware personal information, such as an email. Alice maintains the VC in a wallet application on a personal device. The only person that can use the VC is Alice, and Alice must initiate usage of the credential. Each usage of the VC is recorded by the wallet application, so Alice has a record of when and where the VC is used.
+
+
+
+By combining centralized and decentralized identity architectures for operating inside and outside of trust boundaries, complexity and risk can be reduced and limited relationships become easier to manage.
+
+### Changes over time
+
+Woodgrove will add and end business relationships with other organizations and will need to determine when centralized and decentralized identity architectures are used.
+
+By combining centralized and decentralized identity architectures, the responsibility and effort associated with identity and proof of identity is distributed, risk is reduced, and the user does not risk releasing their private information as often or to as many unknown verifiers. Specifically:
+
+* In centralized identity architectures, the IDP issues credentials and performs verification of those issued credentials. Information about all identities is processed by the IDP, either storing them in or retrieving them from a directory. IDPs may also dynamically accept security tokens from other IDP systems, such as social sign-ins or business partners. For a relying party to use identities in the IDP trust boundary, they must be configured to accept the tokens issued by the IDP.
+
+## How decentralized identity systems work
+
+In decentralized identity architectures, the issuer, user, and relying party (RP) each have a role in establishing and ensuring ongoing trusted exchange of each other's credentials. The public keys of the actors' DIDs are resolvable in ION, which allows signature validation and therefore trust of any artifact, including a verifiable credential. Relying parties can consume verifiable credentials without establishing trust relationships with the issuer. Instead, the issuer provides the subject a credential to present as proof to relying parties. All messages between actors are signed with the actor's DID; DIDs from issuers and verifiers also need to own the DNS domains that generated the requests.
+
+For example: When the holder of a VC wants to use it to access a resource, they must present the VC to that relying party. They do so by using the wallet application to read the RP's request to present a VC. As a part of reading that request, the wallet application uses the RP's DID to find the RP's public keys using ION, validating that the request to present the VC has not been tampered with. The wallet also checks that the DID is referenced in a metadata document that is hosted in the DNS domain of the RP, to prove domain ownership.
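+
+The following Python sketch summarizes those checks. It is a conceptual illustration only: the helper functions, the DID document shape, and the identifiers are hypothetical placeholders, not the Microsoft Authenticator implementation or an actual ION resolver API.
+
+```python
+# Conceptual sketch of the checks a wallet performs on a presentation request.
+# Helper functions are hypothetical placeholders, not the Authenticator or ION APIs.
+
+def resolve_did(did: str) -> dict:
+    """Placeholder: a real wallet resolves the DID document through ION."""
+    return {
+        "verificationMethod": [{"publicKeyJwk": {"kty": "EC", "crv": "secp256k1"}}],
+        "service": [{"type": "LinkedDomains", "serviceEndpoint": "https://www.proseware.com"}],
+    }
+
+def signature_is_valid(signed_request: str, did_document: dict) -> bool:
+    """Placeholder: a real wallet verifies the request signature with the keys above."""
+    return True
+
+def domain_links_did(domain: str, did: str) -> bool:
+    """Placeholder: a real wallet fetches the domain's well-known DID configuration
+    document and checks that it references this DID (linked-domain proof)."""
+    return True
+
+def validate_presentation_request(signed_request: str, rp_did: str) -> bool:
+    did_document = resolve_did(rp_did)                        # resolve the RP's DID
+    if not signature_is_valid(signed_request, did_document):  # request not tampered with
+        return False
+    domain = did_document["service"][0]["serviceEndpoint"]    # linked-domain check
+    return domain_links_did(domain, rp_did)
+
+print(validate_presentation_request("<signed request>", "did:ion:example-rp"))
+```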
+
+
+
+![How a decentralized identity system works](media/introduction-to-verifiable-credentials-architecture/how-decentralized-works.png)
+
+### Flow 1: Verifiable credential issuance
+
+In this flow, the credential holder interacts with the issuer to request a verifiable credential, as illustrated in the following diagram. A minimal code sketch of the web frontend side of this flow follows the numbered steps below.
+
+![Verifiable credential issuance](media/introduction-to-verifiable-credentials-architecture/issuance.png)
+
+1. The holder starts the flow by using a browser or native application to access the issuer's web frontend. There, the issuer website drives the user to collect data and executes issuer-specific logic to determine whether the credential can be issued, and its content.
+
+1. The issuer web frontend calls the Azure AD VC Service to generate a VC issuance request.
+
+1. The web frontend renders a link to the request as a QR code or a device-specific deep link (depending on the device).
+
+1. The holder scans the QR code or deep link from step 3 using a wallet app such as Microsoft Authenticator.
+
+1. The wallet downloads the request from the link. The request includes:
+
+ * DID of the issuer. This is used by the wallet app to resolve in ION to find the public keys and linked domains.
+
+ * URL with the VC manifest, which specifies the contract requirements to issue the VC. This can include id_token, self-attested attributes that must be provided, or the presentation of another VC.
+
+ * Look and feel of the VC (URL of the logo file, colors, etc.).
+
+1. The wallet validates the issuance requests and processes the contract requirements:
+
+ 1. Validates that the issuance request message is signed by the issuer's keys found in the DID document resolved in ION. This ensures that the message has not been tampered with.
+
+ 1. Validates that the DNS domain referenced in the issuerΓÇÖs DID document is owned by the issuer.
+
+ 1. Depending on the VC contract requirements, the wallet guides the holder to collect additional information, for example asking for self-issued attributes, or navigating through an OIDC flow to obtain an id_token.
+
+1. The wallet submits the artifacts required by the contract to the Azure AD VC service. The Azure AD VC service returns the VC, signed with the issuer's DID key, and the wallet securely stores the VC.
+
+For detailed information on how to build an issuance solution and architectural considerations, see [Plan your Azure Active Directory Verifiable Credentials issuance solution](plan-issuance-solution.md).
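+
+The following is a minimal sketch of steps 1-3 from the issuer web frontend's point of view. It is illustrative only: `create_issuance_request` is a hypothetical placeholder for the call the frontend makes to the Azure AD Verifiable Credentials service, and the request URL format is an assumption; only the third-party `qrcode` package is a real dependency.
+
+```python
+# Sketch of steps 1-3 above from the issuer web frontend's point of view.
+# create_issuance_request() is a hypothetical placeholder for the call the frontend
+# makes to the Azure AD Verifiable Credentials service; the URL shape is illustrative.
+import qrcode  # third-party package: pip install qrcode[pil]
+
+def create_issuance_request() -> str:
+    """Placeholder: return a request URL that the wallet can download (step 2)."""
+    return "openid://vc/?request_uri=https://contoso.example/issuance/requests/123"
+
+request_url = create_issuance_request()
+
+# Step 3: render the request as a QR code for desktop users,
+# or hand back request_url as a device-specific deep link on mobile.
+qrcode.make(request_url).save("issuance-request.png")
+print(f"Deep link: {request_url}")
+```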
+
+### Flow 2: Verifiable credential presentation
+
+![Verifiable credential presentation](media/introduction-to-verifiable-credentials-architecture/presentation.png)
+
+In this flow, a holder interacts with a relying party (RP) to present a VC as part of its authorization requirements. A minimal sketch of how the RP might handle the final verification callback follows the numbered steps below.
+
+1. The holder starts the flow by using a browser or native application to access the relying partyΓÇÖs web frontend.
+
+1. The web frontend calls the Azure AD VC Service to generate a VC presentation request.
+
+1. The web frontend renders a link to the request as a QR code or a device-specific deep link (depending on the device).
+
+1. The holder scans the QR code or deep link from step 3 using a wallet app such as Microsoft Authenticator.
+
+1. The wallet downloads the request from the link. The request includes:
+
+ * a [standards based request for credentials](https://identity.foundation/presentation-exchange/) of a schema or credentialType.
+
+ * the DID of the RP, which the wallet looks up in ION.
++
+1. The wallet validates the presentation request and finds stored VC(s) that satisfy the request. Based on the required VCs, the wallet guides the subject to select and consent to use the VCs.
+
+ * After the subject consents to use of the VC, the wallet generates a unique pairwise DID between the subject and the RP.
+
+ Then, the wallet sends a presentation response payload to the Azure AD VC Service signed by the subject. It contains:
+
+ * The VC(s) the subject consented to.
+
+ * The pairwise DID generated as the "subject" of the payload.
+
+ * The RP DID as the "audience" of the payload.
+
+1. The Azure AD VC service validates the response sent by the wallet. Depending on how the original presentation request was created in step 2, this validation can include checking the status of the presented VC with the VC issuer for cases such as revocation.
+
+1. Upon validation, the Azure AD VC service calls back the RP with the result.
+
+For detailed information on how to build a validation solution and architectural considerations, see [Plan your Azure Active Directory Verifiable Credentials verification solution](plan-verification-solution.md).
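+
+As a rough illustration of the final step above (the service calling back the RP), the sketch below shows how a relying party web app might receive the verification result. The route, payload fields, and status value are assumptions for illustration, not the documented callback contract of the Azure AD Verifiable Credentials service.
+
+```python
+# Minimal sketch of an RP endpoint receiving the verification callback.
+# The route, payload fields, and status values are assumptions for illustration.
+from flask import Flask, request, jsonify
+
+app = Flask(__name__)
+
+@app.route("/api/presentation-callback", methods=["POST"])
+def presentation_callback():
+    payload = request.get_json(force=True)
+    # A real handler would also verify the API key / state value it issued
+    # with the original presentation request before trusting the callback.
+    if payload.get("status") == "presentation_verified":
+        claims = payload.get("claims", {})
+        print("VC verified; subject claims:", claims)
+    return jsonify({"received": True})
+
+if __name__ == "__main__":
+    app.run(port=5000)
+```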
+
+## Key Takeaways
+
+Decentralized architectures can be used to enhance existing solutions and provide new capabilities.
+
+To deliver on the aspirations of the [Decentralized Identity Foundation](https://identity.foundation/) (DIF) and W3C [Design goals](https://www.w3.org/TR/did-core/), the following should be considered when creating a verifiable credential solution:
+
+* There are no central points of trust establishment between actors in the system. That is, trust boundaries are not expanded through federation because actors trust specific VCs.
+
+ * ION enables the discovery of any actor's decentralized identifier (DID).
+
+ * The solution enables verifiers to validate any verifiable credentials (VCs) from any issuer.
+
+ * The solution does not enable the issuer to control authorization of the subject or the verifier (relying party).
+
+* The actors operate in a decoupled manner, each capable of completing the tasks for their roles.
+
+ * Issuers service every VC request and do not discriminate on the requests serviced.
+
+ * Subjects own their VC once issued and can present their VC to any verifier.
+
+ * Verifiers can validate any VC from any subject or issuer.
+
+## Next steps
+
+Learn more about architecture for verifiable credentials:
+
+* [Plan your issuance solution](plan-issuance-solution.md)
+
+* [Plan your verification solution](plan-verification-solution.md)
+
+* [Get started with Azure Active Directory Verifiable Credentials](get-started-verifiable-credentials.md)
active-directory Issue Verify Verifiable Credentials Your Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/issue-verify-verifiable-credentials-your-tenant.md
Previously updated : 04/01/2021 Last updated : 07/20/2021
active-directory Plan Issuance Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/plan-issuance-solution.md
+
+ Title: Plan your Azure Active Directory Verifiable Credentials issuance solution (preview)
+description: Learn to plan your end-to-end issuance solution.
+
+ Last updated : 07/20/2021
+# Plan your Azure Active Directory Verifiable Credentials issuance solution (preview)
+
+ >[!IMPORTANT]
+> Azure Active Directory Verifiable Credentials is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+It's important to plan your issuance solution so that in addition to issuing credentials, you have a complete view of the architectural and business impacts of your solution. If you haven't done so, we recommend you view the [Azure Active Directory Verifiable Credentials architecture overview](introduction-to-verifiable-credentials-architecture.md) for foundational information.
+
+## Scope of guidance
+
+This article covers the technical aspects of planning for a verifiable credential issuance solution using Microsoft products to interoperate with the Identity Overlay Network (ION). The Microsoft solution for verifiable credentials follows the World Wide Web Consortium (W3C) [Verifiable Credentials Data Model 1.0](https://www.w3.org/TR/vc-data-model/) and [Decentralized Identifiers (DIDs) V1.0](https://www.w3.org/TR/did-core/) standards, so it can interoperate with non-Microsoft services. However, the examples in this content reflect the Microsoft solution stack for verifiable credentials.
+
+Out of scope for this content are topics covering supporting technologies that aren't specific to issuance solutions. For example, websites are used in a verifiable credential issuance solution but planning a website deployment isn't covered in detail.
+
+## Components of the solution
+
+As part of your plan for an issuance solution, you must design a solution that enables the interactions between the issuer, the user, and the verifier. You may issue more than one verifiable credential. The following diagram shows the components of your issuance architecture.
+
+### Microsoft VC issuance solution architecture
+
+![Components of an issuance solution](media/plan-issuance-solution/plan-issuance-solution-architecture.png)
++
+### Azure Active Directory tenant
+
+A prerequisite for running the Azure AD Verifiable Credentials service is that it's hosted in an Azure Active Directory (Azure AD) tenant. The Azure AD tenant provides an Identity and Access Management (IAM) control plane for the Azure resources that are part of the solution.
+
+Each tenant has a single instance of the Azure AD Verifiable Credentials service, and a single decentralized identifier (DID). The DID provides proof that the issuer owns the domain incorporated into the DID. The DID is used by the subject and the verifier to validate the issuer.
+
+### Microsoft Azure services
+
+![Components of an issuance solution, focusing on Azure services](media/plan-issuance-solution/plan-issuance-solution-azure-services.png)
+
+The **Azure Key Vault** service stores your issuer keys, which are generated when you initiate the Azure AD Verifiable Credentials issuance service. The keys and metadata are used to execute credential management operations and provide message security.
+
+Each issuer has a single key set used for signing, updating, and recovery. This key set is used for every issuance of every verifiable credential you produce.
+
+**Azure Storage** is used to store credential metadata and definitions; specifically, the rules and display files for your credentials.
+
+* Display files determine which claims are stored in the VC and how it's displayed in the holder's wallet. The display file also includes branding and other elements. Rules files are limited in size to 50 KB, while display files are limited to 150 KB. See [How to customize your verifiable credentials](../verifiable-credentials/credential-design.md).
+
+* Rules are an issuer-defined model that describes the required inputs of a verifiable credential, the trusted sources of the inputs, and the mapping of input claims to output claims.
+
+ * Input - A subset of the model in the rules file for client consumption. The subset must describe the set of inputs, where to obtain the inputs, and the endpoint to call to obtain a verifiable credential.
+
+* Rules and display files for different credentials can be configured to use different containers, subscriptions, and storage. For example, you can delegate permissions to different teams that own management of specific VCs.
+
+### Azure AD Verifiable Credentials service
+
+![Microsoft Azure AD Verifiable Credentials service](media/plan-issuance-solution/plan-issuance-azure-active-directory-verifiable-credential-services.png)
+
+The Azure AD Verifiable Credentials service enables you to issue and revoke VCs based on your configuration. The service:
+
+* Provisions the decentralized identifier (DID) and writes the DID document to ION, where it can be used by subjects and verifiers. Each issuer has a single DID per tenant.
+
+* Provisions key sets to Key Vault.
+
+* Stores the configuration metadata used by the issuance service and Microsoft Authenticator.
+
+### ION
+
+![ION](media/plan-issuance-solution/plan-issuance-solution-ion.png)
+
+Microsoft uses the [Identity Overlay Network (ION)](https://identity.foundation/ion/), [a Sidetree-based network](https://identity.foundation/sidetree/spec/) that uses Bitcoin's blockchain for decentralized identifier (DID) implementation. The DID document of the issuer is stored in ION and is used to perform cryptographic signature checks by parties to the transaction.
+
+### Microsoft Authenticator application
+
+![Microsoft Authenticator application](media/plan-issuance-solution/plan-issuance-solution-microsoft-authenticator.png)
+
+Microsoft Authenticator is the mobile application that orchestrates the interactions between the user, the Azure AD Verifiable Credentials service, and dependencies that are described in the contract used to issue VCs. It acts as a digital wallet in which the holder of the VC stores the VC, including the private key of the subject of the VC. Authenticator is also the mechanism used to present VCs for verification.
+
+### Issuance business logic
+
+![Issuance business logic](media/plan-issuance-solution/plan-issuance-solution-business-logic.png)
+
+Your issuance solution includes a web front end where users request a VC, an identity store and/or other attribute store to obtain values for claims about the subject, and other backend services.
+
+A web front end serves issuance requests to the subject's wallet by generating deep links or QR codes. Based on the configuration of the contract, other components might be required to satisfy the requirements to create a VC.
+
+These services provide supporting roles that don't necessarily need to integrate with ION or Azure AD Verifiable Credentials issuance service. This layer typically includes:
+
+* **Open ID Connect (OIDC)-compliant service or services** are used to obtain id_tokens needed to issue the VC. Existing identity systems such as Azure AD or Azure AD B2C can provide the OIDC-compliant service, as can custom solutions such as Identity Server.
+
+* **Attribute stores** - These might be outside of directory services and provide attributes needed to issue a VC. For example, a student information system might provide claims about degrees earned.
+
+* **Additional middle-tier services** that contain business rules for lookups, validating, billing, and any other runtime checks and workflows needed to issue credentials.
+
+For more information on setting up your web front end, see the tutorial [Configure your Azure AD to issue verifiable credentials](../verifiable-credentials/enable-your-tenant-verifiable-credentials.md).
+
+## Credential Design Considerations
+
+Your specific use cases determine your credential design. The use case will determine:
+
+* the interoperability requirements
+
+* the way users will need to prove their identity to get their VC
+
+* the claims that are needed in the credentials
+
+* if credentials will ever need to be revoked
+
+
+
+### Credential Use Cases
+
+With Azure AD Verifiable Credentials, the most common credential use cases are:
+
+**Identity Verification**: a credential is issued based on multiple criteria. This may include verifying the authenticity of government-issued documents like a passport or driver's license and correlating the information in that document with other information such as:
+
+* a user's selfie
+
+* verification of liveness.
+
+This kind of credential is a good fit for identity onboarding scenarios of new employees, partners, service providers, students, and other instances where identity verification is essential.
+
+
+
+![Identity verification use case](media/plan-issuance-solution/plan-issuance-solution-identity-verification-use-cases.png)
+
+**Proof of employment/membership**: a credential is issued to prove a relationship between the user and an institution. This kind of credential is a good fit to access loosely coupled business-to-business applications, such as retailers offering discounts to employees or students. One main value of VCs is their portability: Once issued, the user can use the VC in many scenarios.
+
+![Proof of employment use case](media/plan-issuance-solution/plan-issuance-solution-employment-proof-use-cases.png)
+
+For more use cases, see [Verifiable Credentials Use Cases (w3.org)](https://www.w3.org/TR/vc-use-cases/).
+
+### Credential interoperability
+
+As part of the design process, investigate industry-specific schemas, namespaces, and identifiers to which you can align to maximize interoperability and usage. Examples can be found in [Schema.org](https://schema.org/) and the [DIF - Claims and Credentials Working Group.](https://identity.foundation/working-groups/claims-credentials.html)
+
+Note that common schemas are an area where standards are still emerging. One example of such an effort is the [Verifiable Credentials for Education Task Force](https://github.com/w3c-ccg/vc-ed). We encourage you to investigate and contribute to emerging standards in your organization's industry.
+
+### Credential Attributes
+
+After establishing the use case for a credential, you need to decide what attributes to include in the credential. Verifiers can read the claims in the VC presented by the users.
+
+In addition to the industry-specific standards and schemas that might be applicable to your scenarios, consider the following aspects:
+
+* **Minimize private information**: Meet the use cases with the minimal amount of private information necessary. For example, a VC used for e-commerce websites that offer discounts to employees and alumni can be fulfilled by presenting the credential with just the first and last name claims. Additional information such as hiring date, title, department, etc. is not needed.
+
+* **Favor abstract claims**: Each claim should meet the need while minimizing the detail. For example, a claim called "ageOver" with discrete values such as "13", "21", or "60" is more abstract than a date of birth claim.
+
+* **Plan for revocability**: We recommend you define an index claim to enable mechanisms to find and revoke credentials. You are limited to defining one index claim per contract. It is important to note that values for indexed claims are not stored in the backend, only a hash of the claim value. For more information, see [Revoke a previously issued verifiable credential](../verifiable-credentials/how-to-issuer-revoke.md). A minimal hashing sketch illustrating this point follows at the end of this section.
+
+For additional considerations on credential attributes, refer to the [Verifiable Credentials Data Model 1.0 (w3.org)](https://www.w3.org/TR/vc-data-model/) specification.
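+
+The following minimal sketch illustrates the point about indexed claims: only a hash of the claim value needs to be kept in order to find a credential later. SHA-256 and the normalization shown are assumptions for illustration, not a statement about the hashing scheme the service actually uses.
+
+```python
+# Illustrative only: shows how an index claim can be matched by hash so the
+# raw value never needs to be stored. SHA-256 is used here purely as an example.
+import hashlib
+
+def hash_index_claim(value: str) -> str:
+    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()
+
+stored_hash = hash_index_claim("alice@woodgrove.com")   # kept by the issuer backend
+
+# Later, to find the credential to revoke, hash the lookup value and compare.
+lookup = hash_index_claim("alice@woodgrove.com")
+print("match:", lookup == stored_hash)
+```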
+
+## Plan quality attributes
+
+### Plan for performance
+
+As with any solution, you must plan for performance. The key areas to focus on are latency, throughput, storage, and scalability. During initial phases of a release cycle, performance should not be a concern. However, when adoption of your issuance solution results in many verifiable credentials being issued, performance planning might become a critical part of your solution.
+
+The following provides areas to consider when planning for performance:
+
+* The Azure AD Verifiable Credentials issuance service is deployed in West Europe, North Europe, West US 2, and West Central US Azure regions. You do not select a region to deploy the service to.
+
+* To limit latency, deploy your issuance frontend website, key vault, and storage in the region listed above that is closest to where requests are expected to originate.
+
+Model based on throughput:
+* The Issuer service is subject to [Azure Key Vault service limits](../../key-vault/general/service-limits.md).
+
+* For Azure Key Vault, there are three signing operations involved in each VC issuance:
+
+ * One for issuance request from the website
+
+ * One for the VC created
+
+ * One for the contract download
+
+* Maximum signing performance of a Key Vault is approximately 2,000 signings per 10 seconds, which is about 12,000 signings per minute. This means your solution can support up to 4,000 VC issuances per minute (see the worked calculation after this list).
+
+* You cannot control throttling; however, we recommend you read [Azure Key Vault throttling guidance](../../key-vault/general/overview-throttling.md).
+
+* If you are planning a large rollout and onboarding of VCs, consider batching VC creation to ensure you do not exceed limits.
+
+* The issuance service is subject to Azure storage limits. In typical use cases storage should not be a concern. However, if you feel you might exceed storage limits or feel storage might be a bottleneck, review the following:
+
+ * We recommend reading [Scalability and performance targets for Blob storage](../../storage/blobs/scalability-targets.md) as part of your planning process. Azure AD Verifiable Credentials issuance service reads rules and displays files, and results are cached by the service.
+
+ * We also recommend you review [Performance and scalability checklist for Blob storage - Azure Storage](../../storage/blobs/storage-performance-checklist.md).
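+
+The following is a minimal worked estimate of the issuance throughput figures above. It is a rough sketch only; the signing rate and the three-signings-per-issuance figure come from the bullets above, and actual limits depend on Key Vault throttling.
+
+```bash
+# Rough issuance throughput estimate (figures taken from the bullets above)
+signings_per_10_seconds=2000
+signings_per_minute=$((signings_per_10_seconds * 6))   # ~12,000 signings per minute
+signings_per_issuance=3                                # request + VC + contract download
+echo "Estimated maximum VC issuances per minute: $((signings_per_minute / signings_per_issuance))"  # ~4,000
+```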
+
+As part of your plan for performance, determine what you will monitor to better understand the performance of the solution. In addition to application-level website monitoring, consider the following as you define your VC issuance monitoring strategy:
+
+For scalability, consider implementing metrics for the following:
+
+ * Define the logical phases of your issuance process. For example:
+
+ * Initial request
+
+ * Servicing of the QR code or deep link
+
+ * Attribute lookup
+
+ * Calls to Azure AD Verifiable Credentials issuance service
+
+ * Credential issued
+
+ * Define metrics based on the phases:
+
+ * Total count of requests (volume)
+
+ * Requests per unit of time (throughput)
+
+ * Time spent (latency)
+
+* Monitor Azure Key Vault and Storage using the following:
+
+ * [Azure Key Vault monitoring and alerting](../../key-vault/general/alert.md)
+
+ * [Monitoring Azure Blob Storage](../../storage/blobs/monitor-blob-storage.md)
+
+* Monitor the components used for your business logic layer.
+
+### Plan for reliability
+
+To plan for reliability, we recommend:
+
+* After you define your availability and redundancy goals, use the following guides to understand how to achieve your goals:
+
+ * [Azure Key Vault availability and redundancy - Azure Key Vault](../../key-vault/general/disaster-recovery-guidance.md)
+
+ * [Disaster recovery and storage account failover - Azure Storage](../../storage/common/storage-disaster-recovery-guidance.md)
+
+* For frontend and business layer, your solution can manifest in an unlimited number of ways. As with any solution, for the dependencies you identify, ensure that the dependencies are resilient and monitored.
+
+In the rare event that the Azure AD Verifiable Credentials issuance service, Azure Key Vault, or Azure Storage services become unavailable, the entire solution becomes unavailable.
+
+### Plan for compliance
+
+Your organization may have specific compliance needs related to your industry, type of transactions, or country of operation.
+
+**Data residency**: The Azure AD Verifiable Credentials issuance service is deployed in a subset of Azure regions. The service is used for compute functions only. We do not store values of verifiable credentials in Microsoft systems. However, as part of the issuance process, personal data is sent and used when issuing VCs. Using the VC service should not impact data residency requirements. If, as part of identity verification, you store any personal information, store it in a manner and region that meets your compliance requirements. For Azure-related guidance, visit the Microsoft Trust Center website.
+
+**Revoking credentials**: Determine if your organization will need to revoke credentials. For example, an admin may need to revoke credentials when an employee leaves the company. Or if a credential is issued for a driver's license, and the holder is caught doing something that would cause the driver's license to be suspended, the VC might need to be revoked. For more information, see [Revoke a previously issued verifiable credential](how-to-issuer-revoke.md).
+
+**Expiring credentials**: Determine if you will expire credentials, and if so, under what circumstances. For example, if you issue a VC as proof of having a driver's license, it might expire after a few years. If you issue a VC as a verification of an association with a user, you may want to expire it annually to ensure users come back annually to get the most updated version of the VC.
+
+## Plan for operations
+
+When planning for operations, it is critical you develop a schema to use for troubleshooting, reporting, and distinguishing various customers you support. Additionally, if the operations team is responsible for executing VC revocation, that process must be defined. Each step in the process should be correlated so that you can determine which log entries can be associated with each unique issuance request. For auditing, we recommend you capture each attempt of credential issuing individually. Specifically:
+
+* Generate unique transaction IDs that customers and support engineers can refer to as needed.
+
+* Devise a mechanism to correlate the logs of Azure Key Vault transactions to the transaction IDs of the issuance portion of the solution.
+
+* If you are an identity verification service issuing VCs on behalf of multiple customers, use the customer or contract ID for customer-facing reporting and billing, and for monitoring and mitigation.
+
+## Plan for security
+
+As part of your design considerations focused on security, we recommend the following:
+
+* For key management:
+
+ * Create a dedicated Key Vault for VC issuance. Limit Azure Key Vault permissions to the Azure AD Verifiable Credentials issuance service and the issuance service frontend website service principal (see the sketch after this list).
+
+ * Treat Azure Key Vault as a highly privileged system: Azure Key Vault issues credentials to customers. We recommend that no human identities have standing permissions over the Azure Key Vault service. Administrators should have only just-in-time access to Key Vault. For more best practices for Azure Key Vault usage, refer to [Azure Security Baseline for Key Vault](https://docs.microsoft.com/security/benchmark/azure/baselines/key-vault-security-baseline).
+
+* For the service principal that represents the issuance frontend website:
+
+ * Define a dedicated service principal to authorize access to Azure Key Vault. If your website is on Azure, we recommend that you use an [Azure Managed Identity](../managed-identities-azure-resources/overview.md).
+
+ * Treat the service principal that represents the website and the user as a single trust boundary. While it is possible to create multiple websites, there is only one key set for the issuance solution.
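+
+The following Azure CLI sketch illustrates creating a dedicated vault and restricting it to the service principals involved in issuance, as mentioned in the list above. The resource names, location, and service principal IDs are placeholders, and the permission lists are assumptions; confirm the exact permissions the Azure AD Verifiable Credentials issuance service requires before applying them.
+
+```azurecli
+# Placeholder names and IDs - replace with your own values
+RESOURCE_GROUP=<RESOURCE_GROUP>
+LOCATION=<LOCATION>
+VAULT_NAME=<DEDICATED_VC_VAULT_NAME>
+VC_SERVICE_SP=<VC_ISSUANCE_SERVICE_APP_ID>
+WEBSITE_SP=<ISSUANCE_WEBSITE_APP_ID>
+
+# Create a Key Vault dedicated to VC issuance
+az keyvault create --name $VAULT_NAME --resource-group $RESOURCE_GROUP --location $LOCATION
+
+# Grant only the VC issuance service and the website service principal access,
+# using the minimum key permissions the service needs (shown here as get and sign)
+az keyvault set-policy --name $VAULT_NAME --spn $VC_SERVICE_SP --key-permissions get sign
+az keyvault set-policy --name $VAULT_NAME --spn $WEBSITE_SP --key-permissions get sign
+```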
+
+For security logging and monitoring, we recommend the following:
+
+* Enable logging and alerting for Azure Key Vault to track credential issuance operations, key extraction attempts, and permission changes, and to monitor and send alerts for configuration changes (see the sketch after this list). More information can be found at [How to enable Key Vault logging](../../key-vault/general/howto-logging.md).
+
+* Enable logging of your Azure Storage account to monitor and send alerts for configuration changes. More information can be found at [Monitoring Azure Blob Storage](../../storage/blobs/monitor-blob-storage.md).
+
+* Archive logs in a security information and event management (SIEM) system, such as [Azure Sentinel](https://azure.microsoft.com/services/azure-sentinel/), for long-term retention.
+
+* Mitigate spoofing risks by using the following:
+
+ * DNS verification to help customers identify issuer branding.
+
+ * Domain names that are meaningful to end users.
+
+ * Trusted branding the end user recognizes.
+
+* Mitigate distributed denial of service (DDOS) and Key Vault resource exhaustion risks. Every request that triggers a VC issuance request generates Key Vault signing operations that accrue towards service limits. We recommend protecting traffic by incorporating authentication or captcha before generating issuance requests.
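+
+As one possible way to enable the Key Vault logging and alerting mentioned above, the following sketch sends Key Vault audit events to a Log Analytics workspace through an Azure Monitor diagnostic setting. The resource IDs and the setting name are placeholders.
+
+```azurecli
+# Placeholder resource IDs - replace with your own values
+KEYVAULT_ID=<KEY_VAULT_RESOURCE_ID>
+WORKSPACE_ID=<LOG_ANALYTICS_WORKSPACE_RESOURCE_ID>
+
+# Send Key Vault audit events (signing operations, access policy changes) to Log Analytics
+az monitor diagnostic-settings create \
+  --name vc-keyvault-logs \
+  --resource $KEYVAULT_ID \
+  --workspace $WORKSPACE_ID \
+  --logs '[{"category":"AuditEvent","enabled":true}]'
+```
+
+You can then build alert rules on top of the collected logs, for example for unexpected key extraction attempts or access policy changes.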
+
+For guidance on managing your Azure environment, we recommend you review [Azure Security Benchmark](https://docs.microsoft.com/security/benchmark/azure/) and [Securing Azure environments with Azure Active Directory](https://aka.ms/AzureADSecuredAzure). These guides provide best practices for managing the underlying Azure resources, including Azure Key Vault, Azure Storage, websites, and other Azure-related services and capabilities.
+
+## Additional considerations
+
+When you complete your POC, gather all the information and documentation generated, and consider tearing down the issuer configuration. This will help avoid issuing verifiable credentials after your POC timeframe expires.
+
+For more information on Key Vault implementation and operation, refer to [Best practices to use Key Vault](../../key-vault/general/best-practices.md). For more information on Securing Azure environments with Active Directory, refer to [Securing Azure environments with Azure Active Directory](https://aka.ms/AzureADSecuredAzure).
+
+## Next steps
+
+[Read the architectural overview](introduction-to-verifiable-credentials-architecture.md)
+
+[Plan your verification solution](plan-verification-solution.md)
+
+[Get started with verifiable credentials](get-started-verifiable-credentials.md)
active-directory Plan Verification Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/plan-verification-solution.md
+
+ Title: Plan your Azure Active Directory Verifiable Credentials verification solution (preview)
+description: Learn foundational information to plan and design your verification solution
+documentationCenter: ''
+++++ Last updated : 07/20/2021++++
+# Plan your Azure Active Directory Verifiable Credentials verification solution (Preview)
+
+>[!IMPORTANT]
+> Azure Active Directory Verifiable Credentials is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Microsoft's Azure Active Directory Verifiable Credentials (Azure AD VC) service enables you to trust proofs of user identity without expanding your trust boundary by creating accounts or federating with another identity provider. By using verifiable credentials based on an open standard, a verification exchange enables applications to request credentials that are not bound to a specific domain. This makes it easier to request and verify credentials at scale.
+
+If you haven't already, we suggest you review the [Azure AD Verifiable Credentials architecture overview](introduction-to-verifiable-credentials-architecture.md). You may also want to review [Plan your Azure AD Verifiable Credentials issuance solution](plan-issuance-solution.md).
+
+## Scope of guidance
+
+This content covers the technical aspects of planning for a verifiable credential (VC) verification solution using Microsoft products and services. The solution interfaces with the Identity Overlay Network (ION) which acts as the decentralized public key infrastructure (DPKI).
+
+Supporting technologies that are not specific to verification solutions are out of scope. For example, websites are used in a verifiable credential verification solution but planning a website deployment is not covered in detail.
+
+As you plan your verification solution you must consider what business capability is being added or modified and what IT capabilities can be leveraged or must be added to create the solution. You must also consider what training is needed for the people involved in the business process as well as the people that support the end users and staff of the solution. These topics are not covered in this content. We recommend reviewing the [Microsoft Azure Well-Architected Framework](https://docs.microsoft.com/azure/architecture/framework/) for information covering these topics.
+
+## Components of the solution
+
+As part of your plan for a verification solution, you must enable the interactions between the verifier, the subject, and the issuer. In this article, the terms relying party and verifier are used interchangeably. The following diagram shows the components of your verification architecture.
+
+![Components of a verification solution](media/plan-verification-solution/verification-solution-architecture.png)
++
+### Azure AD Verifiable Credentials service
+
+In the context of a verifier solution, the Azure AD Verifiable Credentials service is the interface between the Microsoft components of the solution and ION. The service provisions the key set to Key Vault, provisions the decentralized identifier (DID), and writes the DID document to ION, where it can be used by subjects and issuers.
+
+### Azure Active Directory tenant
+
+The service requires an Azure AD tenant that provides an Identity and Access Management (IAM) control plane for the Azure resources that are part of the solution. There is a single instance of the Azure AD VC service within a tenant, and it issues a single DID document representing the verifier. If you have multiple relying parties using your verification service, they all use the same verifier DID. The verifier DID provides pointers to the public key that allows subjects and issuers to validate messages that come from the relying party.
+
+### Azure Key Vault
+
+![Azure Key Vault](./media/plan-verification-solution/verification-solution-key-vault.png)
+
+The Azure Key Vault service stores your verifier keys, which are generated when you enable the Azure AD Verifiable Credentials issuance service. The keys are used to provide message security. Each verifier has a single key set used for signing, updating, and recovering VCs. This key set is used each time you service a verification request. The Microsoft key set currently uses Elliptic Curve Cryptography (ECC) [SECP256k1](https://en.bitcoin.it/wiki/Secp256k1). We are exploring other cryptographic signature schemes that will be adopted by the broader DID community.
+
+### Azure AD VC APIs and SDKs
+
+![Azure AD VC APIs and SDKs](./media/plan-verification-solution/verification-solution-tools.png)
+
+Application programming interfaces (APIs) and a software developer kit (SDK) provide developers a method to abstract interactions between components of the solution to execute verification operations.
+
+### ION
+
+![Azure AD VC ION](./media/plan-verification-solution/verification-solution-ion.png)
+
+Verifiable credential solutions use a decentralized ledger system to record transactions. Microsoft uses the [Identity Overlay Network (ION)](https://identity.foundation/ion/), [a Sidetree-based network](https://identity.foundation/sidetree/spec/) that uses Bitcoin as its blockchain-styled ledger for decentralized identifier (DID) implementation. The DID document of the issuer is stored in ION and used by parties to the transaction to perform cryptographic signature checks.
+
+### Microsoft Authenticator application
+
+![Microsoft Authenticator application](media/plan-verification-solution/verification-solution-authenticator.png)
+
+Microsoft Authenticator is the mobile application that orchestrates the interactions between the relying party, the user, the Azure AD Verifiable Credentials issuance service, and dependencies described in the contract used to issue VCs. It acts as a digital wallet in which the holder of the VC stores the VC. It is also the mechanism used to present VCs for verification.
+
+### Relying party (RP)
+
+![Relying party components](media/plan-verification-solution/verification-solution-relying-party.png)
+
+#### Web front end
+
+The relying party web frontend uses the Azure AD VC APIs or SDK to verify VCs by generating deep links or QR codes that are consumed by the subject's wallet. Depending on the scenario, the frontend can be a publicly accessible or internal website to enable end-user experiences that require verification. However, the endpoints that the wallet accesses must be publicly accessible. Specifically, it controls redirection to the wallet with specific request parameters. This is accomplished using the Microsoft-provided APIs and SDK.
+
+#### Business logic
+
+You can create new logic or use existing logic that is specific to the relying party, and enhance that logic with the presentation of VCs.
+
+## Scenario-specific designs
+
+The following are examples of designs to satisfy specific use cases. The first is for account onboarding, used to reduce the time, cost, and risk associated with onboarding new employees. The second is for account recovery, which enables an end user to recover or unlock their account using a self-service mechanism. The third is for accessing high-value applications and resources, specifically for business-to-business use cases where access is given to people that work for other companies.
+
+### Account onboarding
+
+Verifiable credentials can also be used to enable faster onboarding by replacing some human interactions. VCs can be used to onboard employees, students, citizens, or others to access services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a VC verifying their identity to activate a badge delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a VC to prove their identity and gain access.
+
+![Account onboarding scenario](media/plan-verification-solution/verification-solution-onboarding.png)
+
+#### Additional elements
+
+**Onboarding portal**: This is a web frontend that orchestrates the Azure AD VC APIs/SDKs calls for VC presentation and validation, and the logic to onboard accounts.
+
+**Custom logic / workflows**: Specific logic with organization-specific steps before and after updating the user account. This might include approval workflows, additional validations, logging, notifications, etc.
+
+**Target identity systems**: Organization-specific identity repositories that the onboarding portal needs to interact with while onboarding subjects. The systems to integrate are determined based on the kinds of identities you want to onboard with VC validation. Common scenarios of identity verification for onboarding include:
+
+* External Identities such as vendors, partners, suppliers, and customers, which in centralized identity systems are onboarded to Azure AD using APIs to issue business-to-business (B2B) invitations, or entitlement management assignment to packages.
+
+* Employee identities, which in centralized identity systems are already onboarded through human resources (HR) systems. In this case, the identity verification might be integrated as part of existing stages of HR workflows.
+
+#### Design Considerations
+
+* **Issuer**: Account onboarding is a good fit for an external identity proofing service as the issuer of the VCs. Examples of checks for onboarding include: liveness check, government-issued document validation, address or phone number confirmation, etc.
+
+* **Storing VC Attributes**: Where possible do not store attributes from VCs in your app-specific store. Be especially careful with personal data. If this information is required by specific flows within your applications, consider asking for the VC to retrieve the claims on demand.
+
+* **VC Attribute correlation with backend systems**: When defining the attributes of the VC with the issuer, establish a mechanism to correlate information in the backend system after the user presents the VC. This typically uses a time-bound, unique identifier in the context of your RP in combination with the claims you receive. Some examples:
+
+ * **New employee**: When the HR workflow reaches the point where identity proofing is required, the RP can generate a link with a time-bound unique identifier and send it to the candidate's email address on the HR system. This unique identifier should be sufficient to correlate information such as Firstname, LastName from the VC verification request to the HR record or underlying data. The attributes in the VC can be used to complete user attributes in the HR system, or to validate accuracy of user attributes about the employee.
+
+ * **External identities** - invitation: When an existing user in your organization invites an external user to be onboarded in the target system, the RP can generate a link with a unique identifier that represents the invitation transaction and sends it to the external user's email address. This unique identifier should be sufficient to correlate the VC verification request to the invitation record or underlying data and continue the provisioning workflow. The attributes in the VC can be used to validate or complete the external user attributes.
+
+ * **External identities** - self-service: When external identities sign up to the target system through self-service (for example, a B2C application), the attributes in the VC can be used to populate the initial attributes of the user account. The VC attributes can also be used to find out if a profile already exists.
+
+* **Interaction with target identity systems**: The service-to-service communication between the web front end and your target identity systems needs to be secured as a highly privileged system, because it can create accounts. Grant the web front end the least privileged roles possible. Some examples include (a CLI sketch follows this list):
+
+ * To create a new user in Azure AD, the RP website can use a service principal that is granted the MS Graph scope of User.ReadWrite.All to create users, and the scope UserAuthenticationMethod.ReadWrite.All to reset authentication methods.
+
+ * To invite users to Azure AD using B2B collaboration, the RP website can use a service principal that is granted the MS Graph scope of User.Invite.All to create invitations.
+
+ * If your RP is running in Azure, use Managed Identities to call Microsoft Graph; this removes the risks of managing service principal credentials in code or configuration files. To learn more about Managed identities, see [Managed identities for Azure resources](../managed-identities-azure-resources/overview.md).
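+
+The following is a hedged Azure CLI sketch of granting one of the Microsoft Graph application permissions listed above to the onboarding portal's app registration. The app ID and the app role ID are placeholders; look up the role ID for the specific Graph permission you need (for example, User.Invite.All) before running it.
+
+```azurecli
+# Placeholder values - replace with your RP website's app registration and the desired Graph app role ID
+RP_APP_ID=<RP_WEBSITE_APP_ID>
+GRAPH_API_ID=00000003-0000-0000-c000-000000000000   # Microsoft Graph
+ROLE_ID=<APP_ROLE_ID_FOR_THE_GRAPH_PERMISSION>
+
+# Add the application permission (app role) to the app registration
+az ad app permission add --id $RP_APP_ID --api $GRAPH_API_ID --api-permissions $ROLE_ID=Role
+
+# Grant tenant-wide admin consent for the permission
+az ad app permission admin-consent --id $RP_APP_ID
+```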
+
+### Accessing high-value applications inside organizations
+
+Verifiable credentials can be used as additional proof for accessing sensitive applications inside the organization. For example, VCs can be used to provide employees with access to line-of-business applications based on achieving specific criteria, such as a certification.
+
+![Access inside of the trust boundary](media/plan-verification-solution/inside-trust-boundary-access.png)
+
+#### Additional elements
+
+**Relying party web frontend**: This is the web frontend of the application that is enhanced through Azure AD Verifiable Credential SDK or API calls for VC presentation and validation, based on your business requirements.
+
+**User access authorization logic**: Logic layer in the application that authorizes user access and is enhanced to consume the user attributes inside the VC to make authorization decisions.
+
+**Other backend services and dependencies**: Represents the rest of the logic of the application, which typically is unchanged by the inclusion of identity proofing through VCs.
+
+#### Design Considerations
+
+* **Goal**: The goal of the scenario determines what kind of credential and issuer is needed. Typical scenarios include:
+
+ * **Authorization**: In this scenario, the user presents the VC to make an authorization decision. VCs designed for proof of completion of a training or holding a specific certification are a good fit for this scenario. The VC attributes should contain fine-grained information conducive to authorization decisions and auditing. For example, if the VC is used to certify the individual is trained and can access sensitive financial apps, the app logic can check the department claim for fine-grained authorization, and use the employee ID for audit purposes.
+
+ * **Confirmation of identity verification**: In this scenario, the goal is to confirm that the same person who initially onboarded is indeed the one attempting to access the high-value application. A credential from an identity verification issuer would be a good fit, and the application logic should validate that the attributes from the VC align with the user who logged in to the application.
+
+* **Check Revocation**: When using VCs to access sensitive resources, it is common to check the status of the VC with the original issuer and deny access for revoked VCs. When working with the issuers, ensure that revocation is explicitly discussed as part of the design of your scenario.
+
+* **User Experience**: When using VCs to access sensitive resources, there are two patterns you can consider.
+
+ * **Step-up authentication**: Users start the session with the application using existing authentication mechanisms. Users must present a VC for specific high-value operations within the application, such as approvals of business workflows. This is a good fit for scenarios where such high-value operations are easy to identify and update within the application flows.
+
+ * **Session establishment**: Users must present a VC as part of initiating the session with the application. This is a good fit when the nature of the entire application is high-value.
+
+### Accessing applications outside organization boundaries
+
+Verifiable credentials can also be used by relying parties that want to grant access or benefits based on membership in, or an employment relationship with, a different organization. For example, an e-commerce portal can offer benefits such as discounts to employees of a particular company, students of a given institution, and so on.
+
+The decentralized nature of verifiable credentials enables this scenario without establishing federation relationships.
+
+![Access outside of the trust boundary](media/plan-verification-solution/outside-trust-boundary-access.png)
+
+#### Additional elements
+
+**Relying party web frontend**: This is the web frontend of the application that is enhanced through Azure AD Verifiable Credential SDK or API calls for VC presentation and validation, based on your business requirements.
+
+**User access authorization logic**: Logic layer in the application that authorizes user access and is enhanced to consume the user attributes inside the VC to make authorization decisions.
+
+**Other backend services and dependencies**: Represents the rest of the logic of the application, which typically is unchanged by the inclusion of identity proofing through VCs.
+
+#### Design Considerations
+
+* **Goal**: The goal of the scenario determines what kind of credential and issuer is needed. Typical scenarios include:
+
+ * **Authentication**: In this scenario, a user must have possession of a VC to prove employment or a relationship with a particular organization or organizations. In this case, the RP should be configured to accept VCs issued by the target organizations.
+
+ * **Authorization**: Based on the application requirements, the applications might consume the VC attributes for fine-grained authorization decisions and auditing. For example, if an e-commerce website offers discounts to employees of the organizations in a particular location, they can validate this based on the country claim in the VC (if present).
+
+* **Check Revocation**: When using VCs to access sensitive resources, it is common to check the status of the VC with the original issuer and deny access for revoked VCs. When working with the issuers, ensure that revocation is explicitly discussed as part of the design of your scenario.
+
+* **User Experience**: Users can present a VC as part of initiating the session with the application. Typically, applications also provide an alternative method to start the session to accommodate cases where users don't have VCs.
++
+### Account recovery
+
+Verifiable credentials can be used as an approach to account recovery. For example, when a user needs to recover their account they might access a website that requires them to present a VC and initiate an Azure AD credential reset by calling MS Graph APIs as shown in the following diagram.
+
+Note: While the scenario we describe in this section is specific to recovering Azure AD accounts, this approach can also be used to recover accounts in other systems.
+
+![Account recovery solution](media/plan-verification-solution/account-recovery.png)
+
+#### Additional Elements
+
+**Account portal**: This is a web front end that orchestrates the API or SDK calls for VC presentation and validation. This orchestration can include Microsoft Graph calls to recover accounts in Azure AD.
+
+**Custom logic or workflows**: Logic with organization-specific steps before and after updating the user account. This might include approval workflows, additional validations, logging, notifications, etc.
+
+**Microsoft Graph**: Exposes representational state transfer (REST) APIs and client libraries to access Azure AD data that is used to perform account recovery.
+
+**Azure AD enterprise directory**: This is the Azure AD tenant that contains the accounts that are being created or updated through the account portal.
+
+#### Design considerations
+
+**VC Attribute correlation with Azure AD**: When defining the attributes of the VC in collaboration with the issuer, establish a mechanism to correlate information with internal systems based on the claims in the VC and user input. For example, if you have an identity verification provider (IDV) verify identity prior to onboarding employees, ensure that the issued VC includes claims that would also be present in an internal system such as a human resources system for correlation. This might be a phone number, address, or date of birth. In addition to claims in the VC, the RP can ask for some information such as the last 4 digits of their social security number (SSN) as part of this process.
+
+**Role of VCs with Existing Azure AD Credential Reset Capabilities**: Azure AD has a built-in self-service password reset (SSPR) capability. Verifiable credentials can be used to provide an additional way to recover, particularly in cases where users do not have access to, or have lost control of, the SSPR method; for example, they've lost both their computer and mobile device. In this scenario, the user can re-obtain a VC from an identity proof issuer and present it to recover their account.
+
+Similarly, you can use a VC to generate a temporary access pass that will allow users to reset their MFA authentication methods without a password.
+
+**Authorization**: Create an authorization mechanism such as a security group that the RP checks before proceeding with the credential recovery. For example, only users in specific groups might be eligible to recover an account with a VC.
+
+**Interaction with Azure AD**: The service-to-service communication between the web frontend and Azure AD must be secured as a highly privileged system, because it can reset employees' credentials. Grant the web frontend the least privileged roles possible. Some examples include:
+
+* Grant the RP website the ability to use a service principal granted the MS Graph scope UserAuthenticationMethod.ReadWrite.All to reset authentication methods. Don't grant User.ReadWrite.All, which enables the ability to create and delete users.
+
+* If your RP is running in Azure, use Managed Identities to call Microsoft Graph; this removes the risks of managing service principal credentials in code or configuration files (see the sketch after this list). For more information, see [Managed identities for Azure resources](../managed-identities-azure-resources/overview.md).
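+
+If the account portal runs as an Azure App Service web app, the following sketch shows one way to enable a system-assigned managed identity for it so it can call Microsoft Graph without storing credentials. The app and resource group names are placeholders, and granting the managed identity the required Graph application roles (for example, UserAuthenticationMethod.ReadWrite.All) is a separate step not shown here.
+
+```azurecli
+# Placeholder names - replace with your own values
+WEBAPP_NAME=<ACCOUNT_PORTAL_WEBAPP_NAME>
+RESOURCE_GROUP=<RESOURCE_GROUP>
+
+# Enable a system-assigned managed identity on the web app;
+# the returned principalId identifies the identity to which Graph app roles are later assigned
+az webapp identity assign --name $WEBAPP_NAME --resource-group $RESOURCE_GROUP
+```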
+
+## Plan for identity management
+
+Below are some IAM considerations when incorporating VCs into relying parties. Relying parties are typically applications.
+
+### Authentication
+
+* The subject of a VC must be a human.
+
+* Presentation of VCs must be interactively performed by a human VC holder, who holds the VC in their wallet. Non-interactive flows such as on-behalf-of are not supported.
+
+### Authorization
+
+* A successful presentation of the VC can be considered a coarse-grained authorization gate by itself. The VC attributes can also be consumed for fine-grained authorization decisions.
+
+* Determine if an expired VC has meaning in your application; if so, check the value of the "exp" claim (the expiration time) of the VC as part of the authorization checks. One example where expiration is not relevant is requiring a government-issued document such as a driver's license to validate whether the subject is older than 18. The date of birth claim is valid, even if the VC is expired.
+
+* Determine if a revoked VC has meaning to your authorization decision.
+
+ * If it is not relevant, then skip the call to the status check API (which is on by default).
+
+ * If it is relevant, add the proper handling of exceptions in your application.
+
+### User Profiles
+
+You can use information in presented VCs to build a user profile. If you want to consume attributes to build a profile, consider the following.
+
+* When the VC is issued, it contains a snapshot of attributes as of issuance. VCs might have long validity periods, and you must determine the age of attributes that you will accept as sufficiently fresh to use as a part of the profile.
+
+* If a VC needs to be presented every time the subject starts a session with the RP, consider using the output of the VC presentation to build a non-persistent user profile with the attributes. This helps to reduce privacy risks associated with storing user properties at rest. If the subject's attributes need to be persisted locally by the application, only store the minimal set of claims required by your application (as opposed to storing the entire content of the VC).
+
+* If the application requires a persistent user profile store:
+
+ * Consider using the "sub" claim as an immutable identifier of the user. This is an opaque unique attribute that will be constant for a given subject/RP pair.
+
+ * Define a mechanism to deprovision the user profile from the application. Due to the decentralized nature of the Azure AD Verifiable Credentials system, there is no application user provisioning lifecycle.
+
+ * Do not store personal data claims returned in the VC token.
+
+ * Only store claims needed for the logic of the relying party.
+
+## Plan for performance
+
+As with any solution, you must plan for performance. Focus areas include latency, throughput, storage, and scalability. During initial phases of a release cycle, performance should not be a concern. However, when adoption of your issuance solution results in many verifiable credentials being issued, performance planning might become a critical part of your solution.
+
+The following provides areas to consider when planning for performance:
+
+* The Azure AD Verifiable Credentials issuance service is deployed in the West Europe, North Europe, West US 2, and West Central US Azure regions. To limit latency, deploy your verification frontend (website) and key vault in the region listed above that is closest to where requests are expected to originate.
+
+* Model based on throughput:
+
+ * VC verification capacity is subject to [Azure Key Vault service limits](../../key-vault/general/service-limits.md).
+
+ * Each verification of a VC requires one Key Vault signature operation.
+
+ * The maximum signing performance of a Key Vault is about 2,000 signings per ~10 seconds, or roughly 12,000 signings per minute. Because each verification requires one signing operation, your solution can support up to about 12,000 VC validation requests per minute (see the worked estimate after this list).
+
+ * You cannot control throttling; however, we recommend you read [Azure Key Vault throttling guidance](../../key-vault/general/overview-throttling.md) so that you understand how throttling might impact performance.
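+
+The following is a minimal worked estimate of the validation throughput figures above, under the same assumption of one Key Vault signing operation per verification.
+
+```bash
+# Rough verification throughput estimate (figures taken from the bullets above)
+signings_per_10_seconds=2000
+signings_per_minute=$((signings_per_10_seconds * 6))   # ~12,000 signings per minute
+signings_per_verification=1
+echo "Estimated maximum VC validations per minute: $((signings_per_minute / signings_per_verification))"
+```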
+
+## Plan for reliability
+
+To best plan for high availability and disaster recovery, we suggest the following:
+
+* Azure AD Verifiable Credentials service is deployed in the West Europe, North Europe, West US 2, and West Central US Azure regions. Consider deploying your supporting web servers and supporting applications in one of those regions, specifically in the ones from which you expect most of your validation traffic to originate.
+
+* Review and incorporate best practices from [Azure Key Vault availability and redundancy](../../key-vault/general/disaster-recovery-guidance.md) as you design for your availability and redundancy goals.
+
+## Plan for security
+
+As you are designing for security, consider the following:
+
+* All relying parties (RPs) in a single tenant have the same trust boundary since they share the same DID.
+
+* Define a dedicated service principal for a website accessing the Key Vault.
+
+* Only the Azure AD Verifiable Credentials service and the website service principals should have permissions to use Key Vault to sign messages with the private key.
+
+* Do not assign any human identity administrative permissions to the Key Vault. For more information on Key Vault best practices, refer to [Azure Security Baseline for Key Vault](../../key-vault/general/security-baseline.md).
+
+* Review [Securing Azure environments with Azure Active Directory](https://azure.microsoft.com/resources/securing-azure-environments-with-azure-active-directory/) for best practices for managing the supporting services for your solution.
+
+* Mitigate spoofing risks by:
+
+ * Implementing DNS verification to help customers identify issuer branding.
+
+ * Using domains that are meaningful to end users.
+
+* Mitigate distributed denial of service (DDOS) and Key Vault resource throttling risks. Every VC presentation request generates Key Vault signing operations that accrue towards service limits. We recommend protecting traffic by incorporating alternative authentication or captcha before generating presentation requests.
+
+## Plan for operations
+
+As you plan for operations, we recommend that you capture each credential validation attempt as part of your auditing. Use that information for auditing and troubleshooting. Additionally, consider generating unique transaction identifiers (IDs) that customers and support engineers can refer to if needed.
+
+As part of your operational planning, consider monitoring the following:
+
+* For scalability:
+
+ * Monitor failed VC validation as a part of end-to-end security metrics of applications.
+
+ * Monitor end-to-end latency of credential verification.
+
+* For reliability and dependencies:
+
+ * Monitor underlying dependencies used by the verification solution.
+
+ * Follow [Azure Key Vault monitoring and alerting](../../key-vault/general/alert.md).
+
+* For security:
+
+ * Enable logging for Key Vault to track signing operations, as well as to monitor and alert on configuration changes. Refer to [How to enable Key Vault logging](../../key-vault/general/howto-logging.md) for more information.
+
+ * Archive logs in a security information and event management (SIEM) system, such as [Azure Sentinel](https://azure.microsoft.com/services/azure-sentinel/), for long-term retention.
+
+
+
+## Next steps
+
+Learn more about architecting VC solutions
+
+ * [Azure AD Verifiable Credentials overview](introduction-to-verifiable-credentials-architecture.md)
+
+ * [Plan your Azure AD Verifiable Credentials issuance solution](plan-issuance-solution.md)
+
+Implement Verifiable Credentials
+
+[Introduction to Azure Active Directory Verifiable Credentials](decentralized-identifier-overview.md)
+
+[Get started with Verifiable Credentials](get-started-verifiable-credentials.md)
+
+[FAQs](verifiable-credentials-faq.md)
+
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/configure-azure-cni.md
A minimum value for maximum pods per node is enforced to guarantee space for sys
* **Azure CLI**: Specify the `--max-pods` argument when you deploy a cluster with the [az aks create][az-aks-create] command. The maximum value is 250. * **Resource Manager template**: Specify the `maxPods` property in the [ManagedClusterAgentPoolProfile] object when you deploy a cluster with a Resource Manager template. The maximum value is 250.
-* **Azure portal**: You can't change the maximum number of pods per node when you deploy a cluster with the Azure portal. Azure CNI networking clusters are limited to 30 pods per node when you deploy using the Azure portal.
+* **Azure portal**: You can't change the maximum number of pods per node when you deploy a cluster with the Azure portal. Azure CNI networking clusters are limited to 110 pods per node when you deploy using the Azure portal.
### Configure maximum - existing clusters
Using dynamic allocation of IPs and enhanced subnet support in your cluster is s
First, create the virtual network with two subnets: ```azurecli-interactive
-$resourceGroup="myResourceGroup"
-$vnet="myVirtualNetwork"
+resourceGroup="myResourceGroup"
+vnet="myVirtualNetwork"
+location="westcentralus"
+
+# Create the resource group
+az group create --name $resourceGroup --location $location
# Create our two subnet network
-az network vnet create -g $rg --name $vnet --address-prefixes 10.0.0.0/8 -o none
-az network vnet subnet create -g $rg --vnet-name $vnet --name nodesubnet --address-prefixes 10.240.0.0/16 -o none
-az network vnet subnet create -g $rg --vnet-name $vnet --name podsubnet --address-prefixes 10.241.0.0/16 -o none
+az network vnet create -g $resourceGroup --location $location --name $vnet --address-prefixes 10.0.0.0/8 -o none
+az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name nodesubnet --address-prefixes 10.240.0.0/16 -o none
+az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name podsubnet --address-prefixes 10.241.0.0/16 -o none
``` Then, create the cluster, referencing the node subnet using `--vnet-subnet-id` and the pod subnet using `--pod-subnet-id`: ```azurecli-interactive
-$clusterName="myAKSCluster"
-$location="eastus"
-$subscription="aaaaaaa-aaaaa-aaaaaa-aaaa"
-
-az aks create -n $clusterName -g $resourceGroup -l $location --max-pods 250 --node-count 2 --network-plugin azure --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/nodesubnet --pod-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/podsubnet
+clusterName="myAKSCluster"
+subscription="aaaaaaa-aaaaa-aaaaaa-aaaa"
+
+az aks create -n $clusterName -g $resourceGroup -l $location \
+ --max-pods 250 \
+ --node-count 2 \
+ --network-plugin azure \
+ --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/nodesubnet \
+ --pod-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/podsubnet
``` #### Adding node pool
When adding node pool, reference the node subnet using `--vnet-subnet-id` and th
az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name node2subnet --address-prefixes 10.242.0.0/16 -o none az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name pod2subnet --address-prefixes 10.243.0.0/16 -o none
-az aks nodepool add --cluster-name $clusterName -g $resourceGroup -n newNodepool --max-pods 250 --node-count 2 --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/node2subnet --pod-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/pod2subnet --no-wait
+az aks nodepool add --cluster-name $clusterName -g $resourceGroup -n newNodepool \
+ --max-pods 250 \
+ --node-count 2 \
+ --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/node2subnet \
+ --pod-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/pod2subnet \
+ --no-wait
``` ## Frequently asked questions
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-basic.md
This article uses [Helm 3][helm] to install the NGINX ingress controller on a [s
This article also requires that you are running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
+
+## Import the images used by the Helm chart into your ACR
+
+This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images. Use `az acr import` to import those images into your ACR.
+
+```azurecli
+REGISTRY_NAME=<REGISTRY_NAME>
+CONTROLLER_REGISTRY=k8s.gcr.io
+CONTROLLER_IMAGE=ingress-nginx/controller
+CONTROLLER_TAG=v0.48.1
+PATCH_REGISTRY=docker.io
+PATCH_IMAGE=jettech/kube-webhook-certgen
+PATCH_TAG=v1.5.1
+DEFAULTBACKEND_REGISTRY=k8s.gcr.io
+DEFAULTBACKEND_IMAGE=defaultbackend-amd64
+DEFAULTBACKEND_TAG=1.5
+
+az acr import --name $REGISTRY_NAME --source $CONTROLLER_REGISTRY/$CONTROLLER_IMAGE:$CONTROLLER_TAG --image $CONTROLLER_IMAGE:$CONTROLLER_TAG
+az acr import --name $REGISTRY_NAME --source $PATCH_REGISTRY/$PATCH_IMAGE:$PATCH_TAG --image $PATCH_IMAGE:$PATCH_TAG
+az acr import --name $REGISTRY_NAME --source $DEFAULTBACKEND_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG
+```
+
+> [!NOTE]
+> In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure container registry][acr-helm].
+ ## Create an ingress controller To create the ingress controller, use Helm to install *nginx-ingress*. For added redundancy, two replicas of the NGINX ingress controllers are deployed with the `--set controller.replicaCount` parameter. To fully benefit from running replicas of the ingress controller, make sure there's more than one node in your AKS cluster.
The ingress controller also needs to be scheduled on a Linux node. Windows Serve
> [!TIP] > The following example creates a Kubernetes namespace for the ingress resources named *ingress-basic*. Specify a namespace for your own environment as needed.-
-> [!TIP]
+>
> If you would like to enable [client source IP preservation][client-source-ip] for requests to containers in your cluster, add `--set controller.service.externalTrafficPolicy=Local` to the Helm install command. The client source IP is stored in the request header under *X-Forwarded-For*. When using an ingress controller with client source IP preservation enabled, SSL pass-through will not work. ```console
kubectl create namespace ingress-basic
# Add the ingress-nginx repository helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+# Set variable for ACR location to use for pulling images
+ACR_URL=<REGISTRY_URL>
+ # Use Helm to deploy an NGINX ingress controller helm install nginx-ingress ingress-nginx/ingress-nginx \ --namespace ingress-basic \ --set controller.replicaCount=2 \ --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.image.registry=$ACR_URL \
+ --set controller.image.image=$CONTROLLER_IMAGE \
+ --set controller.image.tag=$CONTROLLER_TAG \
+ --set controller.image.digest="" \
+ --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.admissionWebhooks.patch.image.registry=$ACR_URL \
+ --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \
+ --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
- --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
+ --set defaultBackend.image.registry=$ACR_URL \
+ --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \
+ --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG
``` When the Kubernetes load balancer service is created for the NGINX ingress controller, a dynamic public IP address is assigned, as shown in the following example output:
You can also:
[helm]: https://helm.sh/ [helm-cli]: ./kubernetes-helm.md [nginx-ingress]: https://github.com/kubernetes/ingress-nginx
+[ingress-nginx-helm-chart]: https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
<!-- LINKS - internal --> [use-helm]: kubernetes-helm.md
You can also:
[aks-ingress-own-tls]: ingress-own-tls.md [client-source-ip]: concepts-network.md#ingress-controllers [aks-supported versions]: supported-kubernetes-versions.md
+[aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
+[acr-helm]: ../container-registry/container-registry-helm-repos.md
aks Ingress Internal Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-internal-ip.md
This article uses [Helm 3][helm] to install the NGINX ingress controller on a [s
This article also requires that you are running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
+
+## Import the images used by the Helm chart into your ACR
+
+This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images. Use `az acr import` to import those images into your ACR.
+
+```azurecli
+REGISTRY_NAME=<REGISTRY_NAME>
+CONTROLLER_REGISTRY=k8s.gcr.io
+CONTROLLER_IMAGE=ingress-nginx/controller
+CONTROLLER_TAG=v0.48.1
+PATCH_REGISTRY=docker.io
+PATCH_IMAGE=jettech/kube-webhook-certgen
+PATCH_TAG=v1.5.1
+DEFAULTBACKEND_REGISTRY=k8s.gcr.io
+DEFAULTBACKEND_IMAGE=defaultbackend-amd64
+DEFAULTBACKEND_TAG=1.5
+
+az acr import --name $REGISTRY_NAME --source $CONTROLLER_REGISTRY/$CONTROLLER_IMAGE:$CONTROLLER_TAG --image $CONTROLLER_IMAGE:$CONTROLLER_TAG
+az acr import --name $REGISTRY_NAME --source $PATCH_REGISTRY/$PATCH_IMAGE:$PATCH_TAG --image $PATCH_IMAGE:$PATCH_TAG
+az acr import --name $REGISTRY_NAME --source $DEFAULTBACKEND_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG
+```
+
+> [!NOTE]
+> In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure container registry][acr-helm].
+ ## Create an ingress controller By default, an NGINX ingress controller is created with a dynamic public IP address assignment. A common configuration requirement is to use an internal, private network and IP address. This approach allows you to restrict access to your services to internal users, with no external access.
kubectl create namespace ingress-basic
# Add the ingress-nginx repository helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+# Set variable for ACR location to use for pulling images
+ACR_URL=<REGISTRY_URL>
+ # Use Helm to deploy an NGINX ingress controller helm install nginx-ingress ingress-nginx/ingress-nginx \ --namespace ingress-basic \
- -f internal-ingress.yaml \
--set controller.replicaCount=2 \ --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.image.registry=$ACR_URL \
+ --set controller.image.image=$CONTROLLER_IMAGE \
+ --set controller.image.tag=$CONTROLLER_TAG \
+ --set controller.image.digest="" \
+ --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.admissionWebhooks.patch.image.registry=$ACR_URL \
+ --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \
+ --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
- --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
+ --set defaultBackend.image.registry=$ACR_URL \
+ --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \
+ --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG
``` When the Kubernetes load balancer service is created for the NGINX ingress controller, your internal IP address is assigned. To get the public IP address, use the `kubectl get service` command.
You can also:
[client-source-ip]: concepts-network.md#ingress-controllers [aks-configure-kubenet-networking]: configure-kubenet.md [aks-configure-advanced-networking]: configure-azure-cni.md
-[aks-supported versions]: supported-kubernetes-versions.md
+[aks-supported versions]: supported-kubernetes-versions.md
+[ingress-nginx-helm-chart]: https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
+[aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
+[acr-helm]: ../container-registry/container-registry-helm-repos.md
aks Ingress Own Tls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-own-tls.md
For more information on configuring and using Helm, see [Install applications wi
This article also requires that you are running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
+
+## Import the images used by the Helm chart into your ACR
+
+This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images. Use `az acr import` to import those images into your ACR.
+
+```azurecli
+REGISTRY_NAME=<REGISTRY_NAME>
+CONTROLLER_REGISTRY=k8s.gcr.io
+CONTROLLER_IMAGE=ingress-nginx/controller
+CONTROLLER_TAG=v0.48.1
+PATCH_REGISTRY=docker.io
+PATCH_IMAGE=jettech/kube-webhook-certgen
+PATCH_TAG=v1.5.1
+DEFAULTBACKEND_REGISTRY=k8s.gcr.io
+DEFAULTBACKEND_IMAGE=defaultbackend-amd64
+DEFAULTBACKEND_TAG=1.5
+
+az acr import --name $REGISTRY_NAME --source $CONTROLLER_REGISTRY/$CONTROLLER_IMAGE:$CONTROLLER_TAG --image $CONTROLLER_IMAGE:$CONTROLLER_TAG
+az acr import --name $REGISTRY_NAME --source $PATCH_REGISTRY/$PATCH_IMAGE:$PATCH_TAG --image $PATCH_IMAGE:$PATCH_TAG
+az acr import --name $REGISTRY_NAME --source $DEFAULTBACKEND_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG
+```
+
+> [!NOTE]
+> In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure container registry][acr-helm].
+ ## Create an ingress controller To create the ingress controller, use `Helm` to install *nginx-ingress*. For added redundancy, two replicas of the NGINX ingress controllers are deployed with the `--set controller.replicaCount` parameter. To fully benefit from running replicas of the ingress controller, make sure there's more than one node in your AKS cluster.
kubectl create namespace ingress-basic
# Add the ingress-nginx repository helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+# Set variable for ACR location to use for pulling images
+ACR_URL=<REGISTRY_URL>
+ # Use Helm to deploy an NGINX ingress controller helm install nginx-ingress ingress-nginx/ingress-nginx \ --namespace ingress-basic \ --set controller.replicaCount=2 \ --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.image.registry=$ACR_URL \
+ --set controller.image.image=$CONTROLLER_IMAGE \
+ --set controller.image.tag=$CONTROLLER_TAG \
+ --set controller.image.digest="" \
+ --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.admissionWebhooks.patch.image.registry=$ACR_URL \
+ --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \
+ --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
- --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
+ --set defaultBackend.image.registry=$ACR_URL \
+ --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \
+ --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG
``` During the installation, an Azure public IP address is created for the ingress controller. This public IP address is static for the life-span of the ingress controller. If you delete the ingress controller, the public IP address assignment is lost. If you then create an additional ingress controller, a new public IP address is assigned. If you wish to retain the use of the public IP address, you can instead [create an ingress controller with a static public IP address][aks-ingress-static-tls].
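+
+To see the public IP address that was assigned, you can list the services in the *ingress-basic* namespace; the `EXTERNAL-IP` column shows the address once provisioning completes. This is an optional check and assumes the namespace used above:
+
+```console
+kubectl get services --namespace ingress-basic
+```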
You can also:
[nginx-ingress]: https://github.com/kubernetes/ingress-nginx [helm]: https://helm.sh/ [helm-install]: https://docs.helm.sh/using_helm/#installing-helm
+[ingress-nginx-helm-chart]: https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
<!-- LINKS - internal --> [use-helm]: kubernetes-helm.md
You can also:
[aks-http-app-routing]: http-application-routing.md [aks-ingress-tls]: ingress-tls.md [client-source-ip]: concepts-network.md#ingress-controllers
+[aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
+[acr-helm]: ../container-registry/container-registry-helm-repos.md
aks Ingress Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-static-ip.md
For more information on configuring and using Helm, see [Install applications wi
This article also requires that you are running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
+
+## Import the images used by the Helm chart into your ACR
+
+This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images. Use `az acr import` to import those images into your ACR.
+
+```azurecli
+REGISTRY_NAME=<REGISTRY_NAME>
+CONTROLLER_REGISTRY=k8s.gcr.io
+CONTROLLER_IMAGE=ingress-nginx/controller
+CONTROLLER_TAG=v0.48.1
+PATCH_REGISTRY=docker.io
+PATCH_IMAGE=jettech/kube-webhook-certgen
+PATCH_TAG=v1.5.1
+DEFAULTBACKEND_REGISTRY=k8s.gcr.io
+DEFAULTBACKEND_IMAGE=defaultbackend-amd64
+DEFAULTBACKEND_TAG=1.5
+CERT_MANAGER_REGISTRY=quay.io
+CERT_MANAGER_TAG=v1.3.1
+CERT_MANAGER_IMAGE_CONTROLLER=jetstack/cert-manager-controller
+CERT_MANAGER_IMAGE_WEBHOOK=jetstack/cert-manager-webhook
+CERT_MANAGER_IMAGE_CAINJECTOR=jetstack/cert-manager-cainjector
+
+az acr import --name $REGISTRY_NAME --source $CONTROLLER_REGISTRY/$CONTROLLER_IMAGE:$CONTROLLER_TAG --image $CONTROLLER_IMAGE:$CONTROLLER_TAG
+az acr import --name $REGISTRY_NAME --source $PATCH_REGISTRY/$PATCH_IMAGE:$PATCH_TAG --image $PATCH_IMAGE:$PATCH_TAG
+az acr import --name $REGISTRY_NAME --source $DEFAULTBACKEND_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG
+az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_CONTROLLER:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_CONTROLLER:$CERT_MANAGER_TAG
+az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_WEBHOOK:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_WEBHOOK:$CERT_MANAGER_TAG
+az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_CAINJECTOR:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_CAINJECTOR:$CERT_MANAGER_TAG
+```
+
+> [!NOTE]
+> In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure container registry][acr-helm].
+ ## Create an ingress controller By default, an NGINX ingress controller is created with a new public IP address assignment. This public IP address is only static for the life-span of the ingress controller, and is lost if the controller is deleted and re-created. A common configuration requirement is to provide the NGINX ingress controller an existing static public IP address. The static public IP address remains if the ingress controller is deleted. This approach allows you to use existing DNS records and network configurations in a consistent manner throughout the lifecycle of your applications.
The ingress controller also needs to be scheduled on a Linux node. Windows Serve
Update the following script with the **IP address** of your ingress controller and a **unique name** that you would like to use for the FQDN prefix. > [!IMPORTANT]
-> You must update replace *STATIC_IP* and *DNS_LABEL* with your own IP address and unique name when running the command.
+> You must replace `<STATIC_IP>` and `<DNS_LABEL>` with your own IP address and unique name when running the command.
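+
+If you created the static public IP with the Azure CLI earlier in this article, one way to look up the address for `<STATIC_IP>` is shown below. The resource group and public IP names (*MC_myResourceGroup_myAKSCluster_eastus* and *myAKSPublicIP*) are examples; substitute your own values:
+
+```azurecli
+# Retrieve the address of an existing static public IP
+az network public-ip show \
+    --resource-group MC_myResourceGroup_myAKSCluster_eastus \
+    --name myAKSPublicIP \
+    --query ipAddress \
+    --output tsv
+```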
```console # Create a namespace for your ingress resources
kubectl create namespace ingress-basic
# Add the ingress-nginx repository helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+# Set variable for ACR location to use for pulling images
+ACR_URL=<REGISTRY_URL>
+STATIC_IP=<STATIC_IP>
+DNS_LABEL=<DNS_LABEL>
+ # Use Helm to deploy an NGINX ingress controller helm install nginx-ingress ingress-nginx/ingress-nginx \ --namespace ingress-basic \ --set controller.replicaCount=2 \ --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
- --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.image.registry=$ACR_URL \
+ --set controller.image.image=$CONTROLLER_IMAGE \
+ --set controller.image.tag=$CONTROLLER_TAG \
+ --set controller.image.digest="" \
--set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux \
- --set controller.service.loadBalancerIP="STATIC_IP" \
- --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"="DNS_LABEL"
+ --set controller.admissionWebhooks.patch.image.registry=$ACR_URL \
+ --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \
+ --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
+ --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set defaultBackend.image.registry=$ACR_URL \
+ --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \
+ --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG \
+ --set controller.service.loadBalancerIP=$STATIC_IP \
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL
``` When the Kubernetes load balancer service is created for the NGINX ingress controller, your static IP address is assigned, as shown in the following example output:
helm repo add jetstack https://charts.jetstack.io
helm repo update # Install the cert-manager Helm chart
-helm install \
- cert-manager \
+helm install cert-manager jetstack/cert-manager \
--namespace ingress-basic \
- --version v1.3.1 \
+ --version $CERT_MANAGER_TAG \
--set installCRDs=true \ --set nodeSelector."beta\.kubernetes\.io/os"=linux \
- jetstack/cert-manager
+ --set image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_CONTROLLER \
+ --set image.tag=$CERT_MANAGER_TAG \
+ --set webhook.image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_WEBHOOK \
+ --set webhook.image.tag=$CERT_MANAGER_TAG \
+ --set cainjector.image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_CAINJECTOR \
+ --set cainjector.image.tag=$CERT_MANAGER_TAG
``` For more information on cert-manager configuration, see the [cert-manager project][cert-manager].
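+
+Before moving on, you can optionally confirm that the cert-manager pods are running in the *ingress-basic* namespace used above:
+
+```console
+kubectl get pods --namespace ingress-basic
+```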
Before certificates can be issued, cert-manager requires an [Issuer][cert-manage
Create a cluster issuer, such as `cluster-issuer.yaml`, using the following example manifest. Update the email address with a valid address from your organization: ```yaml
-apiVersion: cert-manager.io/v1alpha2
+apiVersion: cert-manager.io/v1
kind: ClusterIssuer metadata: name: letsencrypt-staging
The output should be similar to this example:
ingress.extensions/hello-world-ingress created ```
-## Create a certificate object
+## Verify certificate object
Next, a certificate resource must be created. The certificate resource defines the desired X.509 certificate. For more information, see [cert-manager certificates][cert-manager-certificates].
Type Reason Age From Message
Normal CertIssued 10m cert-manager Certificate issued successfully ```
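+
+You can also check the certificate status directly with `kubectl`. The commands below assume the certificate resource is named *tls-secret*, matching the secret name referenced by the ingress in this article:
+
+```console
+kubectl get certificate tls-secret --namespace ingress-basic
+kubectl describe certificate tls-secret --namespace ingress-basic
+```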
-If you need to create an additional certificate resource, you can do so with the following example manifest. Update the *dnsNames* and *domains* to the DNS name you created in a previous step. If you use an internal-only ingress controller, specify the internal DNS name for your service.
-
-```yaml
-apiVersion: cert-manager.io/v1alpha2
-kind: Certificate
-metadata:
- name: tls-secret
- namespace: ingress-basic
-spec:
- secretName: tls-secret
- dnsNames:
- - demo-aks-ingress.eastus.cloudapp.azure.com
- acme:
- config:
- - http01:
- ingressClass: nginx
- domains:
- - demo-aks-ingress.eastus.cloudapp.azure.com
- issuerRef:
- name: letsencrypt-staging
- kind: ClusterIssuer
-```
-
-To create the certificate resource, use the `kubectl apply` command.
-
-```
-$ kubectl apply -f certificates.yaml
-
-certificate.cert-manager.io/tls-secret created
-```
- ## Test the ingress configuration Open a web browser to the FQDN of your Kubernetes ingress controller, such as *`https://demo-aks-ingress.eastus.cloudapp.azure.com`*.
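+
+If you prefer the command line, you can also test with `curl`, substituting your own FQDN for the example address. The `-k` flag is only needed while the certificate comes from the Let's Encrypt staging issuer, which isn't publicly trusted:
+
+```console
+curl -v -k https://demo-aks-ingress.eastus.cloudapp.azure.com
+```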
You can also:
[helm]: https://helm.sh/ [helm-install]: https://docs.helm.sh/using_helm/#installing-helm [ingress-shim]: https://docs.cert-manager.io/en/latest/tasks/issuing-certificates/ingress-shim.html
+[ingress-nginx-helm-chart]: https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
<!-- LINKS - internal --> [use-helm]: kubernetes-helm.md
You can also:
[client-source-ip]: concepts-network.md#ingress-controllers [install-azure-cli]: /cli/azure/install-azure-cli [aks-static-ip]: static-ip.md
-[aks-supported versions]: supported-kubernetes-versions.md
+[aks-supported versions]: supported-kubernetes-versions.md
+[aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
+[acr-helm]: ../container-registry/container-registry-helm-repos.md
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-tls.md
For more information on configuring and using Helm, see [Install applications wi
This article also requires that you are running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
+
+## Import the images used by the Helm chart into your ACR
+
+This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images. Use `az acr import` to import those images into your ACR.
+
+```azurecli
+REGISTRY_NAME=<REGISTRY_NAME>
+CONTROLLER_REGISTRY=k8s.gcr.io
+CONTROLLER_IMAGE=ingress-nginx/controller
+CONTROLLER_TAG=v0.48.1
+PATCH_REGISTRY=docker.io
+PATCH_IMAGE=jettech/kube-webhook-certgen
+PATCH_TAG=v1.5.1
+DEFAULTBACKEND_REGISTRY=k8s.gcr.io
+DEFAULTBACKEND_IMAGE=defaultbackend-amd64
+DEFAULTBACKEND_TAG=1.5
+CERT_MANAGER_REGISTRY=quay.io
+CERT_MANAGER_TAG=v1.3.1
+CERT_MANAGER_IMAGE_CONTROLLER=jetstack/cert-manager-controller
+CERT_MANAGER_IMAGE_WEBHOOK=jetstack/cert-manager-webhook
+CERT_MANAGER_IMAGE_CAINJECTOR=jetstack/cert-manager-cainjector
+
+az acr import --name $REGISTRY_NAME --source $CONTROLLER_REGISTRY/$CONTROLLER_IMAGE:$CONTROLLER_TAG --image $CONTROLLER_IMAGE:$CONTROLLER_TAG
+az acr import --name $REGISTRY_NAME --source $PATCH_REGISTRY/$PATCH_IMAGE:$PATCH_TAG --image $PATCH_IMAGE:$PATCH_TAG
+az acr import --name $REGISTRY_NAME --source $DEFAULTBACKEND_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG
+az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_CONTROLLER:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_CONTROLLER:$CERT_MANAGER_TAG
+az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_WEBHOOK:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_WEBHOOK:$CERT_MANAGER_TAG
+az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_CAINJECTOR:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_CAINJECTOR:$CERT_MANAGER_TAG
+```
+
+> [!NOTE]
+> In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure container registry][acr-helm].
+ ## Create an ingress controller To create the ingress controller, use the `helm` command to install *nginx-ingress*. For added redundancy, two replicas of the NGINX ingress controllers are deployed with the `--set controller.replicaCount` parameter. To fully benefit from running replicas of the ingress controller, make sure there's more than one node in your AKS cluster.
kubectl create namespace ingress-basic
# Add the ingress-nginx repository helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+# Set variable for ACR location to use for pulling images
+ACR_URL=<REGISTRY_URL>
+ # Use Helm to deploy an NGINX ingress controller helm install nginx-ingress ingress-nginx/ingress-nginx \ --namespace ingress-basic \ --set controller.replicaCount=2 \ --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.image.registry=$ACR_URL \
+ --set controller.image.image=$CONTROLLER_IMAGE \
+ --set controller.image.tag=$CONTROLLER_TAG \
+ --set controller.image.digest="" \
+ --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.admissionWebhooks.patch.image.registry=$ACR_URL \
+ --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \
+ --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
- --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
+ --set defaultBackend.image.registry=$ACR_URL \
+ --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \
+ --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG
``` During the installation, an Azure public IP address is created for the ingress controller. This public IP address is static for the life-span of the ingress controller. If you delete the ingress controller, the public IP address assignment is lost. If you then create an additional ingress controller, a new public IP address is assigned. If you wish to retain the use of the public IP address, you can instead [create an ingress controller with a static public IP address][aks-ingress-static-tls].
helm repo update
# Install the cert-manager Helm chart helm install cert-manager jetstack/cert-manager \ --namespace ingress-basic \
+ --version $CERT_MANAGER_TAG \
--set installCRDs=true \
- --set nodeSelector."kubernetes\.io/os"=linux \
- --set webhook.nodeSelector."kubernetes\.io/os"=linux \
- --set cainjector.nodeSelector."kubernetes\.io/os"=linux
+ --set nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_CONTROLLER \
+ --set image.tag=$CERT_MANAGER_TAG \
+ --set webhook.image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_WEBHOOK \
+ --set webhook.image.tag=$CERT_MANAGER_TAG \
+ --set cainjector.image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_CAINJECTOR \
+ --set cainjector.image.tag=$CERT_MANAGER_TAG
``` For more information on cert-manager configuration, see the [cert-manager project][cert-manager].
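+
+Because the chart values above point each image at your ACR, you can verify that the running pods pull from your registry rather than the public registries. A quick check with `kubectl`, assuming the *ingress-basic* namespace used above:
+
+```console
+kubectl get pods --namespace ingress-basic -o custom-columns='NAME:.metadata.name,IMAGES:.spec.containers[*].image'
+```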
You can also:
[lets-encrypt]: https://letsencrypt.org/ [nginx-ingress]: https://github.com/kubernetes/ingress-nginx [helm-install]: https://docs.helm.sh/using_helm/#installing-helm
+[ingress-nginx-helm-chart]: https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
<!-- LINKS - internal --> [use-helm]: kubernetes-helm.md
You can also:
[aks-quickstart-portal]: kubernetes-walkthrough-portal.md [client-source-ip]: concepts-network.md#ingress-controllers [install-azure-cli]: /cli/azure/install-azure-cli
-[aks-supported versions]: supported-kubernetes-versions.md
+[aks-supported versions]: supported-kubernetes-versions.md
+[aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
+[acr-helm]: ../container-registry/container-registry-helm-repos.md
aks Kubernetes Helm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-helm.md
To install charts with Helm, use the [helm install][helm-install-command] comman
```console helm install my-nginx-ingress ingress-nginx/ingress-nginx \ --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
- --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux
+ --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.image.registry=mcr.microsoft.com \
+ --set defaultBackend.image.registry=mcr.microsoft.com \
+ --set controller.admissionWebhooks.patch.image.registry=mcr.microsoft.com
``` The following condensed example output shows the deployment status of the Kubernetes resources created by the Helm chart: ```console
-$ helm install my-nginx-ingress ingress-nginx/ingress-nginx \
-> --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
-> --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux
- NAME: my-nginx-ingress LAST DEPLOYED: Fri Nov 22 10:08:06 2019 NAMESPACE: default
For more information about managing Kubernetes application deployments with Helm
<!-- LINKS - internal --> [aks-quickstart-cli]: kubernetes-walkthrough.md [aks-quickstart-portal]: kubernetes-walkthrough-portal.md
-[taints]: operator-best-practices-advanced-scheduler.md
+[taints]: operator-best-practices-advanced-scheduler.md
aks Quickstart Helm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/quickstart-helm.md
description: Use Helm with AKS and Azure Container Registry to package and run a
Previously updated : 03/15/2021 Last updated : 07/15/2021
To connect a Kubernetes cluster locally, use the Kubernetes command-line client,
## Download the sample application
-This quickstart uses [an example Node.js application][example-nodejs]. Clone the application from GitHub and navigate to the `dev-spaces/samples/nodejs/getting-started/webfrontend` directory.
+This quickstart uses the [Azure Vote application][azure-vote-app]. Clone the application from GitHub and navigate to the `azure-vote` directory.
```console
-git clone https://github.com/Azure/dev-spaces
-cd dev-spaces/samples/nodejs/getting-started/webfrontend
-```
-
-## Create a Dockerfile
-
-Create a new *Dockerfile* file using the following commands:
-
-```dockerfile
-FROM node:latest
-
-WORKDIR /webfrontend
-
-COPY package.json ./
-
-RUN npm install
-
-COPY . .
-
-EXPOSE 80
-CMD ["node","server.js"]
+git clone https://github.com/Azure-Samples/azure-voting-app-redis.git
+cd azure-voting-app-redis/azure-vote/
``` ## Build and push the sample application to the ACR
CMD ["node","server.js"]
Using the sample application's Dockerfile, run the [az acr build][az-acr-build] command to build and push an image to the registry. The `.` at the end of the command sets the location of the Dockerfile (in this case, the current directory). ```azurecli
-az acr build --image webfrontend:v1 \
+az acr build --image azure-vote-front:v1 \
--registry MyHelmACR \ --file Dockerfile . ```
+> [!NOTE]
+> In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure container registry][acr-helm].
+ ## Create your Helm chart Generate your Helm chart using the `helm create` command. ```console
-helm create webfrontend
+helm create azure-vote-front
+```
+
+Update *azure-vote-front/Chart.yaml* to add a dependency for the *redis* chart from the `https://charts.bitnami.com/bitnami` chart repository and update `appVersion` to `v1`. For example:
+
+```yml
+apiVersion: v2
+name: azure-vote-front
+description: A Helm chart for Kubernetes
+
+dependencies:
+ - name: redis
+ version: 14.7.1
+ repository: https://charts.bitnami.com/bitnami
+
+...
+# This is the version number of the application being deployed. This version number should be
+# incremented each time you make changes to the application.
+appVersion: v1
+```
+
+Update your Helm chart dependencies using `helm dependency update`:
+
+```console
+helm dependency update azure-vote-front
```
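+
+You can confirm that the *redis* dependency resolved correctly with `helm dependency list`:
+
+```console
+helm dependency list azure-vote-front
+```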
-Update *webfrontend/values.yaml*:
-* Replace the loginServer of your registry that you noted in an earlier step, such as *myhelmacr.azurecr.io*.
-* Change `image.repository` to `<loginServer>/webfrontend`
-* Change `service.type` to `LoadBalancer`
+Update *azure-vote-front/values.yaml*:
+* Add a *redis* section to set the image details, container port, and deployment name.
+* Add a *backendName* for connecting the frontend portion to the *redis* deployment.
+* Change *image.repository* to `<loginServer>/azure-vote-front`.
+* Change *image.tag* to `v1`.
+* Change *service.type* to *LoadBalancer*.
For example: ```yml
-# Default values for webfrontend.
+# Default values for azure-vote-front.
# This is a YAML-formatted file. # Declare variables to be passed into your templates. replicaCount: 1
+backendName: azure-vote-backend-master
+redis:
+ image:
+ registry: mcr.microsoft.com
+ repository: oss/bitnami/redis
+ tag: 6.0.8
+ fullnameOverride: azure-vote-backend
+ auth:
+ enabled: false
image:
- repository: myhelmacr.azurecr.io/webfrontend
+ repository: myhelmacr.azurecr.io/azure-vote-front
pullPolicy: IfNotPresent
+ tag: "v1"
... service: type: LoadBalancer
service:
... ```
-Update `appVersion` to `v1` in *webfrontend/Chart.yaml*. For example
+Add an `env` section to *azure-vote-front/templates/deployment.yaml* for passing the name of the *redis* deployment.
```yml
-apiVersion: v2
-name: webfrontend
...
-# This is the version number of the application being deployed. This version number should be
-# incremented each time you make changes to the application.
-appVersion: v1
+ containers:
+ - name: {{ .Chart.Name }}
+ securityContext:
+ {{- toYaml .Values.securityContext | nindent 12 }}
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ env:
+ - name: REDIS
+ value: {{ .Values.backendName }}
+...
``` ## Run your Helm chart
appVersion: v1
Install your application using your Helm chart using the `helm install` command. ```console
-helm install webfrontend webfrontend/
+helm install azure-vote-front azure-vote-front/
``` It takes a few minutes for the service to return a public IP address. Monitor progress using the `kubectl get service` command with the `--watch` argument. ```console
-$ kubectl get service --watch
-
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-webfrontend LoadBalancer 10.0.141.72 <pending> 80:32150/TCP 2m
+$ kubectl get service azure-vote-front --watch
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+azure-vote-front LoadBalancer 10.0.18.228 <pending> 80:32021/TCP 6s
...
-webfrontend LoadBalancer 10.0.141.72 <EXTERNAL-IP> 80:32150/TCP 7m
+azure-vote-front LoadBalancer 10.0.18.228 52.188.140.81 80:32021/TCP 2m6s
``` Navigate to your application's load balancer in a browser using the `<EXTERNAL-IP>` to see the sample application.
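+
+You can also confirm the release and its pods from the command line. This is an optional check and assumes the default namespace used by this quickstart:
+
+```console
+helm list
+kubectl get pods
+```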
For more information about using Helm, see the Helm documentation.
[az-group-delete]: /cli/azure/group#az_group_delete [az aks get-credentials]: /cli/azure/aks#az_aks_get_credentials [az aks install-cli]: /cli/azure/aks#az_aks_install_cli
-[example-nodejs]: https://github.com/Azure/dev-spaces/tree/master/samples/nodejs/getting-started/webfrontend
+[azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git
[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/ [helm]: https://helm.sh/ [helm-documentation]: https://helm.sh/docs/ [helm-existing]: kubernetes-helm.md [helm-install]: https://helm.sh/docs/intro/install/ [sp-delete]: kubernetes-service-principal.md#additional-considerations
+[acr-helm]: ../container-registry/container-registry-helm-repos.md
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-azure-ad-pod-identity.md
export IDENTITY_RESOURCE_ID="$(az identity show -g ${IDENTITY_RESOURCE_GROUP} -n
## Assign permissions for the managed identity
-The *IDENTITY_CLIENT_ID* managed identity must have Managed Identity Operator permissions in the resource group that contains the virtual machine scale set of your AKS cluster.
+To run the demo, the *IDENTITY_CLIENT_ID* managed identity must have Virtual Machine Contributor permissions in the resource group that contains the virtual machine scale set of your AKS cluster.
```azurecli-interactive NODE_GROUP=$(az aks show -g myResourceGroup -n myAKSCluster --query nodeResourceGroup -o tsv) NODES_RESOURCE_ID=$(az group show -n $NODE_GROUP -o tsv --query "id")
-az role assignment create --role "Managed Identity Operator" --assignee "$IDENTITY_CLIENT_ID" --scope $NODES_RESOURCE_ID
+az role assignment create --role "Virtual Machine Contributor" --assignee "$IDENTITY_CLIENT_ID" --scope $NODES_RESOURCE_ID
``` ## Create a pod identity
az role assignment create --role "Managed Identity Operator" --assignee "$IDENTI
Create a pod identity for the cluster using `az aks pod-identity add`. > [!IMPORTANT]
-> You must have the appropriate permissions, such as `Owner`, on your subscription to create the identity and role binding.
+> You must have the relevant permissions (for example, Owner) on your subscription to create the identity and to assign the role binding to the cluster identity.
+>
+> The cluster identity must have Managed Identity Operator permissions for the identity to be assigned.
```azurecli-interactive export POD_IDENTITY_NAME="my-pod-identity"
az aks pod-identity add --resource-group myResourceGroup --cluster-name myAKSClu
> [!NOTE] > When you enable pod-managed identity on your AKS cluster, an AzurePodIdentityException named *aks-addon-exception* is added to the *kube-system* namespace. An AzurePodIdentityException allows pods with certain labels to access the Azure Instance Metadata Service (IMDS) endpoint without being intercepted by the node-managed identity (NMI) server. The *aks-addon-exception* allows AKS first-party addons, such as AAD pod-managed identity, to operate without having to manually configure an AzurePodIdentityException. Optionally, you can add, remove, and update an AzurePodIdentityException using `az aks pod-identity exception add`, `az aks pod-identity exception delete`, `az aks pod-identity exception update`, or `kubectl`.
+> [!NOTE]
+> When you assign the pod identity by using `pod-identity add`, the Azure CLI attempts to grant the Managed Identity Operator role over the pod identity (*IDENTITY_RESOURCE_ID*) to the cluster identity.
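+
+If you want to confirm the role assignment that `pod-identity add` created, you can list the assignments scoped to the identity. This optional check reuses the *IDENTITY_RESOURCE_ID* value exported earlier:
+
+```azurecli-interactive
+az role assignment list --scope $IDENTITY_RESOURCE_ID --output table
+```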
+ ## Run a sample application For a pod to use AAD pod-managed identity, the pod needs an *aadpodidbinding* label with a value that matches a selector from an *AzureIdentityBinding*. To run a sample application using AAD pod-managed identity, create a `demo.yaml` file with the following contents. Replace *POD_IDENTITY_NAME*, *IDENTITY_CLIENT_ID*, and *IDENTITY_RESOURCE_GROUP* with the values from the previous steps. Replace *SUBSCRIPTION_ID* with your subscription ID.
kind: Pod
metadata: name: demo labels:
- aadpodidbinding: POD_IDENTITY_NAME
+ aadpodidbinding: $POD_IDENTITY_NAME
spec: containers: - name: demo image: mcr.microsoft.com/oss/azure/aad-pod-identity/demo:v1.6.3 args:
- - --subscriptionid=SUBSCRIPTION_ID
- - --clientid=IDENTITY_CLIENT_ID
- - --resourcegroup=IDENTITY_RESOURCE_GROUP
+ - --subscriptionid=$SUBSCRIPTION_ID
+ - --clientid=$IDENTITY_CLIENT_ID
+ - --resourcegroup=$IDENTITY_RESOURCE_GROUP
env: - name: MY_POD_NAME valueFrom:
spec:
- name: demo image: mcr.microsoft.com/oss/azure/aad-pod-identity/demo:v1.6.3 args:
- - --subscriptionid=SUBSCRIPTION_ID
- - --clientid=IDENTITY_CLIENT_ID
- - --resourcegroup=IDENTITY_RESOURCE_GROUP
+ - --subscriptionid=$SUBSCRIPTION_ID
+ - --clientid=$IDENTITY_CLIENT_ID
+ - --resourcegroup=$IDENTITY_RESOURCE_GROUP
env: - name: MY_POD_NAME valueFrom:
api-management Api Management Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-key-concepts.md
Policies are a powerful capability of API Management that allow the Azure portal
Policy expressions can be used as attribute values or text values in any of the API Management policies, unless the policy specifies otherwise. Some policies such as the [Control flow](./api-management-advanced-policies.md#choose) and [Set variable](./api-management-advanced-policies.md#set-variable) policies are based on policy expressions. For more information, see [Advanced policies](./api-management-advanced-policies.md#AdvancedPolicies) and [Policy expressions](./api-management-policy-expressions.md).
-For a complete list of API Management policies, see [Policy reference][Policy reference]. For more information on using and configuring policies, see [API Management policies][API Management policies]. For a tutorial on creating a product with rate limit and quota policies, see [How create and configure advanced product settings][How create and configure advanced product settings].
+For a complete list of API Management policies, see [Policy reference][Policy reference]. For more information on using and configuring policies, see [API Management policies][API Management policies]. For a tutorial on creating a product with rate limit and quota policies, see [How to create and configure advanced product settings][How to create and configure advanced product settings].
## <a name="developer-portal"> </a> Developer portal
Complete the following quickstart and start using Azure API Management:
[How to create and publish a product]: api-management-howto-add-products.md [How to create and use groups]: api-management-howto-create-groups.md [How to associate groups with developers]: api-management-howto-create-groups.md#associate-group-developer
-[How create and configure advanced product settings]: transform-api.md
+[How to create and configure advanced product settings]: transform-api.md
[How to create or invite developers]: api-management-howto-create-or-invite-developers.md [Policy reference]: ./api-management-policies.md [API Management policies]: api-management-howto-policies.md
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/overview.md
Reserved Instance pricing for Isolated v2 will be available after GA.
The ASEv3 is available in the following regions.
-|Normal ASEv3 regions| Dedicated hosts regions| AZ ASEv3 regions|
-|--|-||
-|Australia East| Australia East| Australia East|
-|Australia Southeast| Australia Southeast |Canada Central|
-|Brazil South |Brazil South |Central US|
-|Canada Central| Canada Central| East US|
-|Central India |Central India| East US 2|
-|Central US |Central US |France Central|
-|East Asia |East Asia| Germany West Central|
-|East US |East US | North Europe|
-|East US 2| East US 2| South Central US|
-|France Central |France Central | Southeast Asia|
-|Germany West Central |Germany West Central| UK South|
-|Korea Central |Korea Central | West Europe|
-|North Europe |North Europe| West US 2|
-|Norway East |Norway East| |
-|South Africa North| South Africa North| |
-|South Central US |South Central US | |
-|Southeast Asia| Southeast Asia | |
-|Switzerland North |Switzerland North| |
-|UK South| UK West| |
-|UK West| West Central US | |
-|West Central US |West Europe| |
-|West Europe |West US | |
-|West US |West US 2| |
-|West US 2 | |
+|Normal and dedicated host ASEv3 regions| AZ ASEv3 regions|
+|||
+|Australia East| Australia East|
+|Australia Southeast|Canada Central|
+|Brazil South |Central US|
+|Canada Central| East US|
+|Central India | East US 2|
+|Central US |France Central|
+|East Asia | Germany West Central|
+|East US | North Europe|
+|East US 2| South Central US|
+|France Central | Southeast Asia|
+|Germany West Central | UK South|
+|Korea Central | West Europe|
+|North Europe |West US 2|
+|Norway East | |
+|South Africa North| |
+|South Central US | |
+|Southeast Asia| |
+|Switzerland North | |
+|UK South| |
+|UK West| |
+|West Central US | |
+|West Europe | |
+|West US | |
+|West US 2| |
automation Automation Edit Textual Runbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-edit-textual-runbook.md
Each runbook in Azure Automation has two versions, Draft and Published. You edit
This article provides detailed steps for performing different functions with this editor. These are not applicable to [graphical runbooks](automation-runbook-types.md#graphical-runbooks). To work with these runbooks, see [Graphical authoring in Azure Automation](automation-graphical-authoring-intro.md).
+> [!IMPORTANT]
+> Do not include the keyword "AzureRm" in any script designed to be executed with the Az module. Including the keyword, even in a comment, may cause the AzureRm module to load and then conflict with the Az module.
+ ## Edit a runbook with the Azure portal 1. In the Azure portal, select your Automation account.
automation Automation Runbook Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-runbook-gallery.md
PowerShell modules contain cmdlets that you can use in your runbooks. Existing m
You can also find modules to import in the Azure portal. They're listed for your Automation Account in the **Modules gallery** under **Shared resources**.
+> [!IMPORTANT]
+> Do not include the keyword "AzureRm" in any script designed to be executed with the Az module. Including the keyword, even in a comment, may cause the AzureRm module to load and then conflict with the Az module.
+ ## Common scenarios available in the PowerShell Gallery The list below contains a few runbooks that support common scenarios. For a full list of runbooks created by the Azure Automation team, see [AzureAutomationTeam profile](https://www.powershellgallery.com/profiles/AzureAutomationTeam).
automation Automation Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-webhooks.md
Title: Start an Azure Automation runbook from a webhook
description: This article tells how to use a webhook to start a runbook in Azure Automation from an HTTP call. Previously updated : 03/18/2021 Last updated : 07/21/2021 + # Start a runbook from a webhook A webhook allows an external service to start a particular runbook in Azure Automation through a single HTTP request. External services include Azure DevOps Services, GitHub, Azure Monitor logs, and custom applications. Such a service can use a webhook to start a runbook without implementing the full Azure Automation API. You can compare webhooks to other methods of starting a runbook in [Starting a runbook in Azure Automation](./start-runbooks.md).
The following table describes the properties that you must configure for a webho
|: |: | | Name |Name of the webhook. You can provide any name you want, since it isn't exposed to the client. It's only used for you to identify the runbook in Azure Automation. As a best practice, you should give the webhook a name related to the client that uses it. | | URL |URL of the webhook. This is the unique address that a client calls with an HTTP POST to start the runbook linked to the webhook. It's automatically generated when you create the webhook. You can't specify a custom URL. <br> <br> The URL contains a security token that allows a third-party system to invoke the runbook with no further authentication. For this reason, you should treat the URL like a password. For security reasons, you can only view the URL in the Azure portal when creating the webhook. Note the URL in a secure location for future use. |
-| Expiration date | Expiration date of the webhook, after which it can no longer be used. You can modify the expiration date after the webhook is created, as long as the webhook has not expired. |
+| Expiration date | Expiration date of the webhook, after which it can no longer be used. You can modify the expiration date after the webhook is created, as long as the webhook hasn't expired. |
| Enabled | Setting indicating if the webhook is enabled by default when it's created. If you set this property to Disabled, no client can use the webhook. You can set this property when you create the webhook or any other time after its creation. | ## Parameters used when the webhook starts a runbook
The `WebhookData` parameter has the following properties:
| Property | Description | |: |: |
-| `WebhookName` | Name of the webhook. |
-| `RequestHeader` | Hashtable containing the headers of the incoming POST request. |
-| `RequestBody` | Body of the incoming POST request. This body retains any data formatting, such as string, JSON, XML, or form-encoded. The runbook must be written to work with the data format that is expected. |
+| WebhookName | Name of the webhook. |
+| RequestHeader | Hashtable containing the headers of the incoming POST request. |
+| RequestBody | Body of the incoming POST request. This body keeps any data formatting, such as string, JSON, XML, or form-encoded. The runbook must be written to work with the data format that is expected. |
There's no configuration of the webhook required to support the `WebhookData` parameter, and the runbook isn't required to accept it. If the runbook doesn't define the parameter, any details of the request sent from the client are ignored. > [!NOTE] > When calling a webhook, the client should always store any parameter values in case the call fails. If there is a network outage or connection issue, the application can't retrieve failed webhook calls.
-If you specify a value for `WebhookData` at webhook creation, it is overridden when the webhook starts the runbook with the data from the client POST request. This happens even if the application does not include any data in the request body.
+If you specify a value for `WebhookData` at webhook creation, it's overridden when the webhook starts the runbook with the data from the client POST request. This happens even if the application doesn't include any data in the request body.
If you start a runbook that defines `WebhookData` using a mechanism other than a webhook, you can provide a value for `WebhookData` that the runbook recognizes. This value should be an object with the same [properties](#webhook-properties) as the `WebhookData` parameter so that the runbook can work with it just as it works with actual `WebhookData` objects passed by a webhook.
-For example, if you are starting the following runbook from the Azure portal and want to pass some sample webhook data for testing, you must pass the data in JSON in the user interface.
+For example, if you're starting the following runbook from the Azure portal and want to pass some sample webhook data for testing, you must pass the data in JSON in the user interface.
![WebhookData parameter from UI](media/automation-webhooks/WebhookData-parameter-from-UI.png)
Now we pass the following JSON object in the UI for the `WebhookData` parameter.
## Webhook security
-The security of a webhook relies on the privacy of its URL, which contains a security token that allows the webhook to be invoked. Azure Automation does not perform any authentication on a request as long as it is made to the correct URL. For this reason, your clients should not use webhooks for runbooks that perform highly sensitive operations without using an alternate means of validating the request.
+The security of a webhook relies on the privacy of its URL, which contains a security token that allows the webhook to be invoked. Azure Automation doesn't perform any authentication on a request as long as it's made to the correct URL. For this reason, your clients shouldn't use webhooks for runbooks that perform highly sensitive operations without using an alternate means of validating the request.
Consider the following strategies:
-* You can include logic within a runbook to determine if it is called by a webhook. Have the runbook check the `WebhookName` property of the `WebhookData` parameter. The runbook can perform further validation by looking for particular information in the `RequestHeader` and `RequestBody` properties.
+* You can include logic within a runbook to determine if it's called by a webhook. Have the runbook check the `WebhookName` property of the `WebhookData` parameter. The runbook can perform further validation by looking for particular information in the `RequestHeader` and `RequestBody` properties.
* Have the runbook perform some validation of an external condition when it receives a webhook request. For example, consider a runbook that is called by GitHub any time there's a new commit to a GitHub repository. The runbook might connect to GitHub to validate that a new commit has occurred before continuing.
-* Azure Automation supports Azure virtual network service tags, specifically [GuestAndHybridManagement](../virtual-network/service-tags-overview.md). You can use service tags to define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md) and trigger webhooks from within your virtual network. Service tags can be used in place of specific IP addresses when you create security rules. By specifying the service tag name **GuestAndHybridManagement** in the appropriate source or destination field of a rule, you can allow or deny the traffic for the Automation service. This service tag does not support allowing more granular control by restricting IP ranges to a specific region.
+* Azure Automation supports Azure virtual network service tags, specifically [GuestAndHybridManagement](../virtual-network/service-tags-overview.md). You can use service tags to define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md) and trigger webhooks from within your virtual network. Service tags can be used in place of specific IP addresses when you create security rules. By specifying the service tag name **GuestAndHybridManagement** in the appropriate source or destination field of a rule, you can allow or deny the traffic for the Automation service. This service tag doesn't support allowing more granular control by restricting IP ranges to a specific region.
## Create a webhook
-Use the following procedure to create a new webhook linked to a runbook in the Azure portal.
+A webhook requires a published runbook. This walkthrough uses a modified version of the runbook created in [Create an Azure Automation runbook](automation-quickstart-create-runbook.md). To follow along, edit your PowerShell runbook with the following code:
+
+```powershell
+param
+(
+ [Parameter(Mandatory=$false)]
+ [object] $WebhookData
+)
+
+if ($WebhookData.RequestBody) {
+ $names = (ConvertFrom-Json -InputObject $WebhookData.RequestBody)
+
+ foreach ($x in $names)
+ {
+ $name = $x.Name
+ Write-Output "Hello $name"
+ }
+}
+else {
+ Write-Output "Hello World!"
+}
+```
+
+Then save and publish the revised runbook. The examples below show how to create a webhook using the Azure portal, PowerShell, and REST.
-1. From the Runbooks page in the Azure portal, click the runbook that the webhook starts to view the runbook details. Ensure that the runbook **Status** field is set to **Published**.
-2. Click **Webhook** at the top of the page to open the Add Webhook page.
-3. Click **Create new webhook** to open the Create Webhook page.
-4. Fill in the **Name** and **Expiration Date** fields for the webhook and specify if it should be enabled. See [Webhook properties](#webhook-properties) for more information about these properties.
-5. Click the copy icon and press Ctrl+C to copy the URL of the webhook. Then record it in a safe place.
+### From the portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the Azure portal, navigate to your Automation account.
+
+1. Under **Process Automation**, select **Runbooks** to open the **Runbooks** page.
+
+1. Select your runbook from the list to open the Runbook **Overview** page.
+
+1. Select **Add webhook** to open the **Add Webhook** page.
+
+ :::image type="content" source="media/automation-webhooks/add-webhook-icon.png" alt-text="Runbook overview page with Add webhook highlighted.":::
+
+1. On the **Add Webhook** page, select **Create new webhook**.
+
+ :::image type="content" source="media/automation-webhooks/add-webhook-page-create.png" alt-text="Add webhook page with create highlighted.":::
+
+1. Enter a **Name** for the webhook. The expiration date in the **Expires** field defaults to one year from the current date.
+
+1. Click the copy icon or press <kbd>Ctrl+C</kbd> to copy the URL of the webhook. Then save the URL to a secure location.
+
+ :::image type="content" source="media/automation-webhooks/create-new-webhook.png" alt-text="Create webhook page with URL highlighted.":::
> [!IMPORTANT] > Once you create the webhook, you cannot retrieve the URL again. Make sure you copy and record it as above.
- ![Webhook URL](media/automation-webhooks/copy-webhook-url.png)
+1. Select **OK** to return to the **Add Webhook** page.
+
+1. From the **Add Webhook** page, select **Configure parameters and run settings** to open the **Parameters** page.
+
+ :::image type="content" source="media/automation-webhooks/add-webhook-page-parameters.png" alt-text="Add webhook page with parameters highlighted.":::
+
+1. Review the **Parameters** page. For the example runbook used in this article, no changes are needed. Select **OK** to return to the **Add Webhook** page.
+
+1. From the **Add Webhook** page, select **Create**. The webhook is created and you're returned to the Runbook **Overview** page.
-1. Click **Parameters** to provide values for the runbook parameters. If the runbook has mandatory parameters, you can't create the webhook unless you provide values.
+### Using PowerShell
-2. Click **Create** to create the webhook.
+1. Verify you have the latest version of the PowerShell [Az Module](/powershell/azure/new-azureps-module-az) installed.
+
+1. Sign in to Azure interactively using the [Connect-AzAccount](/powershell/module/Az.Accounts/Connect-AzAccount) cmdlet and follow the instructions.
+
+ ```powershell
+ # Sign in to your Azure subscription
+ $sub = Get-AzSubscription -ErrorAction SilentlyContinue
+ if(-not($sub))
+ {
+ Connect-AzAccount
+ }
+ ```
+
+1. Use the [New-AzAutomationWebhook](/powershell/module/az.automation/new-azautomationwebhook) cmdlet to create a webhook for an Automation runbook. Provide appropriate values for the variables and then run the script.
+
+ ```powershell
+ # Initialize variables with your relevant values
+ $resourceGroup = "resourceGroupName"
+ $automationAccount = "automationAccountName"
+ $runbook = "runbookName"
+ $psWebhook = "webhookName"
+
+ # Create webhook
+ $newWebhook = New-AzAutomationWebhook `
+ -ResourceGroup $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $psWebhook `
+ -RunbookName $runbook `
+ -IsEnabled $True `
+ -ExpiryTime "12/31/2022" `
+ -Force
+
+ # Store URL in variable; reveal variable
+ $uri = $newWebhook.WebhookURI
+ $uri
+ ```
+
+ The output will be a URL that looks similar to: `https://ad7f1818-7ea9-4567-b43a.webhook.wus.azure-automation.net/webhooks?token=uTi69VZ4RCa42zfKHCeHmJa2W9fd`
+
+1. You can also verify the webhook with the PowerShell cmdlet [Get-AzAutomationWebhook](/powershell/module/az.automation/get-azautomationwebhook).
+
+ ```powershell
+ Get-AzAutomationWebhook `
+ -ResourceGroup $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $psWebhook
+ ```
+
+### Using REST
+
+The PUT command is documented at [Webhook - Create Or Update](/rest/api/automation/webhook/create-or-update). This example uses the PowerShell cmdlet [Invoke-RestMethod](/powershell/module/microsoft.powershell.utility/invoke-restmethod) to send the PUT request.
+
+1. Create a file called `webhook.json` and then paste the following code:
+
+ ```json
+ {
+ "name": "RestWebhook",
+ "properties": {
+ "isEnabled": true,
+ "expiryTime": "2022-03-29T22:18:13.7002872Z",
+ "runbook": {
+ "name": "runbookName"
+ }
+ }
+ }
+ ```
+
+ Before running the script, replace the value of the **runbook:name** property with the actual name of your runbook. Review [Webhook properties](#webhook-properties) for more information about these properties.
+
+1. Verify you have the latest version of the PowerShell [Az Module](/powershell/azure/new-azureps-module-az) installed.
+
+1. Sign in to Azure interactively using the [Connect-AzAccount](/powershell/module/Az.Accounts/Connect-AzAccount) cmdlet and follow the instructions.
+
+ ```powershell
+ # Sign in to your Azure subscription
+ $sub = Get-AzSubscription -ErrorAction SilentlyContinue
+ if(-not($sub))
+ {
+ Connect-AzAccount
+ }
+ ```
+
+1. Provide appropriate values for the variables and then run the script.
+
+ ```powershell
+ # Initialize variables
+ $subscription = "subscriptionID"
+ $resourceGroup = "resourceGroup"
+ $automationAccount = "automationAccount"
+ $runbook = "runbookName"
+ $restWebhook = "webhookName"
+ $file = "path\webhook.json"
+
+ # consume file
+ $body = Get-Content $file
+
+ # Craft Uri
+ $restURI = "https://management.azure.com/subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Automation/automationAccounts/$automationAccount/webhooks/$restWebhook`?api-version=2015-10-31"
+ ```
+
+1. Run the following script to obtain an access token. If your access token has expired, rerun the script.
+
+ ```powershell
+ # Obtain access token
+ $azContext = Get-AzContext
+ $azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
+ $profileClient = New-Object -TypeName Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient -ArgumentList ($azProfile)
+ $token = $profileClient.AcquireAccessToken($azContext.Subscription.TenantId)
+ $authHeader = @{
+ 'Content-Type'='application/json'
+ 'Authorization'='Bearer ' + $token.AccessToken
+ }
+ ```
+
+1. Run the following script to create the webhook using the REST API.
+
+ ```powershell
+ # Invoke the REST API
+ # Store URL in variable; reveal variable
+ $response = Invoke-RestMethod -Uri $restURI -Method Put -Headers $authHeader -Body $body
+ $webhookURI = $response.properties.uri
+ $webhookURI
+ ```
+
+ The output is a URL that looks similar to: `https://ad7f1818-7ea9-4567-b43a.webhook.wus.azure-automation.net/webhooks?token=uTi69VZ4RCa42zfKHCeHmJa2W9fd`
+
+1. You can also use [Webhook - Get](/rest/api/automation/webhook/get) to retrieve the webhook by its name, using the following PowerShell commands:
+
+ ```powershell
+ $response = Invoke-RestMethod -Uri $restURI -Method GET -Headers $authHeader
+ $response | ConvertTo-Json
+ ```
## Use a webhook
-To use a webhook after it has been created, your client must issue an HTTP `POST` request with the URL for the webhook. The syntax is:
+This example uses the PowerShell cmdlet [Invoke-WebRequest](/powershell/module/microsoft.powershell.utility/invoke-webrequest) to send the POST request to your new webhook.
-```http
-http://<Webhook Server>/token?=<Token Value>
-```
+1. Prepare values to pass to the runbook as the body for the webhook call. For relatively simple values, you could script the values as follows:
-The client receives one of the following return codes from the `POST` request.
+ ```powershell
+ $Names = @(
+ @{ Name="Hawaii"},
+ @{ Name="Seattle"},
+ @{ Name="Florida"}
+ )
+
+ $body = ConvertTo-Json -InputObject $Names
+ ```
-| Code | Text | Description |
-|: |: |: |
-| 202 |Accepted |The request was accepted, and the runbook was successfully queued. |
-| 400 |Bad Request |The request was not accepted for one of the following reasons: <ul> <li>The webhook has expired.</li> <li>The webhook is disabled.</li> <li>The token in the URL is invalid.</li> </ul> |
-| 404 |Not Found |The request was not accepted for one of the following reasons: <ul> <li>The webhook was not found.</li> <li>The runbook was not found.</li> <li>The account was not found.</li> </ul> |
-| 500 |Internal Server Error |The URL was valid, but an error occurred. Please resubmit the request. |
+1. For larger sets, you may wish to use a file. Create a file named `names.json` and then paste the following code:
-Assuming the request is successful, the webhook response contains the job ID in JSON format as shown below. It contains a single job ID, but the JSON format allows for potential future enhancements.
+ ```json
+ [
+ { "Name": "Hawaii" },
+ { "Name": "Florida" },
+ { "Name": "Seattle" }
+ ]
+ ```
-```json
-{"JobIds":["<JobId>"]}
-```
+ Change the value of the `$file` variable to the actual path of the JSON file before running the following PowerShell commands.
-The client can't determine when the runbook job completes or its completion status from the webhook. It can find out this information using the job ID with another mechanism, such as [Windows PowerShell](/powershell/module/servicemanagement/azure.service/get-azureautomationjob) or the [Azure Automation API](/rest/api/automation/job).
+ ```powershell
+ # Revise file path with actual path
+ $file = "path\names.json"
+ $bodyFile = Get-Content -Path $file
+ ```
-### Use a webhook from an ARM template
+1. Run the following PowerShell commands to call the webhook.
-Automation webhooks can also be invoked by [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/overview.md). The ARM template issues a `POST` request and receives a return code just like any other client. See [Use a webhook](#use-a-webhook).
+ ```powershell
+ $response = Invoke-WebRequest -Method Post -Uri $webhookURI -Body $body -UseBasicParsing
+ $response
+
+ $responseFile = Invoke-WebRequest -Method Post -Uri $webhookURI -Body $bodyFile -UseBasicParsing
+ $responseFile
+ ```
- > [!NOTE]
- > For security reasons, the URI is only returned the first time a template is deployed.
+ For illustrative purposes, two calls were made for the two different methods of producing the body. For production, use only one method. The output should look similar to the following (only one output is shown):
-This sample template creates a test environment and returns the URI for the webhook it creates.
+ :::image type="content" source="media/automation-webhooks/webhook-post-output.png" alt-text="Output from webhook call.":::
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "automationAccountName": {
- "type": "String",
- "metadata": {
- "description": "Automation account name"
- }
- },
- "webhookName": {
- "type": "String",
- "metadata": {
- "description": "Webhook Name"
- }
- },
- "runbookName": {
- "type": "String",
- "metadata": {
- "description": "Runbook Name for which webhook will be created"
- }
- },
- "WebhookExpiryTime": {
- "type": "String",
- "metadata": {
- "description": "Webhook Expiry time"
- }
- },
- "_artifactsLocation": {
- "defaultValue": "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.automation/101-automation/",
- "type": "String",
- "metadata": {
- "description": "URI to artifacts location"
- }
- }
- },
- "resources": [
- {
- "type": "Microsoft.Automation/automationAccounts",
- "apiVersion": "2020-01-13-preview",
- "name": "[parameters('automationAccountName')]",
- "location": "[resourceGroup().location]",
- "properties": {
- "sku": {
- "name": "Free"
- }
- },
- "resources": [
- {
- "type": "runbooks",
- "apiVersion": "2018-06-30",
- "name": "[parameters('runbookName')]",
- "location": "[resourceGroup().location]",
- "dependsOn": [
- "[parameters('automationAccountName')]"
- ],
- "properties": {
- "runbookType": "Python2",
- "logProgress": "false",
- "logVerbose": "false",
- "description": "Sample Runbook",
- "publishContentLink": {
- "uri": "[uri(parameters('_artifactsLocation'), 'scripts/AzureAutomationTutorialPython2.py')]",
- "version": "1.0.0.0"
- }
- }
- },
- {
- "type": "webhooks",
- "apiVersion": "2018-06-30",
- "name": "[parameters('webhookName')]",
- "dependsOn": [
- "[parameters('automationAccountName')]",
- "[parameters('runbookName')]"
- ],
- "properties": {
- "isEnabled": true,
- "expiryTime": "[parameters('WebhookExpiryTime')]",
- "runbook": {
- "name": "[parameters('runbookName')]"
- }
- }
- }
- ]
- }
- ],
- "outputs": {
- "webhookUri": {
- "type": "String",
- "value": "[reference(parameters('webhookName')).uri]"
- }
- }
-}
-```
+ The client receives one of the following return codes from the `POST` request.
-## Renew a webhook
+ | Code | Text | Description |
+ |:--- |:--- |:--- |
+ | 202 |Accepted |The request was accepted, and the runbook was successfully queued. |
+ | 400 |Bad Request |The request wasn't accepted for one of the following reasons: <ul> <li>The webhook has expired.</li> <li>The webhook is disabled.</li> <li>The token in the URL is invalid.</li> </ul> |
+ | 404 |Not Found |The request wasn't accepted for one of the following reasons: <ul> <li>The webhook wasn't found.</li> <li>The runbook wasn't found.</li> <li>The account wasn't found.</li> </ul> |
+ | 500 |Internal Server Error |The URL was valid, but an error occurred. Resubmit the request. |
-When a webhook is created, it has a validity time period of ten years, after which it automatically expires. Once a webhook has expired, you can't reactivate it. You can only remove and then recreate it.
+ Assuming the request is successful, the webhook response contains the job ID in JSON format as shown below. It contains a single job ID, but the JSON format allows for potential future enhancements.
-You can extend a webhook that has not reached its expiration time. To extend a webhook:
+ ```json
+ {"JobIds":["<JobId>"]}
+ ```
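+
+ A minimal status-code check in PowerShell is sketched below (an illustration only; it assumes the `$response` object from the previous commands):
+
+ ```powershell
+ # Note: Invoke-WebRequest throws a terminating error for 4xx/5xx responses in Windows PowerShell,
+ # so wrap the call in try/catch if you need to inspect those codes programmatically.
+ if ($response.StatusCode -eq 202) {
+     Write-Output "Runbook successfully queued."
+ }
+ ```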
-1. Navigate to the runbook that contains the webhook.
-2. Select **Webhooks** under **Resources**.
-3. Click the webhook that you want to extend.
-4. In the Webhook page, choose a new expiration date and time and click **Save**.
+1. Use the PowerShell cmdlet [Get-AzAutomationJobOutput](/powershell/module/az.automation/get-azautomationjoboutput) to retrieve the output. You could also use the [Azure Automation API](/rest/api/automation/job).
-## Sample runbook
+ ```powershell
+ # Isolate the job ID
+ $jobid = (ConvertFrom-Json ($response.Content)).jobids[0]
+
+ # Get output
+ Get-AzAutomationJobOutput `
+ -AutomationAccountName $automationAccount `
+ -Id $jobid `
+ -ResourceGroupName $resourceGroup `
+ -Stream Output
+ ```
-The following sample runbook accepts the webhook data and starts the virtual machines specified in the request body. To test this runbook, in your Automation account under **Runbooks**, click **Create a runbook**. If you don't know how to create a runbook, see [Creating a runbook](automation-quickstart-create-runbook.md).
+ The output should look similar to the following:
-> [!NOTE]
-> For non-graphical PowerShell runbooks, `Add-AzAccount` and `Add-AzureRMAccount` are aliases for [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount). You can use these cmdlets or you can [update your modules](automation-update-azure-modules.md) in your Automation account to the latest versions. You might need to update your modules even if you have just created a new Automation account.
+ :::image type="content" source="media/automation-webhooks/webhook-job-output.png" alt-text="Output from webhook job.":::
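+
+ The webhook starts the runbook asynchronously, so the job might still be running when you first request its output. A minimal polling sketch using [Get-AzAutomationJob](/powershell/module/az.automation/get-azautomationjob) follows; it assumes the `$jobid`, `$resourceGroup`, and `$automationAccount` variables from the earlier commands:
+
+ ```powershell
+ # Poll until the job reaches a terminal state, then request the output again
+ do {
+     Start-Sleep -Seconds 5
+     $job = Get-AzAutomationJob `
+         -Id $jobid `
+         -ResourceGroupName $resourceGroup `
+         -AutomationAccountName $automationAccount
+     Write-Output "Job status: $($job.Status)"
+ } while ($job.Status -notin @('Completed', 'Failed', 'Stopped', 'Suspended'))
+ ```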
-```powershell
-param
-(
- [Parameter (Mandatory = $false)]
- [object] $WebhookData
-)
+## Update a webhook
-# If runbook was called from Webhook, WebhookData will not be null.
-if ($WebhookData) {
+When a webhook is created, it has a validity period of 10 years, after which it automatically expires. Once a webhook has expired, you can't reactivate it; you can only remove and then recreate it. You can extend a webhook that hasn't reached its expiration time. To extend a webhook, perform the following steps.
- # Check header for message to validate request
- if ($WebhookData.RequestHeader.message -eq 'StartedbyContoso')
- {
- Write-Output "Header has required information"}
- else
- {
- Write-Output "Header missing required information";
- exit;
- }
+1. Navigate to the runbook that contains the webhook.
+1. Under **Resources**, select **Webhooks**, and then the webhook that you want to extend.
+1. From the **Webhook** page, choose a new expiration date and time and then select **Save**.
- # Retrieve VMs from Webhook request body
- $vms = (ConvertFrom-Json -InputObject $WebhookData.RequestBody)
+Review the API call [Webhook - Update](/rest/api/automation/webhook/update) and PowerShell cmdlet [Set-AzAutomationWebhook](/powershell/module/az.automation/set-azautomationwebhook) for other possible modifications.
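+
+For example, a minimal sketch of temporarily disabling a webhook with [Set-AzAutomationWebhook](/powershell/module/az.automation/set-azautomationwebhook) (variable names follow the earlier steps):
+
+```powershell
+# Disable the webhook without deleting it; pass $true later to re-enable it
+Set-AzAutomationWebhook `
+    -Name $psWebhook `
+    -IsEnabled $false `
+    -ResourceGroupName $resourceGroup `
+    -AutomationAccountName $automationAccount
+```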
- # Authenticate to Azure by using the service principal and certificate. Then, set the subscription.
+## Clean up resources
- Write-Output "Authenticating to Azure with service principal and certificate"
- $ConnectionAssetName = "AzureRunAsConnection"
- Write-Output "Get connection asset: $ConnectionAssetName"
+Here are examples of removing a webhook from an Automation runbook.
- $Conn = Get-AutomationConnection -Name $ConnectionAssetName
- if ($Conn -eq $null)
- {
- throw "Could not retrieve connection asset: $ConnectionAssetName. Check that this asset exists in the Automation account."
- }
- Write-Output "Authenticating to Azure with service principal."
- Add-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -ApplicationId $Conn.ApplicationID -CertificateThumbprint $Conn.CertificateThumbprint | Write-Output
+- With PowerShell, use the [Remove-AzAutomationWebhook](/powershell/module/az.automation/remove-azautomationwebhook) cmdlet, as shown below. No output is returned.
- # Start each virtual machine
- foreach ($vm in $vms)
- {
- $vmName = $vm.Name
- Write-Output "Starting $vmName"
- Start-AzVM -Name $vm.Name -ResourceGroup $vm.ResourceGroup
- }
-}
-else {
- # Error
- write-Error "This runbook is meant to be started from an Azure alert webhook only."
-}
-```
+ ```powershell
+ Remove-AzAutomationWebhook `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $psWebhook
+ ```
-## Test the sample
+- Using the REST API, call the [Webhook - Delete](/rest/api/automation/webhook/delete) operation, as shown below.
-The following example uses Windows PowerShell to start a runbook with a webhook. Any language that can make an HTTP request can use a webhook. Windows PowerShell is used here as an example.
+ ```powershell
+ Invoke-WebRequest -Method Delete -Uri $restURI -Headers $authHeader
+ ```
-The runbook is expecting a list of virtual machines formatted in JSON in the body of the request. The runbook validates as well that the headers contain a defined message to validate that the webhook caller is valid.
+ An output of `StatusCode : 200` means a successful deletion.
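+
+To confirm the removal, you can list the webhooks that remain in the Automation account; a minimal sketch (assumes the variables from the earlier steps, and the deleted names should no longer appear in the list):
+
+```powershell
+# List remaining webhooks in the Automation account
+Get-AzAutomationWebhook `
+    -ResourceGroupName $resourceGroup `
+    -AutomationAccountName $automationAccount
+```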
-```azurepowershell-interactive
-$uri = "<webHook Uri>"
+## Create runbook and webhook with ARM template
-$vms = @(
- @{ Name="vm01";ResourceGroup="vm01"},
- @{ Name="vm02";ResourceGroup="vm02"}
- )
-$body = ConvertTo-Json -InputObject $vms
-$header = @{ message="StartedbyContoso"}
-$response = Invoke-WebRequest -Method Post -Uri $uri -Body $body -Headers $header
-$jobid = (ConvertFrom-Json ($response.Content)).jobids[0]
-```
+Automation webhooks can also be created using [Azure Resource Manager](../azure-resource-manager/templates/overview.md) templates. This sample template creates an Automation account, four runbooks, and a webhook for the named runbook.
-The following example shows the body of the request that is available to the runbook in the `RequestBody` property of `WebhookData`. This value is formatted in JSON to be compatible with the format included in the body of the request.
+1. Create a file named `webhook_deploy.json` and then paste the following code:
-```json
-[
+ ```json
{
- "Name": "vm01",
- "ResourceGroup": "myResourceGroup"
- },
- {
- "Name": "vm02",
- "ResourceGroup": "myResourceGroup"
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "automationAccountName": {
+ "type": "String",
+ "metadata": {
+ "description": "Automation account name"
+ }
+ },
+ "webhookName": {
+ "type": "String",
+ "metadata": {
+ "description": "Webhook Name"
+ }
+ },
+ "runbookName": {
+ "type": "String",
+ "metadata": {
+ "description": "Runbook Name for which webhook will be created"
+ }
+ },
+ "WebhookExpiryTime": {
+ "type": "String",
+ "metadata": {
+ "description": "Webhook Expiry time"
+ }
+ },
+ "_artifactsLocation": {
+ "defaultValue": "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.automation/101-automation/",
+ "type": "String",
+ "metadata": {
+ "description": "URI to artifacts location"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Automation/automationAccounts",
+ "apiVersion": "2020-01-13-preview",
+ "name": "[parameters('automationAccountName')]",
+ "location": "[resourceGroup().location]",
+ "properties": {
+ "sku": {
+ "name": "Free"
+ }
+ },
+ "resources": [
+ {
+ "type": "runbooks",
+ "apiVersion": "2018-06-30",
+ "name": "[parameters('runbookName')]",
+ "location": "[resourceGroup().location]",
+ "dependsOn": [
+ "[parameters('automationAccountName')]"
+ ],
+ "properties": {
+ "runbookType": "Python2",
+ "logProgress": "false",
+ "logVerbose": "false",
+ "description": "Sample Runbook",
+ "publishContentLink": {
+ "uri": "[uri(parameters('_artifactsLocation'), 'scripts/AzureAutomationTutorialPython2.py')]",
+ "version": "1.0.0.0"
+ }
+ }
+ },
+ {
+ "type": "webhooks",
+ "apiVersion": "2018-06-30",
+ "name": "[parameters('webhookName')]",
+ "dependsOn": [
+ "[parameters('automationAccountName')]",
+ "[parameters('runbookName')]"
+ ],
+ "properties": {
+ "isEnabled": true,
+ "expiryTime": "[parameters('WebhookExpiryTime')]",
+ "runbook": {
+ "name": "[parameters('runbookName')]"
+ }
+ }
+ }
+ ]
+ }
+ ],
+ "outputs": {
+ "webhookUri": {
+ "type": "String",
+ "value": "[reference(parameters('webhookName')).uri]"
+ }
+ }
}
-]
-```
+ ```
+
+1. The following PowerShell code sample deploys the template from your machine. Provide appropriate values for the variables and then execute the script.
+
+ ```powershell
+ $resourceGroup = "resourceGroup"
+ $templateFile = "path\webhook_deploy.json"
+ $armAutomationAccount = "automationAccount"
+ $armRunbook = "ARMrunbookName"
+ $armWebhook = "webhookName"
+ $webhookExpiryTime = "12-31-2022"
+
+ New-AzResourceGroupDeployment `
+ -Name "testDeployment" `
+ -ResourceGroupName $resourceGroup `
+ -TemplateFile $templateFile `
+ -automationAccountName $armAutomationAccount `
+ -runbookName $armRunbook `
+ -webhookName $armWebhook `
+ -WebhookExpiryTime $webhookExpiryTime
+ ```
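+
+ After the deployment completes, you can optionally read the `webhookUri` template output. A minimal sketch follows (the deployment name matches the command above; per the note below, the output is only populated the first time the template is deployed):
+
+ ```powershell
+ # Read the webhookUri output value from the deployment
+ $deployment = Get-AzResourceGroupDeployment `
+     -ResourceGroupName $resourceGroup `
+     -Name "testDeployment"
+ $deployment.Outputs["webhookUri"].Value
+ ```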
-The following image shows the request being sent from Windows PowerShell and the resulting response. The job ID is extracted from the response and converted to a string.
-
-![Webhooks button](media/automation-webhooks/webhook-request-response.png)
+ > [!NOTE]
+ > For security reasons, the URI is only returned the first time a template is deployed.
## Next steps
automation Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/shared-resources/modules.md
Due to the number of modules and cmdlets included, it's difficult to know before
These are known limitations with the sandbox. The recommended workaround is to deploy a [Hybrid Runbook Worker](../automation-hybrid-runbook-worker.md) or use [Azure Functions](../../azure-functions/functions-overview.md).
+> [!IMPORTANT]
+> Do not include the keyword "AzureRm" in any script designed to be executed with the Az module. Inclusion of the keyword, even in a comment, may cause the AzureRm module to load and then conflict with the Az module.
+ ## Default modules The following table lists modules that Azure Automation imports by default when you create your Automation account. Automation can import newer versions of these modules. However, you can't remove the original version from your Automation account, even if you delete a newer version. Note that these default modules include several AzureRM modules.
automation Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/runbooks.md
Title: Troubleshoot Azure Automation runbook issues description: This article tells how to troubleshoot and resolve issues with Azure Automation runbooks. Previously updated : 07/07/2021 Last updated : 02/11/2021
When you receive errors during runbook execution in Azure Automation, you can us
1. If your runbook is suspended or unexpectedly fails: * [Renew the certificate](../manage-runas-account.md#cert-renewal) if the Run As account has expired.
- * [Renew the webhook](../automation-webhooks.md#renew-a-webhook) if you're trying to use an expired webhook to start the runbook.
+ * [Renew the webhook](../automation-webhooks.md#update-a-webhook) if you're trying to use an expired webhook to start the runbook.
* [Check job statuses](../automation-runbook-execution.md#job-statuses) to determine current runbook statuses and some possible causes of the issue. * [Add additional output](../automation-runbook-output-and-messages.md#working-with-message-streams) to the runbook to identify what happens before the runbook is suspended. * [Handle any exceptions](../automation-runbook-execution.md#exceptions) that are thrown by your job.
When you receive errors during runbook execution in Azure Automation, you can us
If you're running your runbooks on a Hybrid Runbook Worker instead of in Azure Automation, you might need to [troubleshoot the hybrid worker itself](hybrid-runbook-worker.md).
-## Scenario: PowerShell #Requires statement does not work as expected
-
-### Issue
-
-Your Azure Automation cloud or hybrid jobs includes the PowerShell [#Requires](/powershell/module/microsoft.powershell.core/about/about_requires) statement, but the statement does not prevent the script from executing when the required condition is not met.
-
-### Cause
-
-Runbooks can't use the PowerShell [#Requires](/powershell/module/microsoft.powershell.core/about/about_requires) statement, it is not supported in Azure sandbox or on Hybrid Runbook Workers and will cause the job to fail.
-
-### Resolution
-
-Ensure all script requirements are met before execution.
- ## <a name="runbook-fails-no-permission"></a>Scenario: Runbook fails with a No permission or Forbidden 403 error ### Issue
The webhook that you're trying to call is either disabled or is expired.
### Resolution
-If the webhook is disabled, you can re-enable it through the Azure portal. If the webhook has expired, you must delete and then re-create it. You can only [renew a webhook](../automation-webhooks.md#renew-a-webhook) if it hasn't already expired.
+If the webhook is disabled, you can re-enable it through the Azure portal. If the webhook has expired, you must delete and then re-create it. You can only [renew a webhook](../automation-webhooks.md#update-a-webhook) if it hasn't already expired.
## <a name="429"></a>Scenario: 429: The request rate is currently too large
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/whats-new.md
This page is updated monthly, so revisit it regularly.
## June 2021
-### Hybrid Runbook Worker support for Ubuntu 20.04 LTS
-
-**Type:** New feature
-
-See [Supported Linux operating systems](./automation-linux-hrw-install.md#supported-linux-operating-systems) for a complete list.
- ### Security update for Log Analytics Contributor role **Type:** Plan for change
Two new scripts have been added to the Azure Automation [GitHub repository](http
**Type:** New feature
-For more information, see [Use a webhook from an ARM template](./automation-webhooks.md#use-a-webhook-from-an-arm-template).
+For more information, see [Use a webhook from an ARM template](./automation-webhooks.md#create-runbook-and-webhook-with-arm-template).
### Azure Update Management now supports Centos 8.x, Red Hat Enterprise Linux Server 8.x, and SUSE Linux Enterprise Server 15
Automation support of service tags allows or denies the traffic for the Automati
**Type:** Plan for change
-Azure Automation fully supports TLS 1.2 and all client calls (through webhooks, DSC nodes, and hybrid worker). TLS 1.1 and TLS 1.0 are still supported for backward compatibility with older clients until customers standardize and fully migrate to TLS 1.2. To learn more, see the [documentation](automation-managing-data.md#tls-12-for-azure-automation).
+Azure Automation fully supports TLS 1.2 and all client calls (through webhooks, DSC nodes, and hybrid worker). TLS 1.1 and TLS 1.0 are still supported for backward compatibility with older clients until customers standardize and fully migrate to TLS 1.2.
## January 2020
azure-arc Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/private-link-security.md
Title: Use Azure Private Link to securely connect networks to Azure Arc description: Learn how to use Azure Private Link to securely connect networks to Azure Arc. Previously updated : 07/16/2021 Last updated : 07/20/2021 # Use Azure Private Link to securely connect networks to Azure Arc
See the visual diagram under the section [How it works](#how-it-works) for the n
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. To register your subscription for the Azure Arc-enabled servers Private Link preview, you need to register the resource provider **Microsoft.HybridCompute**. You can do this from the Azure portal, with Azure PowerShell, or the Azure CLI. Do not proceed with step 3 until you've confirmed the resource provider is registered, otherwise you'll recieve a deployment error.
-
- * To register from the Azure portal, see [Register the resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal) to enable the Arc-enabled servers Private Link preview from the Azure portal. For step 5, specify **Microsoft.HybridCompute**.
-
- * To register using the Azure PowerShell, run the following command. See [registering a resource provider with Azure PowerShell](../../azure-resource-manager/management/resource-providers-and-types.md#azure-powershell) to learn more.
-
- ```azurepowershell
- Register-AzProviderFeature -ProviderNamespace Microsoft.HybridCompute -FeatureName ArcServerPrivateLinkPreview
- ```
-
- Which returns a message that registration is on-going. To verify the resource provider is successfully registered, use:
-
- ```azurepowershell
- Get-AzResourceProvider -ProviderNamespace Microsoft.HybridCompute
- ```
-
- * To register using the Azure CLI, run the following command. See [registering a resource provider with the Azure CLI](../../azure-resource-manager/management/resource-providers-and-types.md#azure-cli) to learn more.
-
- ```azurecli
- az feature register --namespace Microsoft.HybridCompute --name ArcServerPrivateLinkPreview
- ```
-
- Which returns a message that registration is on-going. To verify the resource provider is successfully registered, use:
-
- ```azurecli-interactive
- az provider show --namespace Microsoft.HybridCompute
- ```
- 1. Go to **Create a resource** in the Azure portal and search for **Azure Arc Private Link Scope**. Or you can use the following link to open the [Azure Arc Private Link Scope](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.HybridCompute%2FprivateLinkScopes) page in the portal. :::image type="content" source="./media/private-link-security/find-scope.png" alt-text="Find Private Link Scope" border="true":::
The private endpoint documentation provides guidance for configuring [on-premise
If you opted out of using Azure private DNS zones during private endpoint creation, you will need to create the required DNS records in your on-premises DNS server.
-1. Go to the Azure portal with the Azure Arc-enabled servers private link preview features enabled.
+1. Go to the Azure portal.
1. Navigate to the private endpoint resource associated with your virtual network and private link scope.
It may take up to 15 minutes for the Private Link Scope to accept connections fr
## Troubleshooting
-1. Ensure the required resource providers and feature flags are registered for your subscription.
-
- To check with the Azure CLI, run the following commands.
-
- ```azurecli
- az feature show --namespace Microsoft.Network --name AllowPrivateEndpoints
-
- {
- "id": "/subscriptions/ID/providers/Microsoft.Features/providers/Microsoft.Network/features/AllowPrivateEndpoints",
- "name": "Microsoft.Network/AllowPrivateEndpoints",
- "properties": {
- "state": "Registered"
- },
- "type": "Microsoft.Features/providers/features"
- }
- ```
-
- ```azurecli
- az feature show --namespace Microsoft.HybridCompute --name ArcServerPrivateLinkPreview
-
- {
- "id": "/subscriptions/ID/providers/Microsoft.Features/providers/microsoft.hybridcompute/features/ArcServerPrivateLinkPreview",
- "name": "microsoft.hybridcompute/ArcServerPrivateLinkPreview",
- "properties": {
- "state": "Registered"
- },
- "type": "Microsoft.Features/providers/features"
- }
- ```
-
- To check with Azure PowerShell, run the following commands:
-
- ```azurepowershell
- Get-AzProviderFeature -ProviderNamespace Microsoft.Network -FeatureName AllowPrivateEndpoints
-
- FeatureName ProviderName RegistrationState
- -- --
- AllowPrivateEndpoints Microsoft.Network Registered
- ```
-
- ```azurepowershell
- Get-AzProviderFeature -ProviderNamespace Microsoft.HybridCompute -FeatureName ArcServerPrivateLinkPreview
-
- FeatureName ProviderName RegistrationState
- -- --
- ArcServerPrivateLinkPreview Microsoft.HybridCompute Registered
- ```
-
- If the features show as registered but you are still unable to see the `Microsoft.HybridCompute/privateLinkScopes` resource when creating a private endpoint, try re-registering the resource provider as shown [here](agent-overview.md#register-azure-resource-providers).
- 1. Check your on-premises DNS server(s) to verify it is either forwarding to Azure DNS or is configured with appropriate A records in your private link zone. These lookup commands should return private IP addresses in your Azure virtual network. If they resolve public IP addresses, double check your machine or server and network's DNS configuration. nslookup gbl.his.arc.azure.com
It may take up to 15 minutes for the Private Link Scope to accept connections fr
* If you are experiencing issues with your Azure Private Endpoint connectivity setup, see [Troubleshoot Azure Private Endpoint connectivity problems](../../private-link/troubleshoot-private-endpoint-connectivity.md).
-* See the following to configure Private Link for [Azure Automation](../../automation/how-to/private-link-security.md), [Azure Monitor](../../azure-monitor/logs/private-link-security.md), [Azure Key Vault](../../key-vault/general/private-link-service.md), or [Azure Blob storage](../../private-link/tutorial-private-endpoint-storage-portal.md).
+* See the following to configure Private Link for [Azure Automation](../../automation/how-to/private-link-security.md), [Azure Monitor](../../azure-monitor/logs/private-link-security.md), [Azure Key Vault](../../key-vault/general/private-link-service.md), or [Azure Blob storage](../../private-link/tutorial-private-endpoint-storage-portal.md).
azure-functions Durable Functions Storage Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-storage-providers.md
Durable Functions automatically persists function parameters, return values, and other state to durable storage to guarantee reliable execution. The default configuration for Durable Functions stores this runtime state in an Azure Storage (classic) account. However, it's possible to configure Durable Functions v2.0 and above to use an alternate durable storage provider.
-Durable Functions is a set of Azure Functions triggers and bindings that are internally powered by the [Durable Task Framework](https://github.com/Azure/durabletask) (DTFx). DTFx supports various backend storage providers, including the Azure Storage provider used by Durable Functions. Starting in Durable Functions **v2.4.3**, users can configure their function apps to use DTFx storage providers other than the Azure Storage provider.
+Durable Functions is a set of Azure Functions triggers and bindings that are internally powered by the [Durable Task Framework](https://github.com/Azure/durabletask) (DTFx). DTFx supports various backend storage providers, including the Azure Storage provider used by Durable Functions. Starting in Durable Functions **v2.5.0**, users can configure their function apps to use DTFx storage providers other than the Azure Storage provider.
> [!NOTE] > The choice to use storage providers other than Azure Storage should be made carefully. Most function apps running in Azure should use the default Azure Storage provider for Durable Functions. However, there are important cost, scalability, and data management tradeoffs that should be considered when deciding whether to use an alternate storage provider. This article describes many of these tradeoffs in detail.
+>
+> Also note that it's not currently possible to migrate data from one storage provider to another. If you want to use a new storage provider, you should create a new app configured with the new storage provider.
Two alternate DTFx storage providers were developed for use with Durable Functions, the _Netherite_ storage provider and the _Microsoft SQL Server (MSSQL)_ storage provider. This article describes all three supported providers, compares them against each other, and provides basic information about how to get started using them.
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-overview.md
The following table shows the current support for Azure Monitor agent with Azure
## Coexistence with other agents The Azure Monitor agent can coexist with the existing agents so that you can continue to use their existing functionality during evaluation or migration. This is particularly important because of the limitations supporting existing solutions. You should be careful though in collecting duplicate data since this could skew query results and result in additional charges for data ingestion and retention.
-For example, VM insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You may also have configured the workspace to collect Windows events and Syslog events from agents. If you install the Azure Monitor agent and create a data collection rule for these same events and performance data, it will result in duplicate data.
+For example, VM Insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You may also have configured the workspace to collect Windows events and Syslog events from agents. If you install the Azure Monitor agent and create a data collection rule for these same events and performance data, it will result in duplicate data.
-As such, ensure you're not collecting the same data from both agents, and if so, ensure they are going to separate destinations.
+As such, ensure you're not collecting the same data from both agents. If you are, ensure they are going to separate destinations.
## Costs
See [Supported operating systems](agents-overview.md#supported-operating-systems
The Azure Monitor agent doesn't require any keys but instead requires a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity). You must have a system-assigned managed identity enabled on each virtual machine before deploying the agent. ## Networking
-The Azure Monitor agent supports Azure service tags (both AzureMonitor and AzureResourceManager tags are required) but does not yet work with Azure Monitor Private Link Scopes or direct proxies.
+The Azure Monitor agent supports Azure service tags (both AzureMonitor and AzureResourceManager tags are required) but does not yet work with Azure Monitor Private Link Scopes. If the machine connects through a proxy server to communicate over the internet, review the requirements below to understand the network configuration required.
+### Proxy configuration
+
+The Azure Monitor agent extensions for Windows and Linux can communicate either through a proxy server or a Log Analytics gateway to Azure Monitor using the HTTPS protocol (for Azure virtual machines, Azure virtual machine scale sets, and Azure Arc for servers). This is configured using extension settings as described below; both anonymous and basic authentication (username/password) are supported.
+
+1. Use this simple flowchart to determine the values of *setting* and *protectedSetting* parameters first:
+
+ ![Flowchart to determine the values of setting and protectedSetting parameters when enabling the extension](media/azure-monitor-agent-overview/proxy-flowchart.png)
++
+2. Once the values of the *setting* and *protectedSetting* parameters are determined, provide these additional parameters when you deploy the Azure Monitor agent using PowerShell commands (examples for Azure virtual machines are below; a sketch of building the strings follows the table):
+
+ | Parameter | Value |
+ |:|:|
+ | SettingString | JSON object from flowchart above, converted to string; skip if not applicable. Example: {"proxy":{"mode":"application","address":"http://[address]:[port]","auth": false}} |
+ | ProtectedSettingString | JSON object from flowchart above, converted to string; skip if not applicable. Example: {"proxy":{"username": "[username]","password": "[password]"}} |
++
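+
+One way to produce these strings is to build the objects in PowerShell and serialize them; a minimal sketch (the property names simply mirror the examples in the table above):
+
+```powershell
+# Build the proxy settings as hashtables, then convert them to compact JSON strings
+$settings = @{ proxy = @{ mode = "application"; address = "http://[address]:[port]"; auth = $true } }  # use $false if no credentials are needed
+$protectedSettings = @{ proxy = @{ username = "[username]"; password = "[password]" } }
+$settingString = $settings | ConvertTo-Json -Depth 3 -Compress
+$protectedSettingString = $protectedSettings | ConvertTo-Json -Depth 3 -Compress
+```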
+# [Windows](#tab/PowerShellWindows)
+```powershell
+Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.0 -SettingString <settingString> -ProtectedSettingString <protectedSettingString>
+```
+
+# [Linux](#tab/PowerShellLinux)
+```powershell
+Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.5 -SettingString <settingString> -ProtectedSettingString <protectedSettingString>
+```
++ ## Next steps - [Install Azure Monitor agent](azure-monitor-agent-install.md) on Windows and Linux virtual machines.-- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Alerts Metric Create Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-metric-create-templates.md
Previously updated : 10/7/2020 Last updated : 7/21/2021 # Create a metric alert with a Resource Manager template
Save the json below as simplestaticmetricalert.json for the purpose of this walk
"defaultValue": "GreaterThan", "allowedValues": [ "Equals",
- "NotEquals",
"GreaterThan", "GreaterThanOrEqual", "LessThan",
Save the json below as customstaticmetricalert.json for the purpose of this walk
"defaultValue": "GreaterThan", "allowedValues": [ "Equals",
- "NotEquals",
"GreaterThan", "GreaterThanOrEqual", "LessThan",
Save the json below as all-vms-in-resource-group-static.json for the purpose of
"defaultValue": "GreaterThan", "allowedValues": [ "Equals",
- "NotEquals",
"GreaterThan", "GreaterThanOrEqual", "LessThan",
Save the json below as all-vms-in-subscription-static.json for the purpose of th
"defaultValue": "GreaterThan", "allowedValues": [ "Equals",
- "NotEquals",
"GreaterThan", "GreaterThanOrEqual", "LessThan",
Save the json below as list-of-vms-static.json for the purpose of this walk-thro
"defaultValue": "GreaterThan", "allowedValues": [ "Equals",
- "NotEquals",
"GreaterThan", "GreaterThanOrEqual", "LessThan",
az deployment group create \
- Read more about [alerts in Azure](./alerts-overview.md) - Learn how to [create an action group with Resource Manager templates](../alerts/action-groups-create-resource-manager-template.md)-- For the JSON syntax and properties, see [Microsoft.Insights/metricAlerts](/azure/templates/microsoft.insights/metricalerts) template reference.
+- For the JSON syntax and properties, see [Microsoft.Insights/metricAlerts](/azure/templates/microsoft.insights/metricalerts) template reference.
azure-monitor Annotations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/annotations.md
Title: Release annotations for Application Insights | Microsoft Docs description: Learn how to create annotations to track deployment or other significant events with Application Insights. Previously updated : 07/02/2021 Last updated : 07/20/2021
Create a separate API key for each of your Azure Pipelines release templates.
> [!NOTE] > Limits for API keys are described in the [REST API rate limits documentation](https://dev.applicationinsights.io/documentation/Authorization/Rate-limits).
+### Transition from classic to new release annotation
+
+To use the new release annotations:
+1. [Remove the Release Annotations extension](/azure/devops/marketplace/uninstall-disable-extensions).
+1. Remove the Application Insights Release Annotation task in your Azure Pipelines deployment.
+1. Create new release annotations with [Azure Pipelines](#release-annotations-with-azure-pipelines-build) or [Azure CLI](#create-release-annotations-with-azure-cli).
+ ## Next steps * [Create work items](./diagnostic-search.md#create-work-item)
azure-monitor Metrics Charts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-charts.md
Previously updated : 01/22/2019 Last updated : 06/30/2020
For example, suppose a chart shows the *Server response time* metric. It uses th
- If the time granularity is set to 30 minutes, the chart is drawn from 48 aggregated data points. That is, the line chart connects 48 dots in the chart plot area (24 hours x 2 data points per hour). Each data point represents the *average* of all captured response times for server requests that occurred during each of the relevant 30-minute time periods. - If you switch the time granularity to 15 minutes, you get 96 aggregated data points. That is, you get 24 hours x 4 data points per hour.
-The metrics explorer has five basic statistical aggregation types: sum, count, min, max, and average. The *sum* aggregation is sometimes called the *total* aggregation. For many metrics, the metrics explorer hides the aggregations that are irrelevant and can't be used.
+The metrics explorer has five basic statistical aggregation types: sum, count, min, max, and average. The *sum* aggregation is sometimes called the *total* aggregation. For many metrics, the metrics explorer hides the aggregations that are irrelevant and can't be used.
+
+For a deeper discussion of how metric aggregation works, see [Azure Monitor metrics aggregation and display explained](metrics-aggregation-explained.md).
* **Sum**: The sum of all values captured during the aggregation interval.
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-supported.md
Using diagnostic settings is the easiest way to route the metrics, but there are
> **Host OS metrics ARE available and listed below.** They are not the same. The Host OS metrics relate to the Hyper-V session hosting your guest OS session. > [!TIP]
-> Best practice is to use and configure the [Azure Diagnostics extension](../agents/diagnostics-extension-overview.md) to send guest OS performance metrics into the same Azure Monitor metric database where platform metrics are stored. The extension routes guest OS metrics through the [custom metrics](../essentials/metrics-custom-overview.md) API. Then you can chart, alert and otherwise use guest OS metrics like platform metrics. Alternatively or in addition, you can use the Log Analytics agent to send guest OS metrics to Azure Monitor Logs / Log Analytics. There you can query on those metrics in combination with non-metric data.
+> Best practice is to use and configure the Azure Monitor Agent to send guest OS performance metrics into the same Azure Monitor metric database where platform metrics are stored. The agent routes guest OS metrics through the [custom metrics](../essentials/metrics-custom-overview.md) API. You can then chart, alert and otherwise use guest OS metrics like platform metrics. Alternatively or in addition, you can send the guest OS metrics to Azure Monitor Logs using the same agent. There you can query on those metrics in combination with non-metric data using Log Analytics.
-For important additional information, see [Monitoring Agents Overview](../agents/agents-overview.md).
+The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analytics agent which were previously used for this routing. For important additional information, see [Monitoring Agents Overview](../agents/agents-overview.md).
## Table formatting
azure-monitor Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/data-explorer.md
Title: Azure Data Explorer Insights (ADX Insights preview)| Microsoft Docs
+ Title: Azure Data Explorer Insights (ADX Insights)| Microsoft Docs
description: This article describes Azure Data Explorer Insights (ADX Insights)
-# Azure Data Explorer Insights (preview)
+# Azure Data Explorer Insights
-Azure Data Explorer Insights (preview) provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures.
-This article will help you understand how to onboard and use Azure Data Explorer Insights (preview).
+Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures.
+This article will help you understand how to onboard and use Azure Data Explorer Insights.
-## Introduction to Azure Data Explorer Insights (preview)
+## Introduction to Azure Data Explorer Insights
Before jumping into the experience, you should understand how it presents and visualizes information. - **At scale perspective** showing a snapshot view of your clusters' primary metrics, to easily track performance of queries, ingestion, and export operations.
To view the performance of your clusters across all your subscriptions, perform
1. Sign into the [Azure portal](https://portal.azure.com/)
-2. Select **Monitor** from the left-hand pane in the Azure portal, and under the Insights Hub section, select **Azure Data Explorer Clusters (preview)**.
+2. Select **Monitor** from the left-hand pane in the Azure portal, and under the Insights Hub section, select **Azure Data Explorer Clusters**.
![Screenshot of overview experience with multiple graphs](./media/data-explorer/insights-hub.png)
To access Azure Data Explorer Insights directly from an Azure Data Explorer Clus
1. In the Azure portal, select **Azure Data Explorer Clusters**.
-2. From the list, choose an Azure Data Explorer Cluster. In the monitoring section, choose **Insights (preview)**.
+2. From the list, choose an Azure Data Explorer Cluster. In the monitoring section, choose **Insights**.
These views are also accessible by selecting the resource name of an Azure Data Explorer cluster from within the Azure Monitor insights view.
Customizations are saved to a custom workbook to prevent overwriting the default
For general troubleshooting guidance, refer to the dedicated workbook-based insights [troubleshooting article](troubleshoot-workbooks.md).
-This section will help you with the diagnosis and troubleshooting of some of the common issues you may encounter when using Azure Data Explorer Insights (preview). Use the list below to locate the information relevant to your specific issue.
+This section will help you with the diagnosis and troubleshooting of some of the common issues you may encounter when using Azure Data Explorer Insights. Use the list below to locate the information relevant to your specific issue.
### Why don't I see all my subscriptions in the subscription picker?
Currently, diagnostic logs do not work retroactively, so the data will only star
## Next steps
-Learn the scenarios workbooks are designed to support, how to author new and customize existing reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../visualize/workbooks-overview.md).
+Learn the scenarios workbooks are designed to support, how to author new and customize existing reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../visualize/workbooks-overview.md).
azure-monitor Surface Hubs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/surface-hubs.md
Last updated 01/16/2018
This article describes how you can use the Surface Hub solution in Azure Monitor to monitor Microsoft Surface Hub devices. The solution helps you track the health of your Surface Hubs as well as understand how they are being used.
-Each Surface Hub has the Microsoft Monitoring Agent installed. Its through the agent that you can send data from your Surface Hub to a Log Analytics workspace in Azure Monitor. Log files are read from your Surface Hubs and are then are sent to Azure Monitor. Issues like servers being offline, the calendar not syncing, or if the device account is unable to log into Skype are shown in the Surface Hub dashboard in Azure Monitor. By using the data in the dashboard, you can identify devices that are not running, or that are having other problems, and potentially apply fixes for the detected issues.
+Each Surface Hub has the Microsoft Monitoring Agent installed. It's through the agent that you can send data from your Surface Hub to a Log Analytics workspace in Azure Monitor. Log files are read from your Surface Hubs and are then sent to Azure Monitor. Issues like servers being offline, the calendar not syncing, or the device account being unable to log into Skype are shown in the Surface Hub dashboard in Azure Monitor. By using the data in the dashboard, you can identify devices that are not running, or that are having other problems, and potentially apply fixes for the detected issues.
## Install and configure the solution Use the following information to install and configure the solution. In order to manage your Surface Hubs in Azure Monitor, you'll need the following:
You'll need the workspace ID and workspace key for the Log Analytics workspace t
Intune is a Microsoft product that allows you to centrally manage the Log Analytics workspace configuration settings that are applied to one or more of your devices. Follow these steps to configure your devices through Intune:
-1. Sign in to Intune.
-2. Navigate to **Settings** > **Connected Sources**.
-3. Create or edit a policy based on the Surface Hub template.
-4. Navigate to the Azure Operational Insights section of the policy, and add the Log Analytics *Workspace ID* and *Workspace Key* to the policy.
-5. Save the policy.
-6. Associate the policy with the appropriate group of devices.
+1. Sign in to [Microsoft Endpoint Manager Admin Center](https://endpoint.microsoft.com/).
+2. Go to **Devices** > **Configuration profiles**.
+3. Create a new Windows 10 profile, and then select **templates**.
+4. In the list of templates, select **Device restrictions (Windows 10 Team)**.
+5. Enter a name and description for the profile.
+6. For **Azure Operational Insights**, select **Enable**.
+7. Enter the Log Analytics **Workspace ID** and enter the **Workspace Key** for the policy.
+8. Assign the policy to your group of Surface Hub devices and save the policy.
- ![Intune policy](./media/surface-hubs/intune.png)
+ :::image type="content" source="./media/surface-hubs/intune.png" alt-text="Screenshot that shows setting an Intune policy.":::
Intune then syncs the Log Analytics settings with the devices in the target group, enrolling them in your Log Analytics workspace.
If you don't use Intune to manage your environment, you can enroll devices manua
2. Enter the device admin credentials when prompted. 3. Click **This device**, and the under **Monitoring**, click **Configure Log Analytics Settings**. 4. Select **Enable monitoring**.
-5. In the Log Analytics settings dialog, type the Log Analytics **Workspace ID** and type the **Workspace Key**.
- ![Screenshot shows the Microsoft Operations Manager Suite settings with Enable monitoring selected and text boxes for Workspace ID and Workspace Key.](./media/surface-hubs/settings.png)
-6. Click **OK** to complete the configuration.
+5. In the Log Analytics settings dialog, enter the Log Analytics **Workspace ID** and the **Workspace Key**.
+
+ ![Screenshot shows the Microsoft Operations Manager Suite settings with Enable monitoring selected and text boxes for Workspace ID and Workspace Key.](./media/surface-hubs/settings.png)
+1. Click **OK** to complete the configuration.
A confirmation appears telling you whether or not the configuration was successfully applied to the device. If it was, a message appears stating that the agent successfully connected to Azure Monitor. The device then starts sending data to Azure Monitor where you can view and act on it.
azure-monitor Log Excel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/log-excel.md
description: Get a Log Analytics query into Excel and refresh results inside Exc
Previously updated : 11/03/2020 Last updated : 06/10/2021 # Integrate Log Analytics and Excel
-You can integrate Azure Monitor Log Analytics and Microsoft Excel using M query and the Log Analytics API. This integration allows you to send up to 500,000 records to Excel as long as the total volume of the results doesnΓÇÖt exceed 61MiB.
+You can integrate Azure Monitor Log Analytics and Microsoft Excel using M query and the Log Analytics API. This integration allows you to send up to a certain number of records and MB of data. These limits are documented in the [Azure Monitor Log Analytics workspace limits](../service-limits.md#log-analytics-workspaces) in the Azure portal section.
> [!NOTE] > Because Excel is a local client application, local hardware and software limitations impact it's performance and ability to process large sets of data. ## Create your M query in Log Analytics
-1. **Create and run your query** in Log analytics as you normally would. DonΓÇÖt worry if you hit the 10,000 records limitation in the user interface. We recommend you use relative dates - like the ΓÇÿagoΓÇÖ function or the UI time picker - so Excel refreshes the right set of data.
+1. **Create and run your query** in Log Analytics as you normally would. Don't worry if you hit the number of records limitation in the user interface. We recommend you use relative dates - like the 'ago' function or the UI time picker - so Excel refreshes the right set of data.
2. **Export Query** - Once you are happy with the query and its results, export the query to M using Log Analytics **Export to Power BI (M query)** menu choice under the *Export* menu:
You can refresh your data directly from Excel. In the **Data** menu group in the
## Next steps
-For more information about ExcelΓÇÖs integrations with external data sources, see [Import data from external data sources (Power Query)](https://support.office.com/article/import-data-from-external-data-sources-power-query-be4330b3-5356-486c-a168-b68e9e616f5a)
+For more information about Excel's integrations with external data sources, see [Import data from external data sources (Power Query)](https://support.office.com/article/import-data-from-external-data-sources-power-query-be4330b3-5356-486c-a168-b68e9e616f5a)
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
In addition to the Pay-As-You-Go model, Log Analytics has **Commitment Tiers**,
> [!NOTE] > Starting June 2, 2021, **Capacity Reservations** are now called **Commitment Tiers**. Data collected above your commitment tier level (overage) is now billed at the same price-per-GB as the current commitment tier level, lowering costs compared to the old method of billing at the Pay-As-You-Go rate, and reducing the need for users with large data volumes to fine-tune their commitment level. There are also three new larger commitment tiers: 1000, 2000, and 5000 GB/day.
-In all pricing tiers, an event's data size is calculated from a string representation of the properties that are stored in Log Analytics for this event, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any [custom fields](custom-fields.md) that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some [Log Analytics Standard Properties](./log-standard-columns.md), are excluded in the calculation of the event size. This includes `_ResourceId`, `_SubscriptionId`, `_ItemId`, `_IsBillable`, `_BilledSize` and `Type`. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges; for example, the AzureActivity, Heartbeat, and Usage types. To determine whether an event was excluded from billing for data ingestion, you can use the `_IsBillable` property as shown [below](#data-volume-for-specific-events). Usage is reported in GB (1.0E9 bytes).
+In all pricing tiers, an event's data size is calculated from a string representation of the properties that are stored in Log Analytics for this event, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any [custom fields](custom-fields.md) that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some [Log Analytics Standard Properties](./log-standard-columns.md), are excluded in the calculation of the event size. This includes `_ResourceId`, `_SubscriptionId`, `_ItemId`, `_IsBillable`, `_BilledSize` and `Type`. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges altogether, for example the [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat), [Usage](/azure/azure-monitor/reference/tables/usage) and [Operation](/azure/azure-monitor/reference/tables/operation) types. To determine whether an event was excluded from billing for data ingestion, you can use the [_IsBillable](log-standard-columns.md#_isbillable) property as shown [below](#data-volume-for-specific-events). Usage is reported in GB (1.0E9 bytes).
+ Also, some solutions, such as [Azure Defender (Security Center)](https://azure.microsoft.com/pricing/details/azure-defender/), [Azure Sentinel](https://azure.microsoft.com/pricing/details/azure-sentinel/), and [Configuration management](https://azure.microsoft.com/pricing/details/automation/) have their own pricing models.
azure-monitor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Monitor description: Sample Azure Resource Graph queries for Azure Monitor showing use of resource types and tables to access Azure Monitor related resources and properties. Previously updated : 07/07/2021 Last updated : 07/21/2021
azure-monitor Vminsights Health Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-health-enable.md
az deployment group create --name GuestHealthDeployment --resource-group my-reso
"publisher": "Microsoft.Azure.Monitor", "type": "AzureMonitorWindowsAgent", "typeHandlerVersion": "1.0",
- "autoUpgradeMinorVersion": false
+ "autoUpgradeMinorVersion": true
}, "linux": { "publisher": "Microsoft.Azure.Monitor", "type": "AzureMonitorLinuxAgent",
- "typeHandlerVersion": "1.5",
- "autoUpgradeMinorVersion": false
+ "typeHandlerVersion": "1.0",
+ "autoUpgradeMinorVersion": true
} } },
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-volumes-dual-protocol.md
na ms.devlang: na Previously updated : 07/12/2021 Last updated : 07/21/2021 # Create a dual-protocol (NFSv3 and SMB) volume for Azure NetApp Files
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
The **Allow local NFS users with LDAP** option in Active Directory connections enables local NFS client users not present on the Windows LDAP server to access a dual-protocol volume that has LDAP with extended groups enabled. > [!NOTE]
-> Before enabling this option, you should understand the [considerations](#considerations).
+> Before enabling this option, you should understand the [considerations](#considerations).
+> The **Allow local NFS users with LDAP** option is part of the **LDAP with extended groups** feature and requires registration. See [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) for details.
1. Click **Active Directory connections**. On an existing Active Directory connection, click the context menu (the three dots `…`), and select **Edit**.
Follow instructions in [Configure an NFS client for Azure NetApp Files](configur
* [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md) * [Configure ADDS LDAP over TLS for Azure NetApp Files](configure-ldap-over-tls.md)
+* [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md)
* [Troubleshoot SMB or dual-protocol volumes](troubleshoot-dual-protocol-volumes.md) * [Troubleshoot LDAP volume issues](troubleshoot-ldap-volumes.md)
azure-resource-manager Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-graph-samples.md
+
+ Title: Azure Resource Graph sample queries for Azure Resource Manager
+description: Sample Azure Resource Graph queries for Azure Resource Manager showing use of resource types and tables to access Azure Resource Manager related resources and properties.
Last updated : 07/21/2021+++
+# Azure Resource Graph sample queries for Azure Resource Manager
+
+This page is a collection of [Azure Resource Graph](../../governance/resource-graph/overview.md)
+sample queries for Azure Resource Manager. For a complete list of Azure Resource Graph samples, see
+[Resource Graph samples by Category](../../governance/resource-graph/samples/samples-by-category.md)
+and [Resource Graph samples by Table](../../governance/resource-graph/samples/samples-by-table.md).
+
+## Sample queries for tags
++
+## Next steps
+
+- Learn more about the [query language](../../governance/resource-graph/concepts/query-language.md).
+- Learn more about how to [explore resources](../../governance/resource-graph/concepts/explore-resources.md).
+- See samples of [Starter language queries](../../governance/resource-graph/samples/starter.md).
+- See samples of [Advanced language queries](../../governance/resource-graph/samples/advanced.md).
azure-resource-manager Tag Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/tag-policies.md
Title: Policies for tagging resources
-description: Describes the Azure Policies that you can assign to ensure tag compliance.
+ Title: Policy definitions for tagging resources
+description: Describes the Azure Policy definitions that you can assign to ensure tag compliance.
Previously updated : 03/20/2020 Last updated : 07/21/2021
-# Assign policies for tag compliance
+# Assign policy definitions for tag compliance
-You use [Azure Policy](../../governance/policy/overview.md) to enforce tagging rules and conventions. By creating a policy, you avoid the scenario of resources being deployed to your subscription that don't have the expected tags for your organization. Instead of manually applying tags or searching for resources that aren't compliant, you create a policy that automatically applies the needed tags during deployment. Tags can also now be applied to existing resources with the new [Modify](../../governance/policy/concepts/effects.md#modify) effect and a [remediation task](../../governance/policy/how-to/remediate-resources.md). The following section shows example policies for tags.
+You use [Azure Policy](../../governance/policy/overview.md) to enforce tagging rules and conventions. By creating a policy, you avoid the scenario of resources being deployed to your subscription that don't have the expected tags for your organization. Instead of manually applying tags or searching for resources that aren't compliant, you create a policy that automatically applies the needed tags during deployment. Tags can also now be applied to existing resources with the new [Modify](../../governance/policy/concepts/effects.md#modify) effect and a [remediation task](../../governance/policy/how-to/remediate-resources.md). The following section shows example policy definitions for tags.
-## Policies
+## Policy definitions
[!INCLUDE [Tag policies](../../../includes/policy/reference/bycat/policies-tags.md)]
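+
+After you pick a definition from the list above, assigning it is a single CLI call. The following is a minimal sketch, assuming the built-in definition named *Require a tag on resources* and a hypothetical `CostCenter` tag; substitute your own definition and scope.
+
+```console
+# Look up the built-in definition by its display name (returns the definition's GUID name)
+defName=$(az policy definition list \
+  --query "[?displayName=='Require a tag on resources'].name" -o tsv)
+
+# Assign it at resource group scope so new resources must carry a CostCenter tag
+az policy assignment create \
+  --name require-costcenter-tag \
+  --resource-group myResourceGroup \
+  --policy "$defName" \
+  --params '{ "tagName": { "value": "CostCenter" } }'
+```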
azure-sql Sql Server To Sql Database Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-overview.md
For more assistance, see the following resources that were developed for real-wo
|Asset |Description | |||
-|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and an application/database remediation level for a workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform decision process for target platforms.|
+|[Data workload assessment model and tool](https://www.microsoft.com/download/details.aspx?id=103130)| This tool provides suggested "best fit" target platforms, cloud readiness, and an application/database remediation level for a workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform decision process for target platforms.|
|[DBLoader utility](https://github.com/microsoft/DataMigrationTeam/tree/master/DBLoader%20Utility)|You can use DBLoader to load data from delimited text files into SQL Server. This Windows console utility uses the SQL Server native client bulk-load interface. The interface works on all versions of SQL Server, along with Azure SQL Database.|
-|[Bulk database creation with PowerShell](https://github.com/Microsoft/DataMigrationTeam/tree/master/Bulk%20Database%20Creation%20with%20PowerShell)|You can use a set of three PowerShell scripts that create a resource group (create_rg.ps1), the [logical server in Azure](../../database/logical-servers.md) (create_sqlserver.ps1), and a SQL database (create_sqldb.ps1). The scripts include loop capabilities so you can iterate and create as many servers and databases as necessary.|
-|[Bulk schema deployment with MSSQL-Scripter and PowerShell](https://github.com/Microsoft/DataMigrationTeam/tree/master/Bulk%20Schema%20Deployment%20with%20MSSQL-Scripter%20&%20PowerShell)|This asset creates a resource group, creates one or multiple [logical servers in Azure](../../database/logical-servers.md) to host Azure SQL Database, exports every schema from an on-premises SQL Server instance (or multiple SQL Server 2005+ instances), and imports the schemas to Azure SQL Database.|
-|[Convert SQL Server Agent jobs into elastic database jobs](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Convert%20SQL%20Server%20Agent%20Jobs%20into%20Elastic%20Database%20Jobs)|This script migrates your source SQL Server Agent jobs to elastic database jobs.|
+|[Bulk database creation with PowerShell](https://www.microsoft.com/download/details.aspx?id=103107)|You can use a set of three PowerShell scripts that create a resource group (create_rg.ps1), the [logical server in Azure](../../database/logical-servers.md) (create_sqlserver.ps1), and a SQL database (create_sqldb.ps1). The scripts include loop capabilities so you can iterate and create as many servers and databases as necessary.|
+|[Bulk schema deployment with MSSQL-Scripter and PowerShell](https://www.microsoft.com/download/details.aspx?id=103032)|This asset creates a resource group, creates one or multiple [logical servers in Azure](../../database/logical-servers.md) to host Azure SQL Database, exports every schema from an on-premises SQL Server instance (or multiple SQL Server 2005+ instances), and imports the schemas to Azure SQL Database.|
+|[Convert SQL Server Agent jobs into elastic database jobs](https://www.microsoft.com/download/details.aspx?id=103123)|This script migrates your source SQL Server Agent jobs to elastic database jobs.|
|[Send emails from Azure SQL Database](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/AF%20SendMail)|This solution is an alternative to SendMail capability and is available for on-premises SQL Server. It uses Azure Functions and the SendGrid service to send emails from Azure SQL Database.|
-|[Utility to move on-premises SQL Server logins to Azure SQL Database](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/MoveLogins)|A PowerShell script can create a T-SQL command script to re-create logins and select database users from on-premises SQL Server to Azure SQL Database. The tool allows automatic mapping of Windows Server Active Directory accounts to Azure AD accounts, along with optionally migrating SQL Server native logins.|
-|[Perfmon data collection automation by using Logman](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Perfmon%20Data%20Collection%20Automation%20Using%20Logman)|You can use the Logman tool to collect Perfmon data (to help you understand baseline performance) and get migration target recommendations. This tool uses logman.exe to create the command that will create, start, stop, and delete performance counters set on a remote SQL Server instance.|
+|[Utility to move on-premises SQL Server logins to Azure SQL Database](https://www.microsoft.com/download/details.aspx?id=103111)|A PowerShell script can create a T-SQL command script to re-create logins and select database users from on-premises SQL Server to Azure SQL Database. The tool allows automatic mapping of Windows Server Active Directory accounts to Azure AD accounts, along with optionally migrating SQL Server native logins.|
+|[Perfmon data collection automation by using Logman](https://www.microsoft.com/download/details.aspx?id=103114)|You can use the Logman tool to collect Perfmon data (to help you understand baseline performance) and get migration target recommendations. This tool uses logman.exe to create the command that will create, start, stop, and delete performance counters set on a remote SQL Server instance.|
|[Database migration to Azure SQL Database by using BACPAC](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Database%20migrations%20-%20Benchmarks%20and%20Steps%20to%20Import%20to%20Azure%20SQL%20DB%20Single%20Database%20from%20BACPAC.pdf)|This white paper provides guidance and steps to help accelerate migrations from SQL Server to Azure SQL Database by using BACPAC files.| The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform. - ## Next steps - To start migrating your SQL Server databases to Azure SQL Database, see the [SQL Server to Azure SQL Database migration guide](sql-server-to-sql-database-guide.md).
azure-sql Sql Server To Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-overview.md
For more assistance, see the following resources that were developed for real-wo
|Asset |Description | |||
-|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and an application/database remediation level for a workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform decision process for target platforms.|
+|[Data workload assessment model and tool](https://www.microsoft.com/download/details.aspx?id=103130)| This tool provides suggested "best fit" target platforms, cloud readiness, and an application/database remediation level for a workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform decision process for target platforms.|
|[DBLoader utility](https://github.com/microsoft/DataMigrationTeam/tree/master/DBLoader%20Utility)|You can use DBLoader to load data from delimited text files into SQL Server. This Windows console utility uses the SQL Server native client bulk-load interface. The interface works on all versions of SQL Server, along with Azure SQL Managed Instance.|
-|[Utility to move on-premises SQL Server logins to Azure SQL Managed Instance](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/MoveLogins)|A PowerShell script can create a T-SQL command script to re-create logins and select database users from on-premises SQL Server to Azure SQL Managed Instance. The tool allows automatic mapping of Windows Server Active Directory accounts to Azure AD accounts, along with optionally migrating SQL Server native logins.|
-|[Perfmon data collection automation by using Logman](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Perfmon%20Data%20Collection%20Automation%20Using%20Logman)|You can use the Logman tool to collect Perfmon data (to help you understand baseline performance) and get migration target recommendations. This tool uses logman.exe to create the command that will create, start, stop, and delete performance counters set on a remote SQL Server instance.|
+|[Utility to move on-premises SQL Server logins to Azure SQL Managed Instance](https://www.microsoft.com/download/details.aspx?id=103111)|A PowerShell script can create a T-SQL command script to re-create logins and select database users from on-premises SQL Server to Azure SQL Managed Instance. The tool allows automatic mapping of Windows Server Active Directory accounts to Azure AD accounts, along with optionally migrating SQL Server native logins.|
+|[Perfmon data collection automation by using Logman](https://www.microsoft.com/download/details.aspx?id=103114)|You can use the Logman tool to collect Perfmon data (to help you understand baseline performance) and get migration target recommendations. This tool uses logman.exe to create the command that will create, start, stop, and delete performance counters set on a remote SQL Server instance.|
|[Database migration to Azure SQL Managed Instance by restoring full and differential backups](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Database%20migrations%20to%20Azure%20SQL%20DB%20Managed%20Instance%20-%20%20Restore%20with%20Full%20and%20Differential%20backups.pdf)|This white paper provides guidance and steps to help accelerate migrations from SQL Server to Azure SQL Managed Instance if you have only full and differential backups (and no log backup capability).| The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
azure-sql Sql Server To Sql On Azure Vm Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md
For additional assistance, see the following resources that were developed for r
|Asset |Description | |||
-|[Data workload assessment model and tool](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing and automated and uniform target platform decision process.|
-|[Perfmon data collection automation using Logman](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Perfmon%20Data%20Collection%20Automation%20Using%20Logman)|A tool that collects Perform data to understand baseline performance that assists in the migration target recommendation. This tool that uses logman.exe to create the command that will create, start, stop, and delete performance counters set on a remote SQL Server.|
+|[Data workload assessment model and tool](https://www.microsoft.com/download/details.aspx?id=103130)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.|
+|[Perfmon data collection automation using Logman](https://www.microsoft.com/download/details.aspx?id=103114)|A tool that collects Perfmon data to understand baseline performance and assists in the migration target recommendation. This tool uses logman.exe to create the command that will create, start, stop, and delete performance counters set on a remote SQL Server (see the sketch after this table).|
|[SQL Server Deployment in Azure](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/SQL%20Server%20Deployment%20in%20Azure%20.pdf)|This guidance whitepaper assists in reviewing various options to move your SQL Server workloads to Azure including feature comparison, high availability and backup / storage considerations. | |[On-Premise SQL Server to Azure virtual machine](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/OnPremise%20SQL%20Server%20to%20Azure%20VM.pdf)|This whitepaper outlines the steps to backup and restore databases from on-premises SQL Server to SQL Server on Azure virtual machine using sample scripts.|
-|[Multiple-SQL-VM-VNet-ILB](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/ARM%20Templates/Multiple-SQL-VM-VNet-ILB)|This whitepaper outlines the steps to setup multiple Azure virtual machines in a SQL Server Always On Availability Group configuration.|
-|[Azure virtual machines supporting Ultra SSD per Region](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Find%20Azure%20VMs%20supporting%20Ultra%20SSD)|These PowerShell scripts provide a programmatic option to retrieve the list of regions that support Azure virtual machines supporting Ultra SSDs.|
+|[Multiple-SQL-VM-VNet-ILB](https://www.microsoft.com/download/details.aspx?id=103104)|This whitepaper outlines the steps to set up multiple Azure virtual machines in a SQL Server Always On Availability Group configuration.|
+|[Azure virtual machines supporting Ultra SSD per Region](https://www.microsoft.com/download/details.aspx?id=103105)|These PowerShell scripts provide a programmatic option to retrieve the list of regions that support Azure virtual machines supporting Ultra SSDs.|
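+
+To give a sense of what the Logman asset above automates, the following is a minimal hand-rolled sketch of the same idea: a counter set created on a remote SQL Server with `logman.exe`. The server name, counters, and output path are placeholders.
+
+```console
+REM Create a counter set named SQLBaseline on the remote server, sampling every 15 seconds
+logman create counter SQLBaseline -s RemoteSqlServer01 -si 15 -c "\Memory\Available MBytes" "\PhysicalDisk(_Total)\Avg. Disk sec/Read" -o C:\PerfLogs\SQLBaseline
+
+REM Start the collection, stop it after the baseline window, then remove the counter set
+logman start SQLBaseline -s RemoteSqlServer01
+logman stop SQLBaseline -s RemoteSqlServer01
+logman delete SQLBaseline -s RemoteSqlServer01
+```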
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
azure-vmware Azure Vmware Solution Horizon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-vmware-solution-horizon.md
Here, we focus specifically on deploying Horizon on Azure VMware Solution. For g
With Horizon's introduction on Azure VMware Solution, there are now two Virtual Desktop Infrastructure (VDI) solutions on the Azure platform. The following diagram summarizes the key differences at a high level. Horizon 2006 and later versions on the Horizon 8 release line support both on-premises deployment and Azure VMware Solution deployment. There are a few Horizon features that are supported on-premises but not on Azure VMware Solution. Other products in the Horizon ecosystem are also supported. For more information, see [feature parity and interoperability](https://kb.vmware.com/s/article/80850).
Given the Azure private cloud and SDDC max limit, we recommend a deployment arch
The connection from Azure Virtual Network to the Azure private clouds / SDDCs should be configured with ExpressRoute FastPath. The following diagram shows a basic Horizon pod deployment. ## Network connectivity to scale Horizon on Azure VMware Solution
This section lays out the network architecture at a high level with some common
### Single Horizon pod on Azure VMware Solution A single Horizon pod is the most straightforward deployment scenario because you deploy just one Horizon pod in the US East region. Since each private cloud and SDDC is estimated to handle 4,000 desktop sessions, you deploy the maximum Horizon pod size. You can plan the deployment of up to three private clouds/SDDCs.
A variation on the basic example might be to support connectivity for on-premise
The diagram shows how to support connectivity for on-premises resources. To connect your corporate network to the Azure Virtual Network, you'll need an ExpressRoute circuit. You'll also need to connect your corporate network with each of the private clouds and SDDCs using ExpressRoute Global Reach, which allows connectivity from the SDDCs to the ExpressRoute circuit and on-premises resources. ### Multiple Horizon pods on Azure VMware Solution across multiple regions
You'll connect the Azure Virtual Network in each region to the private clouds/SD
The same principles apply if you deploy two Horizon pods in the same region. Make sure to deploy the second Horizon pod in a *separate Azure Virtual Network*. Just like the single pod example, you can connect your corporate network and on-premises pod to this multi-pod/region example using ExpressRoute and Global Reach. ## Size Azure VMware Solution hosts for Horizon deployments
azure-vmware Configure Dhcp Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-dhcp-azure-vmware-solution.md
Use a DHCP relay for any non-NSX-based DHCP service. For example, a VM running D
1. Select **Tier 1 Gateways**, select the vertical ellipsis on the Tier-1 gateway, and then select **Edit**.
- :::image type="content" source="./media/manage-dhcp/edit-tier-1-gateway-relay.png" alt-text="Screenshot showing how to edit the Tier-1 Gateway." border="true":::
+ :::image type="content" source="./media/manage-dhcp/edit-tier-1-gateway.png" alt-text="Screenshot showing how to edit the Tier-1 Gateway." border="true":::
1. Select **No IP Allocation Set** to define the IP address allocation.
- :::image type="content" source="./media/manage-dhcp/edit-ip-address-allocation.png" alt-text="Screenshot showing how to add a subnet to the Tier-1 Gateway." border="true":::
+ :::image type="content" source="./media/manage-dhcp/add-subnet.png" alt-text="Screenshot showing how to add a subnet to the Tier-1 Gateway." border="true":::
1. For **Type**, select **DHCP Server**.
azure-vmware Configure L2 Stretched Vmware Hcx Networks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-l2-stretched-vmware-hcx-networks.md
DHCP does not work for virtual machines (VMs) on the VMware HCX L2 stretch netwo
1. Take note of the destination network name.
- :::image type="content" source="media/manage-dhcp/hcx-find-destination-network.png" alt-text="Screenshot of a network extension in VMware vSphere Client" lightbox="media/manage-dhcp/hcx-find-destination-network.png":::
+ :::image type="content" source="media/manage-dhcp/hcx-find-destination-network.png" alt-text="Screenshot of a network extension in VMware vSphere Client." lightbox="media/manage-dhcp/hcx-find-destination-network.png":::
1. In NSX-T Manager, select **Networking** > **Segments** > **Segment Profiles**. 1. Select **Add Segment Profile** and then **Segment Security**.
- :::image type="content" source="media/manage-dhcp/add-segment-profile.png" alt-text="Screenshot of how to add a segment profile in NSX-T" lightbox="media/manage-dhcp/add-segment-profile.png":::
+ :::image type="content" source="media/manage-dhcp/add-segment-profile.png" alt-text="Screenshot of how to add a segment profile in NSX-T." lightbox="media/manage-dhcp/add-segment-profile.png":::
1. Provide a name and a tag, and then set the **BPDU Filter** toggle to ON and all the DHCP toggles to OFF.
- :::image type="content" source="media/manage-dhcp/add-segment-profile-bpdu-filter-dhcp-options.png" alt-text="Screenshot showing the BPDU Filter toggled on and the DHCP toggles off" lightbox="media/manage-dhcp/add-segment-profile-bpdu-filter-dhcp-options.png":::
+ :::image type="content" source="media/manage-dhcp/add-segment-profile-bpdu-filter-dhcp-options.png" alt-text="Screenshot showing the BPDU Filter toggled on and the DHCP toggles off." lightbox="media/manage-dhcp/add-segment-profile-bpdu-filter-dhcp-options.png":::
- :::image type="content" source="media/manage-dhcp/edit-segment-security.png" alt-text="Screenshot of the Segment Security field" lightbox="media/manage-dhcp/edit-segment-security.png":::
+ :::image type="content" source="media/manage-dhcp/edit-segment-security.png" alt-text="Screenshot of the Segment Security field." lightbox="media/manage-dhcp/edit-segment-security.png":::
azure-vmware Fix Deployment Provisioning Failures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/fix-deployment-provisioning-failures.md
To create a support request for an Azure VMware Solution deployment or provision
1. In the Azure portal, select the **Help** icon, and then select **New support request**.
- :::image type="content" source="media/fix-deployment-provisioning-failures/open-sr-on-avs.png" alt-text="Screenshot of the New support request pane in the Azure portal.":::
+ :::image type="content" source="media/fix-deployment-provisioning-failures/open-support-request.png" alt-text="Screenshot of the New support request pane in the Azure portal.":::
1. Enter or select the required information:
azure-vmware Tutorial Expressroute Global Reach Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-expressroute-global-reach-private-cloud.md
In this step of the quick start, you'll connect Azure VMware Solution to your on
## Create an ExpressRoute auth key in the on-premises ExpressRoute circuit
+The circuit owner creates an authorization, which creates an authorization key to be used by a circuit user to connect their virtual network gateways to the ExpressRoute circuit. An authorization is valid for only one connection.
+
+> [!NOTE]
+> Each connection requires a separate authorization.
+ 1. From the **ExpressRoute circuits** blade, under Settings, select **Authorizations**. 1. Enter the name for the authorization key and select **Save**.
In this step of the quick start, you'll connect Azure VMware Solution to your on
Once created, the new key appears in the list of authorization keys for the circuit.
-1. Make a note of the authorization key and the ExpressRoute ID. You'll use them in the next step to complete the peering.
+1. Copy the authorization key and the ExpressRoute ID. You'll use them in the next step to complete the peering.
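+
+If you prefer to script this step, the following is a minimal Azure CLI sketch of the same flow; the circuit name, resource group, and authorization name are placeholders for your on-premises circuit.
+
+```console
+# Create an authorization on the on-premises ExpressRoute circuit
+az network express-route auth create \
+  --circuit-name MyOnPremCircuit \
+  --resource-group MyCircuitResourceGroup \
+  --name AvsAuthorization
+
+# Retrieve the authorization key and the circuit resource ID needed for the peering
+az network express-route auth show \
+  --circuit-name MyOnPremCircuit \
+  --resource-group MyCircuitResourceGroup \
+  --name AvsAuthorization \
+  --query authorizationKey -o tsv
+
+az network express-route show \
+  --name MyOnPremCircuit \
+  --resource-group MyCircuitResourceGroup \
+  --query id -o tsv
+```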
## Peer private cloud to on-premises Now that you've created an authorization key for the private cloud ExpressRoute circuit, you can peer it with your on-premises ExpressRoute circuit. The peering is done from the on-premises ExpressRoute circuit in the **Azure portal**. You'll use the resource ID (ExpressRoute circuit ID) and authorization key of your private cloud ExpressRoute circuit to finish the peering.
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
# Speech Service release notes
+## Text-to-speech 2021-June release
+
+**Speech Studio updates**
+
+- **Custom Neural Voice**: Custom Neural Voice training extended to support South East Asia. New features released to support data upload status checking.
+- **Audio Content Creation**: Released a new feature to support custom lexicon. With this feature, users can easily create their lexicon files and define the customized pronunciation for their audio output.
+ ## Text-to-speech 2021-May release **New languages and voices added for neural TTS**
More samples have been added and are constantly being updated. For the latest se
## Cognitive Services Speech SDK 0.2.12733: 2018-May release
-This release is the first public preview release of the Cognitive Services Speech SDK.
+This release is the first public preview release of the Cognitive Services Speech SDK.
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
Previously updated : 04/07/2021 Last updated : 07/16/2021
For the usage with [Speech SDK](speech-sdk.md) and/or [Speech-to-text REST API f
<sup>2</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increasing-online-transcription-concurrent-request-limit).<br/> ### Text-to-Speech Quotas and limits per Speech resource
-In the table below Parameters without "Adjustable" row are **not** adjustable for all price tiers.
-
-| Quota | Free (F0)<sup>3</sup> | Standard (S0) |
-|--||--|
-| **Max number of Transactions per Second (TPS) for Standard and Neural voices** | 200<sup>4</sup> | 200<sup>4</sup> |
-| **Concurrent Request limit for Custom voice** | | |
-| Default value | 10 | 10 |
-| Adjustable | No<sup>5</sup> | Yes<sup>5</sup> |
-| **HTTP-specific quotas** | | |
-| Max Audio length produced per request | 10 min | 10 min |
-| Max number of distinct `<voice>` tags in SSML | 50 | 50 |
-| **Websocket specific quotas** | | |
-| Max Audio length produced per turn | 10 min | 10 min |
-| Max SSML Message size per turn | 64 KB | 64 KB |
+In the tables below, parameters without an "Adjustable" row are **not** adjustable for all price tiers.
+
+#### General
+| Quota | Free (F0)<sup>3</sup> | Standard (S0) |
+|--|--|--|
+| **Max number of Transactions per Second (TPS) per Speech resource** | | |
+| Real-time API. Standard, Neural, Custom, and Custom Neural voices | 200<sup>4</sup> | 200<sup>4</sup> |
+| Adjustable | No<sup>4</sup> | No<sup>4</sup> |
+| **HTTP-specific quotas** | | |
+| Max Audio length produced per request | 10 min | 10 min |
+| Max total number of distinct `<voice>` and `<audio>` tags in SSML | 50 | 50 |
+| **Websocket specific quotas** | | |
+| Max Audio length produced per turn | 10 min | 10 min |
+| Max total number of distinct `<voice>` and `<audio>` tags in SSML | 50 | 50 |
+| Max SSML Message size per turn | 64 KB | 64 KB |
+
+#### Long Audio API
+
+| Quota | Free (F0)<sup>3</sup> | Standard (S0) |
+|--|--|--|
+| Min text length | N/A | 400 characters for plain text; 400 [billable characters](text-to-speech.md#pricing-note) for SSML |
+| Max text length | N/A | 10000 paragraphs |
+| Start time | N/A | 10 tasks or 10000 characters accumulated |
+
+#### Custom Neural Voice and Custom Voice<sup>6</sup>
+
+| Quota | Free (F0)<sup>3</sup> | Standard (S0) |
+|--|--|--|
+| Max number of Transactions per Second (TPS) per Speech resource | [See above](#general) | [See above](#general) |
+| Max number of data sets per Speech resource | 10 | 500 |
+| Max number of simultaneous dataset upload per Speech resource | 2 | 5 |
+| Max data file size for data import per dataset | 2 GB | 2 GB |
+| Upload of long audios or audios without script | No | Yes |
+| Max number of simultaneous model trainings per Speech resource | 1 (Custom Voice<sup>6</sup> only) | 3 |
+| Max number of custom endpoints per Speech resource | 1 (Custom Voice<sup>6</sup> only) | 50 |
+| **Concurrent Request limit for Custom Neural voice** | | |
+| Default value | N/A | 10 |
+| Adjustable | N/A | Yes<sup>5</sup> |
+| **Concurrent Request limit for Custom voice<sup>6</sup>** | | |
+| Default value | 10 | 10 |
+| Adjustable | No<sup>5</sup> | Yes<sup>5</sup> |
<sup>3</sup> For **Free (F0)** pricing tier see also monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).<br/> <sup>4</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices) and [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling).<br/>
-<sup>5</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#text-to-speech-increasing-transcription-concurrent-request-limit-for-custom-voice).<br/>
+<sup>5</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#text-to-speech-increasing-concurrent-request-limit-for-custom-neural-and-custom-voices).<br/>
+<sup>6</sup> Custom Voice is being deprecated and not available for newly created Speech resources. See [additional information](how-to-custom-voice.md#migrate-to-custom-neural-voice).<br/>
## Detailed description, Quota adjustment, and best practices
-Before requesting a quota increase (where applicable) ensure that it is necessary. Speech service is using autoscaling technologies to bring the required computational resources in "on-demand" mode and at the same time to keep the customer costs low by not maintaining an excessive amount of hardware capacity. Every time your application receives a Response Code 429 ("Too many requests") while your workload is within the defined limits (see [Quotas and Limits quick reference](#quotas-and-limits-quick-reference)) the most likely explanation is that the Service is scaling up to your demand and did not reach the required scale yet, thus does not immediately have enough resources to serve the request. This state is usually transient and should not last long.
+Before requesting a quota increase (where applicable), ensure that it is necessary. Speech service uses autoscaling technologies to bring the required computational resources online on demand while keeping customer costs low by not maintaining an excessive amount of hardware capacity. Every time your application receives Response Code 429 ("Too many requests") while your workload is within the defined limits (see [Quotas and Limits quick reference](#quotas-and-limits-quick-reference)), the most likely explanation is that the Service is scaling up to meet your demand and has not yet reached the required scale, so it does not immediately have enough resources to serve the request. This state is usually transient and should not last long.
### General best practices to mitigate throttling during autoscaling To minimize issues related to throttling (Response Code 429), we recommend using the following techniques:
To minimize issues related to throttling (Response Code 429), we recommend using
*Example.* Your application is using Text-to-Speech and your current workload is 5 TPS (transactions per second). The next second you increase the load to 20 TPS (that is four times more). The Service immediately starts scaling up to fulfill the new load, but likely it will not be able to do it within a second, so some of the requests will get Response Code 429. - Test different load increase patterns - See [Speech-to-Text example](#speech-to-text-example-of-a-workload-pattern-best-practice)-- Create additional Speech resources in the same or different Regions and distribute the workload among them using "Round Robin" technique. This is especially important for **Text-to-Speech TPS (transactions per second)** parameter, which is set as 200 per Speech Resource and can not be adjusted
+- Create additional Speech resources in the same or different Regions and distribute the workload among them using the "Round Robin" technique. This is especially important for the **Text-to-Speech TPS (transactions per second)** parameter, which is set to 200 per Speech resource and cannot be adjusted
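+
+One more client-side mitigation, not covered in the list above, is to retry with exponential backoff when a 429 is returned. The following is a minimal sketch, assuming a bash client calling the text-to-speech REST endpoint with `curl`; the region, key, and SSML file are placeholders, and production applications would normally build this into their SDK or application code.
+
+```console
+# Retry on HTTP 429 with exponential backoff (1s, 2s, 4s, 8s, 16s)
+for attempt in 1 2 3 4 5; do
+  status=$(curl -s -o audio.wav -w "%{http_code}" \
+    -X POST "https://westus.tts.speech.microsoft.com/cognitiveservices/v1" \
+    -H "Ocp-Apim-Subscription-Key: $SPEECH_KEY" \
+    -H "Content-Type: application/ssml+xml" \
+    -H "X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm" \
+    --data @request.ssml)
+  if [ "$status" != "429" ]; then
+    echo "Finished with HTTP $status"
+    break
+  fi
+  sleep $((2 ** (attempt - 1)))   # back off before the next attempt
+done
+```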
The next sections describe specific cases of adjusting quotas.<br/>
-Jump to [Text-to-Speech. Increasing Transcription Concurrent Request limit for Custom voice](#text-to-speech-increasing-transcription-concurrent-request-limit-for-custom-voice)
+Jump to [Text-to-speech: increasing concurrent request limit for Custom Neural and Custom Voices](#text-to-speech-increasing-concurrent-request-limit-for-custom-neural-and-custom-voices)
### Speech-to-text: increasing online transcription concurrent request limit
-By default the number of concurrent requests is limited to 100 per Speech resource (Base model) and to 100 per Custom endpoint (Custom model). For Standard pricing tier this amount can be increased. Before submitting the request, ensure you are familiar with the material in [this section](#detailed-description-quota-adjustment-and-best-practices) and aware of these [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling).
+By default the number of concurrent requests is limited to 100 per Speech resource (Base model) and to 100 per Custom endpoint (Custom model). For the Standard pricing tier, this amount can be increased. Before submitting the request, ensure you are familiar with the material in [this section](#detailed-description-quota-adjustment-and-best-practices) and aware of these [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling).
>[!NOTE] > If you use custom models, be aware that one Speech resource may be associated with many custom endpoints hosting many custom model deployments. Each custom endpoint has the default concurrent request limit (100) set at creation. If you need to adjust it, you need to adjust each custom endpoint **separately**. Also note that the concurrent request limit for the base model of a Speech resource has **no** effect on the custom endpoints associated with this resource.
Let us suppose that a Speech resource has the Concurrent Request limit set to 30
Generally, it is highly recommended to test the workload and the workload patterns before going to production.
-### Text-to-speech: increasing transcription concurrent request limit for Custom Voice
-By default the number of concurrent requests for a Custom Voice endpoint is limited to 10. For Standard pricing tier this amount can be increased. Before submitting the request, ensure you are familiar with the material in [this section](#detailed-description-quota-adjustment-and-best-practices) and aware of these [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling).
+### Text-to-speech: increasing concurrent request limit for Custom Neural and Custom Voices
+By default the number of concurrent requests for Custom Neural Voice and Custom Voice endpoints is limited to 10. For the Standard pricing tier, this amount can be increased. Before submitting the request, ensure you are familiar with the material in [this section](#detailed-description-quota-adjustment-and-best-practices) and aware of these [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling).
Increasing the Concurrent Request limit does **not** directly affect your costs. Speech service uses a "Pay only for what you use" model. The limit defines how high the Service may scale before it starts throttling your requests.
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/concepts.md
# Chat concepts - Azure Communication Services Chat SDKs can be used to add real-time text chat to your applications. This page summarizes key Chat concepts and capabilities. See the [Communication Services Chat SDK Overview](./sdk-features.md) to learn more about specific SDK languages and capabilities.
This way, the message history will contain both original and translated messages
> [Get started with chat](../../quickstarts/chat/get-started.md) The following documents may be interesting to you: -- Familiarize yourself with the [Chat SDK](sdk-features.md)
+- Familiarize yourself with the [Chat SDK](sdk-features.md)
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/sdk-features.md
# Chat SDK overview - Azure Communication Services Chat SDKs can be used to add rich, real-time chat to your applications. ## Chat SDK capabilities
communication-services Identity Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/identity-model.md
If you cache access tokens to a backing store, we recommend using encryption. An
* For an introduction to access token management, see [Create and manage access tokens](../quickstarts/access-tokens.md). * For an introduction to authentication, see [Authenticate to Azure Communication Services](./authentication.md).
-* For an introduction to data residency and privacy, see [Region availability and data residency](./privacy.md).
+* For an introduction to data residency and privacy, see [Region availability and data residency](./privacy.md).
+* To learn how to quickly create identities for testing, see the [quick-create identity quickstart](../quickstarts/identity/quick-create-identity.md).
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/sms-faq.md
In the United States, Azure Communication Services does not check for landline n
## Can I send messages to multiple recipients?
-Yes, you can make one request with multiple recipients. Follow this [quickstart](../../quickstarts/telephony-sms/send.md?pivots=programming-language-csharp) to send messages to multiple recipients.
+Yes, you can make one request with multiple recipients. Follow this [quickstart](../../quickstarts/telephony-sms/send.md?pivots=programming-language-csharp) to send messages to multiple recipients.
+
+## I received an HTTP Status 202 from the Send SMS API but the SMS didn't reach my phone. What do I do now?
+
+The 202 returned by the service means that your message has been queued for sending, not that it has been delivered. Use this [quickstart](../../quickstarts/telephony-sms/handle-sms-events.md) to subscribe to delivery report events and troubleshoot. Once the events are configured, inspect the "deliveryStatus" field of your delivery report to verify delivery success or failure.
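+
+As a quick sketch of the subscription step described above, the following Azure CLI command creates an Event Grid subscription filtered to SMS delivery report events; the resource ID and webhook endpoint are placeholders for your own Communication Services resource and handler.
+
+```console
+# Subscribe a webhook to SMS delivery report events from a Communication Services resource
+az eventgrid event-subscription create \
+  --name sms-delivery-reports \
+  --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Communication/communicationServices/<acs-resource-name>" \
+  --endpoint "https://contoso.example.com/api/sms-delivery-handler" \
+  --included-event-types Microsoft.Communication.SMSDeliveryReportReceived
+```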
communication-services Call Automation Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/call-automation-apis.md
# Call Automation overview++ Call Automation APIs enable you to access voice and video calling capabilities from **services**. You can use these APIs to create service applications that drive automated outbound reminder calls for appointments or provide proactive notifications for events like power outages or wildfires. Service applications that join a call can monitor updates such as participants joining or leaving, allowing you to implement rich reporting and logging capabilities. ![in and out-of-call apps](../media/call-automation-apps.png)
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/call-recording.md
# Calling Recording overview > [!NOTE] > Call Recording is currently only available for Communication Services resources created in the US region.
communication-services Meeting Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/meeting-interop.md
zone_pivot_groups: acs-web-ios
# Quickstart: Join your chat app to a Teams meeting - > [!IMPORTANT] > To enable/disable [Teams tenant interoperability](../../concepts/teams-interop.md), complete [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR21ouQM6BHtHiripswZoZsdURDQ5SUNQTElKR0VZU0VUU1hMOTBBMVhESS4u).
If you want to clean up and remove a Communication Services subscription, you ca
For more information, see the following articles: - Check out our [chat hero sample](../../samples/chat-hero-sample.md)-- Learn more about [how chat works](../../concepts/chat/concepts.md)
+- Learn more about [how chat works](../../concepts/chat/concepts.md)
communication-services Quick Create Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/identity/quick-create-identity.md
+
+ Title: Quickstart - Quickly create Azure Communication Services identities for testing
+
+description: Learn how to use the Identities & Access Tokens tool in the Azure portal to use with samples and for troubleshooting.
++++ Last updated : 07/19/2021++++
+# Quickstart: Quickly create Azure Communication Services access tokens for testing
+
+In the [Azure portal](https://portal.azure.com) Communication Services extension, you can generate a Communication Services identity and access token. This lets you skip creating an authentication service, which makes it easier for you to test the sample apps and simple development scenarios. This feature is intended for small-scale validation and testing and should not be used for production scenarios. For production code, refer to the [creating access tokens quickstart](../access-tokens.md).
+
+The tool showcases the behavior of the ```Identity SDK``` in a simple user experience. Tokens and identities that are created through this tool follow the same behaviors and rules as if they were created using the ```Identity SDK```. For example, access tokens expire after 24 hours.
+
+## Prerequisites
+
+- An [Azure Communication Services resource](../create-communication-resource.md)
+
+## Create the access tokens
+
+In the [Azure portal](https://portal.azure.com), navigate to the **Identities & User Access Tokens** blade within your Communication Services resource.
+
+Choose the scope of the access tokens. You can select none, one, or multiple. Click **Generate**.
+
+You'll see an identity and corresponding user access token generated. You can copy these strings and use them in the [sample apps](https://docs.microsoft.com/azure/communication-services/samples/overview) and other testing scenarios.
+
+## Next steps
++
+You may also want to:
+
+ - [Learn about authentication](../../concepts/authentication.md)
+ - [Learn about client and server architecture](../../concepts/client-and-server-architecture.md)
communication-services Service Principal From Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/identity/service-principal-from-cli.md
-# Authorize access with Azure Active Directory to your communication resource in your development environment
+# Quickstart: Authenticate using Azure Active Directory (Azure CLI)
The Azure Identity SDK provides Azure Active Directory (Azure AD) token authentication support for Azure SDK packages. The latest versions of the Azure Communication Services SDKs for .NET, Java, Python, and JavaScript integrate with the Azure Identity library to provide a simple and secure means to acquire an OAuth 2.0 token for authorization of Azure Communication Services requests. An advantage of the Azure Identity SDK is that it enables you to use the same code to authenticate across multiple services whether your application is running in the development environment or in Azure.
-The Azure Identity SDK can authenticate with many methods. In Development we'll be using a service principal tied to a registered application, with credentials stored in Environnement Variables this is suitable for testing and development.
+The Azure Identity SDK can authenticate with many methods. In development, we'll use a service principal tied to a registered application, with credentials stored in environment variables; this is suitable for testing and development.
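+
+As a minimal sketch of that setup (assuming a bash shell and a hypothetical application name), the commands below create a service principal with the Azure CLI and export the three environment variables that `DefaultAzureCredential` reads.
+
+```console
+# Create a service principal for local development (note the appId, tenant, and password in the output)
+az ad sp create-for-rbac --name communication-services-dev
+
+# Export the values so DefaultAzureCredential can pick them up
+export AZURE_CLIENT_ID="<appId from the output>"
+export AZURE_TENANT_ID="<tenant from the output>"
+export AZURE_CLIENT_SECRET="<password from the output>"
+```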
## Prerequisites
Once these variables have been set, you should be able to use the DefaultAzureCr
You may also want to: -- [Learn more about Azure Identity library](/dotnet/api/overview/azure/identity-readme)
+- [Learn more about Azure Identity library](/dotnet/api/overview/azure/identity-readme)
communication-services Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/identity/service-principal.md
zone_pivot_groups: acs-js-csharp-java-python
-# Use Azure Active Directory with Communication Services
+# Quickstart: Authenticate using Azure Active Directory
+ Get started with Azure Communication Services by using Azure Active Directory. The Communication Services Identity and SMS SDKs support Azure Active Directory (Azure AD) authentication. This quickstart shows you how to authorize access to the Identity and SMS SDKs from an Azure environment that supports Active Directory. It also describes how to test your code in a development environment by creating a service principal for your work.
This quickstart shows you how to authorize access to the Identity and SMS SDKs f
- [Creating user access tokens](../../quickstarts/access-tokens.md) - [Send an SMS message](../../quickstarts/telephony-sms/send.md) - [Learn more about SMS](../../concepts/telephony-sms/concepts.md)
+- [Quickly create an identity for testing](./quick-create-identity.md).
+
communication-services Relay Token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/relay-token.md
Title: Quickstart - Get a network relay token
+ Title: Quickstart - Access TURN relays
description: Learn how to retrieve a STUN/TURN token using Azure Communication Services
zone_pivot_groups: acs-js-csharp
-# Quickstart: Get a network relay token
+# Quickstart: Access TURN relays
[!INCLUDE [Public Preview Notice](../includes/public-preview-include.md)]
-This quickstart shows you how to retrieve a network relay token to access Azure Communication Services TURN servers
+This quickstart shows you how to retrieve a network relay token to access Azure Communication Services TURN servers.
## Prerequisites
communication-services Get Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/get-phone-number.md
Title: Quickstart - Manage Phone Numbers using Azure Communication Services
+ Title: Quickstart - Get and manage phone numbers using Azure Communication Services
description: Learn how to manage phone numbers using Azure Communication Services
zone_pivot_groups: acs-azp-java-net-python-csharp-js
-# Quickstart: Manage Phone Numbers
+
+# Quickstart: Get and manage phone numbers
[!INCLUDE [Regional Availability Notice](../../includes/regional-availability-include.md)]
communication-services Logic App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/logic-app.md
To add the **Send SMS** action as a new step in your workflow by using the Azure
:::image type="content" source="./media/logic-app/select-send-sms-action.png" alt-text="Screenshot that shows the Logic App Designer and the Azure Communication Services connector with the Send SMS action selected."::: 1. Now create a connection to your Communication Services resource.-
- 1. Provide a name for the connection.
-
- 1. Select your Azure Communication Services resource.
-
- 1. Select **Create**.
-
- :::image type="content" source="./media/logic-app/send-sms-configuration.png" alt-text="Screenshot that shows the Send SMS action configuration with sample information.":::
+ 1. Within the same subscription:
+
+ 1. Provide a name for the connection.
+
+ 1. Select your Azure Communication Services resource.
+
+ 1. Select **Create**.
+
+ :::image type="content" source="./media/logic-app/send-sms-configuration.png" alt-text="Screenshot that shows the Send SMS action configuration with sample information.":::
+
+ 1. Using the connection string from your Communication Services resource:
+
+ 1. Provide a name for the connection.
+
+ 1. Select ConnectionString Authentication from the drop-down options.
+
+ 1. Enter the connection string of your Communication Services resource.
+
+ 1. Select **Create**.
+
+ :::image type="content" source="./media/logic-app/connection-string-auth.png" alt-text="Screenshot that shows the Connection String Authentication configuration.":::
+
+ 1. Using a service principal (see [Create a service principal](../identity/service-principal-from-cli.md)):
+ 1. Provide a name for the connection.
+
+ 1. Select Service principal (Azure AD application) Authentication from the drop-down options.
+
+ 1. Enter the Tenant ID, Client ID, and Client Secret of your service principal.
+
+ 1. Enter the Communication Services Endpoint URL value of your Communication Services resource.
+
+ 1. Select **Create**.
+
+ :::image type="content" source="./media/logic-app/service-principal-auth.png" alt-text="Screenshot that shows the Service Principal Authentication configuration.":::
1. In the **Send SMS** action, provide the following information:
communication-services Call Automation Api Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/call-automation-api-sample.md
zone_pivot_groups: acs-csharp-java
-# Call Automation API Quickstart
+
+# Quickstart: Use the call automation APIs
+++ Get started with Azure Communication Services by using the Communication Services Calling server SDKs to build an automated call routing solution. ::: zone pivot="programming-language-csharp"
communication-services Call Recording Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/call-recording-sample.md
zone_pivot_groups: acs-csharp-java # Call Recording API Quickstart++ This quickstart gets you started recording voice and video calls. This quickstart assumes you've already used the [Calling client SDK](get-started-with-video-calling.md) to build the end-user calling experience. Using the **Calling Server APIs and SDKs** you can enable and manage recordings. ::: zone pivot="programming-language-csharp"
communication-services Chat Hero Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/chat-hero-sample.md
# Get started with the group chat hero sample - > [!IMPORTANT] > [This sample is available **on GitHub**.](https://github.com/Azure-Samples/communication-services-web-chat-hero)
container-registry Container Registry Helm Repos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-helm-repos.md
Title: Store Helm charts description: Learn how to store Helm charts for your Kubernetes applications using repositories in Azure Container Registry Previously updated : 04/15/2021 Last updated : 07/19/2021 # Push and pull Helm charts to an Azure container registry To quickly manage and deploy applications for Kubernetes, you can use the [open-source Helm package manager][helm]. With Helm, application packages are defined as [charts](https://helm.sh/docs/topics/charts/), which are collected and stored in a [Helm chart repository](https://helm.sh/docs/topics/chart_repository/).
-This article shows you how to host Helm charts repositories in an Azure container registry, using Helm 3 commands. In many scenarios, you would build and upload your own charts for the applications you develop. For more information on how to build your own Helm charts, see the [Chart Template Developer's Guide][develop-helm-charts]. You can also store an existing Helm chart from another Helm repo.
+This article shows you how to host Helm chart repositories in an Azure container registry, using Helm 3 commands and storing charts as [OCI artifacts](container-registry-image-formats.md#oci-artifacts). In many scenarios, you would build and upload your own charts for the applications you develop. For more information on how to build your own Helm charts, see the [Chart Template Developer's Guide][develop-helm-charts]. You can also store an existing Helm chart from another Helm repo.
## Helm 3 or Helm 2?
-To store, manage, and install Helm charts, you use a Helm client and the Helm CLI. Major releases of the Helm client include Helm 3 and Helm 2. For details on the version differences, see the [version FAQ](https://helm.sh/docs/faq/).
+To store, manage, and install Helm charts, you use commands in the Helm CLI. Major Helm releases include Helm 3 and Helm 2. For details on the version differences, see the [version FAQ](https://helm.sh/docs/faq/).
Helm 3 should be used to host Helm charts in Azure Container Registry. With Helm 3, you:
-* Can create one or more Helm repositories in an Azure container registry
-* Store Helm 3 charts in a registry as [OCI artifacts](container-registry-image-formats.md#oci-artifacts). Azure Container Registry provides GA support for [OCI artifacts](container-registry-oci-artifacts.md), including Helm charts.
-* Authenticate with your registry using the `helm registry login` command.
-* Use `helm chart` commands in the Helm CLI to push, pull, and manage Helm charts in a registry
+* Can store and manage Helm charts in repositories in an Azure container registry
+* Store Helm charts in your registry as [OCI artifacts](container-registry-image-formats.md#oci-artifacts). Azure Container Registry provides GA support for OCI artifacts, including Helm charts.
+* Authenticate with your registry using the `helm registry login` or `az acr login` command.
+* Use `helm chart` commands to push, pull, and manage Helm charts in a registry
* Use `helm install` to install charts to a Kubernetes cluster from a local repository cache.+
+### Feature support
+
+Azure Container Registry supports specific Helm chart management features depending on whether you are using Helm 3 (current) or Helm 2 (deprecated).
+
+| Feature | Helm 2 | Helm 3 |
+| - | - | - |
+| Manage charts using `az acr helm` commands | :heavy_check_mark: | |
+| Store charts as OCI artifacts | | :heavy_check_mark: |
+| Manage charts using `az acr repository` commands and the **Repositories** blade in the Azure portal | | :heavy_check_mark: |
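+
+Because Helm 3 charts are stored as OCI artifacts, they show up alongside container images in the standard registry tooling. A minimal sketch, assuming the registry and repository names used later in this article:
+
+```console
+# List repositories in the registry; a pushed chart appears as helm/hello-world
+az acr repository list --name mycontainerregistry --output table
+
+# Show the tags stored for the chart repository
+az acr repository show-tags --name mycontainerregistry --repository helm/hello-world --output table
+```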
++ > [!NOTE]
-> As of Helm 3, [az acr helm][az-acr-helm] commands for use with the Helm 2 client are being deprecated. A minimum of 3 months' notice will be provided in advance of command removal. If you've previously deployed Helm 2 charts, see [Migrating Helm v2 to v3](https://helm.sh/docs/topics/v2_v3_migration/).
+> As of Helm 3, [az acr helm][az-acr-helm] commands for use with the Helm 2 client are being deprecated. A minimum of 3 months' notice will be provided in advance of command removal.
+
+### Chart version compatibility
+
+The following Helm [chart versions](https://helm.sh/docs/topics/charts/#the-apiversion-field) can be stored in Azure Container Registry and are installable by the Helm 2 and Helm 3 clients.
+
+| Version | Helm 2 | Helm 3 |
+| - | - | - |
+| apiVersion v1 | :heavy_check_mark: | :heavy_check_mark: |
+| apiVersion v2 | | :heavy_check_mark: |
+
+### Migrate from Helm 2 to Helm 3
+
+If you've previously stored and deployed charts using Helm 2 and Azure Container Registry, we recommend migrating to Helm 3. See:
+
+* [Migrating Helm 2 to 3](https://helm.sh/docs/topics/v2_v3_migration/) in the Helm documentation.
+* [Migrate your registry to store Helm OCI artifacts](#migrate-your-registry-to-store-helm-oci-artifacts), later in this article
## Prerequisites
Use the `helm version` command to verify that you have installed Helm 3:
helm version ```
-Set the following environment variable to enable OCI support in the Helm 3 client. Currently, this support is experimental.
+Set the following environment variable to enable OCI support in the Helm 3 client. Currently, this support is experimental and subject to change.
```console
export HELM_EXPERIMENTAL_OCI=1
```
For more about creating and running this example, see [Getting Started](https://
Change directory to the `hello-world` subdirectory. Then, run `helm chart save` to save a copy of the chart locally and also create an alias with the fully qualified name of the registry (all lowercase) and the target repository and tag.
-In the following example, the registry name is *mycontainerregistry*, the target repo is *hello-world*, and the target chart tag is *v1*, but substitute values for your environment:
+In the following example, the registry name is *mycontainerregistry*, the target repo is *helm/hello-world*, and the target chart tag is *0.1.0*. To successfully pull dependencies, the target chart image name and tag must match the name and version in `Chart.yaml`.
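
Because of that requirement, a minimal `Chart.yaml` consistent with the example values above might look like the following sketch (the `description` text is an assumption for illustration):

```yaml
apiVersion: v2
name: hello-world
description: A hypothetical Helm chart used for illustration
version: 0.1.0
```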
```console
cd ..
-helm chart save . hello-world:v1
-helm chart save . mycontainerregistry.azurecr.io/helm/hello-world:v1
+helm chart save . hello-world:0.1.0
+helm chart save . mycontainerregistry.azurecr.io/helm/hello-world:0.1.0
```

Run `helm chart list` to confirm you saved the charts in the local registry cache. Output is similar to:

```console
REF NAME VERSION DIGEST SIZE CREATED
-hello-world:v1 hello-world 0.1.0 5899db0 3.2 KiB 2 minutes
-mycontainerregistry.azurecr.io/helm/hello-world:v1 hello-world 0.1.0 5899db0 3.2 KiB 2 minutes
+hello-world:0.1.0 hello-world 0.1.0 5899db0 3.2 KiB 2 minutes
+mycontainerregistry.azurecr.io/helm/hello-world:0.1.0 hello-world 0.1.0 5899db0 3.2 KiB 2 minutes
```

## Authenticate with the registry
-Run the `helm registry login` command in the Helm 3 CLI to [authenticate with the registry](container-registry-authentication.md) using credentials appropriate for your scenario.
+Run `helm registry login` to authenticate with the registry. You may pass [registry credentials](container-registry-authentication.md) appropriate for your scenario, such as service principal credentials or a repository-scoped token.
For example, create an Azure Active Directory [service principal with pull and push permissions](container-registry-auth-service-principal.md#create-a-service-principal) (AcrPush role) to the registry. Then supply the service principal credentials to `helm registry login`. The following example supplies the password using an environment variable:
```console
echo $spPassword | helm registry login mycontainerregistry.azurecr.io \
  --password-stdin
```
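
For reference, a hypothetical sketch of creating such a service principal and capturing the values used above might look like the following; the service principal name `acr-helm-sp` and the variable names are assumptions to adapt for your environment:

```azurecli
ACR_NAME=mycontainerregistry

# Hypothetical: create a service principal with the AcrPush role, scoped to the registry
spPassword=$(az ad sp create-for-rbac --name acr-helm-sp \
  --scopes $(az acr show --name $ACR_NAME --query id --output tsv) \
  --role acrpush \
  --query password --output tsv)

# Hypothetical: look up the service principal's appId to pass as --username to 'helm registry login'
spAppId=$(az ad sp list --display-name acr-helm-sp --query "[].appId" --output tsv)
```

You would then pipe `$spPassword` to `--password-stdin` and pass `$spAppId` with `--username`, as in the snippet above.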
+> [!TIP]
+> You can also log in to the registry with your [individual Azure AD identity](container-registry-authentication.md?tabs=azure-cli#individual-login-with-azure-ad) to push and pull Helm charts, as sketched below.
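
As a sketch of that approach (treat the exact steps as an assumption to verify for your environment), you can obtain an Azure AD access token for the registry and pass it to `helm registry login`; the all-zeros GUID is the user name Azure Container Registry expects with a token:

```console
# Hypothetical sketch: log in to the registry with an Azure AD access token
ACCESS_TOKEN=$(az acr login --name mycontainerregistry --expose-token --output tsv --query accessToken)
echo $ACCESS_TOKEN | helm registry login mycontainerregistry.azurecr.io \
  --username 00000000-0000-0000-0000-000000000000 \
  --password-stdin
```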
+
## Push chart to registry

Run the `helm chart push` command in the Helm 3 CLI to push the chart to the fully qualified target repository:

```console
-helm chart push mycontainerregistry.azurecr.io/helm/hello-world:v1
+helm chart push mycontainerregistry.azurecr.io/helm/hello-world:0.1.0
```

After a successful push, output is similar to:

```output
The push refers to repository [mycontainerregistry.azurecr.io/helm/hello-world]
-ref: mycontainerregistry.azurecr.io/helm/hello-world:v1
+ref: mycontainerregistry.azurecr.io/helm/hello-world:0.1.0
digest: 5899db028dcf96aeaabdadfa5899db025899db025899db025899db025899db02
size: 3.2 KiB
name: hello-world
```
Output, abbreviated in this example, shows a `configMediaType` of `application/v
"lastUpdateTime": "2020-03-20T18:11:37.7167893Z", "mediaType": "application/vnd.oci.image.manifest.v1+json", "tags": [
- "v1"
+ "0.1.0"
]
```

## Pull chart to local cache
-To install a Helm chart to Kubernetes, the chart must be in the local cache. In this example, first run `helm chart remove` to remove the existing local chart named `mycontainerregistry.azurecr.io/helm/hello-world:v1`:
+To install a Helm chart to Kubernetes, the chart must be in the local cache. In this example, first run `helm chart remove` to remove the existing local chart named `mycontainerregistry.azurecr.io/helm/hello-world:0.1.0`:
```console
-helm chart remove mycontainerregistry.azurecr.io/helm/hello-world:v1
+helm chart remove mycontainerregistry.azurecr.io/helm/hello-world:0.1.0
```

Run `helm chart pull` to download the chart from the Azure container registry to your local cache:

```console
-helm chart pull mycontainerregistry.azurecr.io/helm/hello-world:v1
+helm chart pull mycontainerregistry.azurecr.io/helm/hello-world:0.1.0
```

## Export Helm chart
helm chart pull mycontainerregistry.azurecr.io/helm/hello-world:v1
To work further with the chart, export it to a local directory using `helm chart export`. For example, export the chart you pulled to the `install` directory:

```console
-helm chart export mycontainerregistry.azurecr.io/helm/hello-world:v1 \
+helm chart export mycontainerregistry.azurecr.io/helm/hello-world:0.1.0 \
  --destination ./install
```
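
From there, you can inspect or install the exported chart with standard Helm commands. A minimal sketch, assuming a configured `kubectl` context and the release name `myhelmtest` used later in this article:

```console
# Inspect the exported chart files
cat install/hello-world/Chart.yaml

# Install the chart from the exported directory to the current Kubernetes cluster
helm install myhelmtest ./install/hello-world
```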
helm uninstall myhelmtest
To delete a chart from the container registry, use the [az acr repository delete][az-acr-repository-delete] command. Run the following command and confirm the operation when prompted:

```azurecli
-az acr repository delete --name mycontainerregistry --image helm/hello-world:v1
+az acr repository delete --name mycontainerregistry --image helm/hello-world:0.1.0
+```
+
+## Migrate your registry to store Helm OCI artifacts
+
+If you previously set up your Azure container registry as a chart repository using Helm 2 and the `az acr helm` commands, we recommend that you [upgrade][helm-install] to the Helm 3 client. Then, follow these steps to store the charts as OCI artifacts in your registry.
+
+> [!IMPORTANT]
+> * After you complete migration from a Helm 2-style (index.yaml-based) chart repository to OCI artifact repositories, use the Helm CLI and `az acr repository` commands to manage the charts. See previous sections in this article.
+> * The Helm OCI artifact repositories are not discoverable using Helm commands such as `helm search` and `helm repo list`. For more information about Helm commands used to store charts as OCI artifacts, see the [Helm documentation](https://helm.sh/docs/topics/registries/).
+
+### Enable OCI support
+
+Ensure that you are using the Helm 3 client:
+
+```console
+helm version
+```
+
+Enable OCI support in the Helm 3 client. Currently, this support is experimental and subject to change.
+
+```console
+export HELM_EXPERIMENTAL_OCI=1
+```
+
+### List current charts
+
+List the charts currently stored in the registry, here named *myregistry*:
+
+```console
+helm search repo myregistry
+```
+
+Output shows the charts and chart versions:
+
+```
+NAME CHART VERSION APP VERSION DESCRIPTION
+myregistry/ingress-nginx 3.20.1 0.43.0 Ingress controller for Kubernetes...
+myregistry/wordpress 9.0.3 5.3.2 Web publishing platform for building...
+[...]
+```
+
+### Save charts as OCI artifacts
+
+For each chart in the repo, pull the chart locally and save it as an OCI artifact. For example:
+
+```console
+helm pull myregistry/ingress-nginx --untar
+cd ingress-nginx
+helm chart save . myregistry.azurecr.io/ingress-nginx:3.20.1
+```
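
If the repository holds many charts or versions, you might script the pull-and-save steps. The following is a hypothetical sketch only; the chart names and the way the version is read from `Chart.yaml` are assumptions to adapt:

```console
for chart in ingress-nginx wordpress; do
  # Pull and unpack the chart from the Helm 2-style repository
  helm pull myregistry/$chart --untar

  # Read the chart version and save the chart as an OCI artifact with a matching tag
  version=$(grep '^version:' $chart/Chart.yaml | awk '{print $2}')
  helm chart save $chart myregistry.azurecr.io/$chart:$version
done
```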
+
+### Push charts to registry
+
+Log in to the registry:
+
+```azurecli
+az acr login --name myregistry
+```
+
+Push each chart to the registry:
+
+```console
+helm chart push myregistry.azurecr.io/ingress-nginx:3.20.1
+```
+
+After pushing a chart, confirm it is stored in the registry:
+
+```azurecli
+az acr repository list --name myregistry
+```
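
You can also inspect the tags and manifests for an individual chart repository; for example, for the chart pushed above:

```azurecli
az acr repository show-tags --name myregistry --repository ingress-nginx

az acr repository show-manifests --name myregistry --repository ingress-nginx --detail
```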
+
+After pushing all of the charts, optionally remove the Helm 2-style chart repository reference from your local Helm client:
+
+```console
+helm repo remove myregistry
```

## Next steps
az acr repository delete --name mycontainerregistry --image helm/hello-world:v1
[helm]: https://helm.sh/
[helm-install]: https://helm.sh/docs/intro/install/
[develop-helm-charts]: https://helm.sh/docs/chart_template_guide/
-[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
<!-- LINKS - internal -->
[azure-cli-install]: /cli/azure/install-azure-cli
[aks-quickstart]: ../aks/kubernetes-walkthrough.md
[acr-bestpractices]: container-registry-best-practices.md
-[az-configure]: /cli/azure/reference-index#az_configure
[az-acr-login]: /cli/azure/acr#az_acr_login
[az-acr-helm]: /cli/azure/acr/helm
[az-acr-repository]: /cli/azure/acr/repository
[az-acr-repository-show]: /cli/azure/acr/repository#az_acr_repository_show
[az-acr-repository-delete]: /cli/azure/acr/repository#az_acr_repository_delete
-[az-acr-repository-show-tags]: /cli/azure/acr/repository#az_acr_repository_show_tags
[az-acr-repository-show-manifests]: /cli/azure/acr/repository#az_acr_repository_show_manifests
[acr-tasks]: container-registry-tasks-overview.md
container-registry Container Registry Image Formats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-image-formats.md
Azure Container Registry supports images that meet the [Open Container Initiativ
## OCI artifacts
-Azure Container Registry supports the [OCI Distribution Specification](https://github.com/opencontainers/distribution-spec), a vendor-neutral, cloud-agnostic spec to store, share, secure, and deploy container images and other content types (artifacts). The specification allows a registry to store a wide range of artifacts in addition to container images. You use tooling appropriate to the artifact to push and pull artifacts. For an example, see [Push and pull an OCI artifact using an Azure container registry](container-registry-oci-artifacts.md).
+Azure Container Registry supports the [OCI Distribution Specification](https://github.com/opencontainers/distribution-spec), a vendor-neutral, cloud-agnostic spec to store, share, secure, and deploy container images and other content types (artifacts). The specification allows a registry to store a wide range of artifacts in addition to container images. You use tooling appropriate to the artifact to push and pull artifacts. For examples, see:
+
+* [Push and pull an OCI artifact using an Azure container registry](container-registry-oci-artifacts.md)
+* [Push and pull Helm charts to an Azure container registry](container-registry-helm-repos.md)
To learn more about OCI artifacts, see the [OCI Registry as Storage (ORAS)](https://github.com/deislabs/oras) repo and the [OCI Artifacts](https://github.com/opencontainers/artifacts) repo on GitHub.
container-registry Container Registry Tutorial Quick Task https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-tutorial-quick-task.md
Title: Tutorial - Quick container image build description: In this tutorial, you learn how to build a Docker container image in Azure with Azure Container Registry Tasks (ACR Tasks), then deploy it to Azure Container Instances. Previously updated : 11/24/2020 Last updated : 07/20/2021 # Customer intent: As a developer or devops engineer, I want to quickly build container images in Azure, without having to install dependencies like Docker Engine, so that I can simplify my inner-loop development pipeline.
az keyvault create --resource-group $RES_GROUP --name $AKV_NAME
You now need to create a service principal and store its credentials in your key vault.
-Use the [az ad sp create-for-rbac][az-ad-sp-create-for-rbac] command to create the service principal, and [az keyvault secret set][az-keyvault-secret-set] to store the service principal's **password** in the vault:
+Use the [az ad sp create-for-rbac][az-ad-sp-create-for-rbac] command to create the service principal, and [az keyvault secret set][az-keyvault-secret-set] to store the service principal's **password** in the vault. Use Azure CLI version **2.25.0** or later for these commands:
```azurecli
# Create service principal, store its password in AKV (the registry *password*)
Next, store the service principal's *appId* in the vault, which is the **usernam
az keyvault secret set \
  --vault-name $AKV_NAME \
  --name $ACR_NAME-pull-usr \
- --value $(az ad sp show --id http://$ACR_NAME-pull --query appId --output tsv)
+ --value $(az ad sp list --display-name $ACR_NAME-pull --query [].appId --output tsv)
```

You've created an Azure Key Vault and stored two secrets in it:
cosmos-db Change Feed Processor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/change-feed-processor.md
ms.devlang: dotnet Previously updated : 06/09/2021 Last updated : 07/20/2021
It's possible to initialize the change feed processor to read changes starting a
The change feed processor will be initialized for that specific date and time and start reading the changes that happened after.
+> [!NOTE]
+> Starting the change feed processor at a specific date and time is not supported in multi-region write accounts.
+
### Reading from the beginning

In other scenarios like data migrations or analyzing the entire history of a container, we need to read the change feed from **the beginning of that container's lifetime**. To do that, we can use `WithStartTime` on the builder extension, but passing `DateTime.MinValue.ToUniversalTime()`, which would generate the UTC representation of the minimum `DateTime` value, like so:
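
The following is a minimal sketch only; the database, container, processor, and handler names (such as `HandleChangesAsync`) are assumptions, not part of the article:

```csharp
Container leaseContainer = cosmosClient.GetContainer("databaseName", "leases");

ChangeFeedProcessor changeFeedProcessor = cosmosClient
    .GetContainer("databaseName", "monitoredContainer")
    .GetChangeFeedProcessorBuilder<dynamic>("changeFeedFromBeginning", HandleChangesAsync)
    .WithInstanceName("consoleHost")
    .WithLeaseContainer(leaseContainer)
    // Start from the earliest possible point, effectively the beginning of the container's lifetime
    .WithStartTime(DateTime.MinValue.ToUniversalTime())
    .Build();
```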
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-setup-rbac.md
description: Learn how to configure role-based access control with Azure Active
Previously updated : 07/02/2021 Last updated : 07/21/2021
The examples below use a service principal with a `ClientSecretCredential` insta
### In .NET
-The Azure Cosmos DB RBAC is currently supported in the `preview` version of the [.NET SDK V3](sql-api-sdk-dotnet-standard.md).
+The Azure Cosmos DB RBAC is currently supported in the [.NET SDK V3](sql-api-sdk-dotnet-standard.md).
```csharp
TokenCredential servicePrincipal = new ClientSecretCredential(
cost-management-billing Review Individual Bill https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/understand/review-individual-bill.md
The **Usage Charges** section of your invoice shows the total value (cost) for e
In your CSV usage file, filter by *MeterName* for the corresponding Resource shown on your invoice. Then, sum the *Cost* value for items in the column. Here's an example that focuses on the meter name (P10 disks) that corresponds to the same line item on the invoice.
+To reconcile your reservation purchase charges, filter your CSV usage file by *ChargeType* as *Purchase*. The filtered rows show all the reservation purchase charges for the month. You can compare these charges by matching *MeterName* and *MeterSubCategory* in the usage file to Resource and Type on your invoice, respectively.
+
![Usage file summed value for MeterName](./media/review-individual-bill/usage-file-usage-charge-resource.png)

The summed *Cost* value should match precisely to the *usage charges* cost for the individual resource charged on your invoice.
data-factory Create Shared Self Hosted Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-shared-self-hosted-integration-runtime-powershell.md
Remove-AzDataFactoryV2IntegrationRuntime `
    -ResourceGroupName $ResourceGroupName `
    -DataFactoryName $SharedDataFactoryName `
    -Name $SharedIntegrationRuntimeName `
- -Links `
    -LinkedDataFactoryName $LinkedDataFactoryName
```
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-sink.md
Previously updated : 04/06/2021 Last updated : 07/20/2021 # Sink transformation in mapping data flow
You can group sinks together by applying the same order number for a series of s
## Error row handling
-When writing to databases, certain rows of data may fail due to constraints set by the destination. By default, a data flow run will fail on the first error it gets. In certain connectors, you can choose to **Continue on error** that allows your data flow to complete even if individual rows have errors. Currently, this capability is only available in Azure SQL Database. For more information, see [error row handling in Azure SQL DB](connector-azure-sql-database.md#error-row-handling).
+When writing to databases, certain rows of data may fail due to constraints set by the destination. By default, a data flow run will fail on the first error it gets. In certain connectors, you can choose to **Continue on error**, which allows your data flow to complete even if individual rows have errors. Currently, this capability is only available in Azure SQL Database and Azure Synapse Analytics. For more information, see [error row handling in Azure SQL DB](connector-azure-sql-database.md#error-row-handling).
Below is a video tutorial on how to use database error row handling automatically in your sink transformation.
data-factory Enable Aad Authentication Azure Ssis Ir https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/enable-aad-authentication-azure-ssis-ir.md
Azure SQL Database supports creating a database with an Azure AD user. First, yo
You can use an existing Azure AD group or create a new one using Azure AD PowerShell.
-1. Install the [Azure AD PowerShell](/powershell/azure/active-directory/install-adv2.md) module.
+1. Install the [Azure AD PowerShell](/powershell/azure/active-directory/install-adv2) module.
2. Sign in using `Connect-AzureAD`, run the following cmdlet to create a group, and save it in a variable:
You can [Configure and manage Azure AD authentication for Azure SQL Database](.
### Create a contained user in Azure SQL Database representing the Azure AD group
-For this next step, you need [SSMS](/sql/ssms/download-sql-server-management-studio-ssms.md).
+For this next step, you need [SSMS](/sql/ssms/download-sql-server-management-studio-ssms).
1. Start SSMS.
Follow the steps in [Provision an Azure AD administrator for Azure SQL Managed I
### Add the specified system/user-assigned managed identity for your ADF as a user in Azure SQL Managed Instance
-For this next step, you need [SSMS](/sql/ssms/download-sql-server-management-studio-ssms.md).
+For this next step, you need [SSMS](/sql/ssms/download-sql-server-management-studio-ssms).
1. Start SSMS.
To provision your Azure-SSIS IR with PowerShell, do the following things:
When you run SSIS packages on Azure-SSIS IR, you can use Azure AD authentication with the specified system/user-assigned managed identity for your ADF to connect to various Azure resources. Currently we support Azure AD authentication with the specified system/user-assigned managed identity for your ADF on the following connection managers.

-- [OLEDB Connection Manager](/sql/integration-services/connection-manager/ole-db-connection-manager.md#managed-identities-for-azure-resources-authentication)
+- [OLEDB Connection Manager](/sql/integration-services/connection-manager/ole-db-connection-manager#managed-identities-for-azure-resources-authentication)
-- [ADO.NET Connection Manager](/sql/integration-services/connection-manager/ado-net-connection-manager.md#managed-identities-for-azure-resources-authentication)
+- [ADO.NET Connection Manager](/sql/integration-services/connection-manager/ado-net-connection-manager#managed-identities-for-azure-resources-authentication)
-- [Azure Storage Connection Manager](/sql/integration-services/connection-manager/azure-storage-connection-manager.md#managed-identities-for-azure-resources-authentication)
+- [Azure Storage Connection Manager](/sql/integration-services/connection-manager/azure-storage-connection-manager#managed-identities-for-azure-resources-authentication)
data-factory Self Hosted Integration Runtime Proxy Ssis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/self-hosted-integration-runtime-proxy-ssis.md
Finally, you download and install the latest version of self-hosted IR, as well
### Enable Windows authentication for on-premises tasks
-If on-premises staging tasks and Execute SQL/Process Tasks on your self-hosted IR require Windows authentication, you must also [configure Windows authentication feature on your Azure-SSIS IR](/sql/integration-services/lift-shift/ssis-azure-connect-with-windows-auth.md).
+If on-premises staging tasks and Execute SQL/Process Tasks on your self-hosted IR require Windows authentication, you must also [configure Windows authentication feature on your Azure-SSIS IR](/sql/integration-services/lift-shift/ssis-azure-connect-with-windows-auth).
Your on-premises staging tasks and Execute SQL/Process Tasks will be invoked with the self-hosted IR service account (*NT SERVICE\DIAHostService*, by default), and your data stores will be accessed with the Windows authentication account. Both accounts require certain security policies to be assigned to them. On the self-hosted IR machine, go to **Local Security Policy** > **Local Policies** > **User Rights Assignment**, and then do the following:
If you haven't already done so, create an Azure Blob Storage linked service in t
- For **Authentication method**, select **Account key**, **SAS URI**, **Service Principal**, **Managed Identity**, or **User-Assigned Managed Identity**.

>[!TIP]
->If you select the **Service Principal** method, grant your service principal at least a *Storage Blob Data Contributor* role. For more information, see [Azure Blob Storage connector](connector-azure-blob-storage.md#linked-service-properties). If you select the **Managed Identity**/**User-Assigned Managed Identity** method, grant the specified system/user-assigned managed identity for your ADF a proper role to access Azure Blob Storage. For more information, see [Access Azure Blob Storage using Azure Active Directory (Azure AD) authentication with the specified system/user-assigned managed identity for your ADF](/sql/integration-services/connection-manager/azure-storage-connection-manager.md#managed-identities-for-azure-resources-authentication).
+>If you select the **Service Principal** method, grant your service principal at least a *Storage Blob Data Contributor* role. For more information, see [Azure Blob Storage connector](connector-azure-blob-storage.md#linked-service-properties). If you select the **Managed Identity**/**User-Assigned Managed Identity** method, grant the specified system/user-assigned managed identity for your ADF a proper role to access Azure Blob Storage. For more information, see [Access Azure Blob Storage using Azure Active Directory (Azure AD) authentication with the specified system/user-assigned managed identity for your ADF](/sql/integration-services/connection-manager/azure-storage-connection-manager#managed-identities-for-azure-resources-authentication).
![Prepare the Azure Blob storage-linked service for staging](media/self-hosted-integration-runtime-proxy-ssis/shir-azure-blob-storage-linked-service.png)
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
# Forward alert information
-You can send alert information to partners who are integrating with Azure Defender for IoT, to syslog servers, to email addresses, and more. Working with forwarding rules lets you quickly deliver alert information to security stakeholders.
+You can send alert information to partners who are integrating with Azure Defender for IoT, to syslog servers, to email addresses, and more. Working with forwarding rules lets you quickly deliver alert information to security stakeholders.
+
+Define criteria by which to trigger a forwarding rule. Working with forwarding rule criteria helps pinpoint and manage the volume of information sent from the sensor to external systems.
Syslog and other default forwarding actions are delivered with your system. More forwarding actions might become available when you integrate with partner vendors, such as Microsoft Azure Sentinel, ServiceNow, or Splunk.

:::image type="content" source="media/how-to-work-with-alerts-sensor/alert-information-screen.png" alt-text="Alert information.":::
-Defender for IoT administrators have permission to use forwarding rules.
+Defender for IoT administrators have permission to use forwarding rules.
## About forwarded alert information
Relevant information is sent to partner systems when forwarding rules are create
:::image type="content" source="media/how-to-work-with-alerts-sensor/create-forwarding-rule-screen.png" alt-text="Create a Forwarding Rule icon.":::
-1. Enter a name for the forwarding rule.
+1. Enter a name for the forwarding rule.
1. Select the severity level.
-1. Select any protocols to apply.
-
-1. Select which engines the rule should apply to.
-
-1. Select an action to apply, and fill in any parameters needed for the selected action.
-
-1. Add another action if desired.
-
-1. Select **Submit**.
-
-**To create a forwarding rule on the management console**:
-
-1. Sign in to the sensor.
-
-1. Select **Forwarding** on the side menu.
-
-1. Select the :::image type="icon" source="../media/how-to-work-with-alerts-sensor/plus-add-icon.png" border="false"::: icon.
+ This is the minimum severity level to forward. For example, if you select **Minor**, minor alerts and any alert above this severity level will be forwarded. Levels are predefined.
-1. In the Create Forwarding Rule window, enter a name for the rule
-
- :::image type="content" source="../media/how-to-work-with-alerts-sensor/management-console-create-forwarding-rule.png" alt-text="Enter a meaningful name in the name field of the Create Forwarding Rule window.":::
-
-1. Select the severity level from the drop-down menu.
-
1. Select any protocols to apply.
-1. Select which engines the rule should apply to.
-
-1. Select the checkbox if you want the forwarding to rule to report system notifications.
-
-1. Select the checkbox if you want the forwarding to rule to report alert notifications.
+ Only trigger the forwarding rule if the traffic detected was running over specific protocols. Select the required protocols from the drop-down list or choose them all.
-1. Select **Add** to add an action to apply. Fill in any parameters needed for the selected action.
-
-1. Add another action if desired.
-
-1. Select **Save**.
-
-### Forwarding rule criteria
-
-Define criteria by which to trigger a forwarding rule. Working with forwarding rule criteria helps pinpoint and manage the volume of information sent from the sensor to external systems. The following options are available:
-
-**Protocols**: Only trigger the forwarding rule if the traffic detected was running over specific protocols. Select the required protocols from the drop-down list or choose them all.
+1. Select which engines the rule should apply to.
-**Engines**: Select the required engines or choose them all. Alerts from selected engines will be sent.
+ Select the required engines, or choose them all. Alerts from selected engines will be sent.
-**Severity levels**: This is the minimum incident to forward, in terms of severity level. For example, if you select **Minor**, minor alerts and any alert above this severity level will be forwarded. Levels are predefined.
+1. Select an action to apply, and fill in any parameters needed for the selected action.
-### Forwarding rule actions
+ Forwarding rule actions instruct the sensor to forward alert information to partner vendors or servers. You can create multiple actions for each forwarding rule.
-Forwarding rule actions instruct the sensor to forward alert information to partner vendors or servers. You can create multiple actions for each forwarding rule.
+1. Add another action if desired.
-In addition to the forwarding actions delivered with your system, other actions might become available when you integrate with partner vendors.
+1. Select **Submit**.
### Email address action
After you enter all the information, select **Submit**.
### Webhook server action
-Send alert information to a webhook server. Working with webhook servers lets you set up integrations that subscribe to alert events with Defender for IoT. When an alert event is triggered, the management console sends a HTTP POST payload to the webhook's configured URL. Webhooks can be used to update an external SIEM system, SOAR systems, Incident management systems, etc.
+Send alert information to a webhook server. Working with webhook servers lets you set up integrations that subscribe to alert events with Defender for IoT. When an alert event is triggered, the management console sends an HTTP POST payload to the webhook's configured URL. Webhooks can be used to update an external SIEM system, SOAR systems, Incident management systems, etc.
**To define a webhook action**:
Test the connection between the sensor and the partner server that's defined in
## Edit and delete forwarding rules
-To edit a forwarding rule:
+**To edit a forwarding rule**:
- On the **Forwarding Rule** screen, select **Edit** under the **More** drop-down menu. Make the desired changes and select **Submit**.
-To remove a forwarding rule:
+**To remove a forwarding rule**:
- On the **Forwarding Rule** screen, select **Remove** under the **More** drop-down menu. In the **Warning** dialog box, select **OK**.
defender-for-iot How To Work With Alerts On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-work-with-alerts-on-premises-management-console.md
Title: Work with alerts on the on-premises management console description: Use the on-premises management console to get an enterprise view of recent threats in your network and better understand how sensor users are handling them. Previously updated : 12/06/2020 Last updated : 07/13/2021
The alert presents the following information:
- A link to the alert in the sensor that detected it. -- An alert UUID. The UUID consists of the alert ID that's associated with the alert event detected on the sensor, separated by a hyphen and followed by a unique system ID number.
+- An alert UUID. The UUID consists of the alert ID that's associated with the alert event detected on the sensor, separated by a hyphen, and followed by a unique system ID number.
**On-premises management console Alert UUID**
Working with UUIDs ensures that each alert displayed in the on-premises manageme
> [!NOTE] > By default, UUIDs are displayed in the following partner systems when forwarding rules are defined: ArcSight, syslog servers, QRadar, Sentinel, and NetWitness. No setup is required.
-To view alert information:
+**To view alert information**:
- From the alert list, select an alert. :::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/alert-information.png" alt-text="Screenshot of alert information.":::
-To view the alert in the sensor:
+**To view the alert in the sensor**:
- Select **OPEN SENSOR** from the alert.
-To view the devices in a zone map:
+**To view the devices in a zone map**:
- To view the device map with a focus on the alerted device and all the devices connected to it, select **SHOW DEVICES**.
Several options are available for managing alert events from the on-premises man
- Mute and unmute alert events.
-To learn more about learning, acknowledging and muting alert events, see the sensor [Manage alert events](how-to-manage-the-alert-event.md) article.
+To learn more about learning, acknowledging, and muting alert events, see the sensor [Manage alert events](how-to-manage-the-alert-event.md) article.
## Export alert information
-Export alert information to a .csv file. You can export information of all alerts detected or export information based on the filtered view.The following information is exported:
+Export alert information to a .csv file. You can export information of all alerts detected or export information based on the filtered view. The following information is exported:
- Source Address - Destination Address
Export alert information to a .csv file. You can export information of all alert
- Acknowledged status - PCAP availability
-To export:
+**To export alerts**:
-1. Select Alerts from the side menu.
-1. Select Export.
-1. Select Export Extended Alerts to export alert information in separate rows for each alert that covers multiple devices. When Export Extended Alerts is selected, the .csv file will create a duplicate row of the alert with the unique items in each row. Using this option makes it easier to investigate exported alert events.
+1. Select **Alerts** from the side menu.
+
+1. Select **Export**.
+
+1. Select **Export Extended Alerts** to export alert information in separate rows for each alert that covers multiple devices. When Export Extended Alerts is selected, the .csv file will create a duplicate row of the alert with the unique items in each row. Using this option makes it easier to investigate exported alert events.
+
+## Create forwarding rules
+
+**To create a forwarding rule on the management console**:
+
+1. Sign in to the sensor.
+
+1. Select **Forwarding** on the side menu.
+
+1. Select the :::image type="icon" source="media/how-to-work-with-alerts-on-premises-management-console/plus-add-icon.png" border="false"::: icon.
+
+1. In the **Create Forwarding Rule** window, enter a name for the rule.
+
+ :::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/management-console-create-forwarding-rule.png" alt-text="Enter a meaningful name in the field of the Create Forwarding Rule window.":::
+
+ Define criteria by which to trigger a forwarding rule. Working with forwarding rule criteria helps pinpoint and manage the volume of information sent from the sensor to external systems.
+
+1. Select the severity level from the drop-down menu.
+
+ This is the minimum severity level to forward. For example, if you select **Minor**, minor alerts and any alert above this severity level will be forwarded. Levels are predefined.
+
+1. Select any protocols to apply.
+
+ Only trigger the forwarding rule if the traffic detected was running over specific protocols. Select the required protocols from the drop-down list or choose them all.
+
+1. Select which engines the rule should apply to.
+
+
+ Select the required engines, or choose them all. Alerts from selected engines will be sent.
+
+1. Select the checkbox if you want the forwarding rule to report system notifications.
+
+1. Select the checkbox if you want the forwarding rule to report alert notifications.
+
+1. Select **Add** to add an action to apply. Fill in any parameters needed for the selected action.
+
+ Forwarding rule actions instruct the sensor to forward alert information to partner vendors or servers. You can create multiple actions for each forwarding rule.
+
+1. Add another action if desired.
+
+1. Select **Save**.
+
+You can learn more [About forwarded alert information](how-to-forward-alert-information-to-partners.md#about-forwarded-alert-information). You can also [Test forwarding rules](how-to-forward-alert-information-to-partners.md#test-forwarding-rules) or [Edit and delete forwarding rules](how-to-forward-alert-information-to-partners.md#edit-and-delete-forwarding-rules), and learn more about [Forwarding rules and alert exclusion rules](how-to-forward-alert-information-to-partners.md#forwarding-rules-and-alert-exclusion-rules).
## Create alert exclusion rules
In addition to working with exclusion rules, you can suppress alerts by muting t
### Create exclusion rules
-To create exclusion rules:
+**To create exclusion rules**:
1. From the left pane of the on-premises management console, select **Alert Exclusion**. Define a new exclusion rule by selecting the **Add** icon :::image type="icon" source="media/how-to-work-with-alerts-on-premises-management-console/add-icon.png" border="false"::: in the upper-right corner of the window that opens. The **Create Exclusion Rule** dialog box opens. :::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/create-alert-exclusion-view.png" alt-text="Create an alert exclusion by filling in the information here.":::
-2. Enter a rule name in the **Name** field. The name can't contain quotes (`"`).
+1. Enter a rule name in the **Name** field. The name can't contain quotes (`"`).
-3. In the **By Time Zone/Period** section, enter a time period within a specific time zone. Use this feature when an exclusion rule is created for a specific time period in one time zone, but should be implemented at the same time in other time zones. For example, you might need to apply an exclusion rule between 8:00 AM and 10:00 AM in three different time zones. In this case, create three separate exclusion rules that use the same time period and the relevant time zone.
+1. In the **By Time Zone/Period** section, enter a time period within a specific time zone. Use this feature when an exclusion rule is created for a specific time period in one time zone, but should be implemented at the same time in other time zones. For example, you might need to apply an exclusion rule between 8:00 AM and 10:00 AM in three different time zones. In this case, create three separate exclusion rules that use the same time period and the relevant time zone.
-4. Select **ADD**. During the exclusion period, no alerts are created on the connected sensors.
+1. Select **ADD**. During the exclusion period, no alerts are created on the connected sensors.
:::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/by-the-time-period.png" alt-text="Screenshot of the By Time Period view.":::
-5. In the **By Device Address** section, define the:
+1. In the **By Device Address** section, define the:
- Device IP address, MAC address, or subnet address that you want to exclude.
- Traffic direction for the excluded devices, source, and destination.
-6. Select **ADD**.
+1. Select **ADD**.
-7. In the **By Alert Title** section, start typing the alert title. From the drop-down list, select the alert title or titles to be excluded.
+1. In the **By Alert Title** section, start typing the alert title. From the drop-down list, select the alert title or titles to be excluded.
:::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/alert-title.png" alt-text="Screenshot of the By Alert Title view.":::
-8. Select **ADD**.
+1. Select **ADD**.
-9. In the **By Sensor Name** section, start typing the sensor name. From the drop-down list, select the sensor or sensors that you want to exclude.
+1. In the **By Sensor Name** section, start typing the sensor name. From the drop-down list, select the sensor or sensors that you want to exclude.
-10. Select **ADD**.
+1. Select **ADD**.
-11. Select **SAVE**. The new rule appears in the list of rules.
+1. Select **SAVE**. The new rule appears in the list of rules.
You can suppress alerts by either muting them or creating alert exclusion rules. This section describes potential use cases for both features.
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-apis-sdks.md
To use the SDK, include the NuGet package **Azure.DigitalTwins.Core** with your
* [Azure.DigitalTwins.Core](https://www.nuget.org/packages/Azure.DigitalTwins.Core): The package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
* [Azure.Identity](https://www.nuget.org/packages/Azure.Identity): The library that provides tools to help with authentication against Azure.
-For a detailed walk-through of using the APIs in practice, see the [Tutorial: Code a client app](tutorial-code.md).
+For a detailed walk-through of using the APIs in practice, see [Code a client app](tutorial-code.md).
### Serialization helpers
The available helper classes are:
The following list provides more detail and general guidelines for using the APIs and SDKs.
-* You can use an HTTP REST-testing tool like Postman to make direct calls to the Azure Digital Twins APIs. For more information about this process, see [How-to: Make API requests with Postman](how-to-use-postman.md).
+* You can use an HTTP REST-testing tool like Postman to make direct calls to the Azure Digital Twins APIs. For more information about this process, see [Make API requests with Postman](how-to-use-postman.md).
* To use the SDK, instantiate the `DigitalTwinsClient` class. The constructor requires credentials that can be obtained with different kinds of authentication methods in the `Azure.Identity` package. For more on `Azure.Identity`, see its [namespace documentation](/dotnet/api/azure.identity?view=azure-dotnet&preserve-view=true).
* You may find the `InteractiveBrowserCredential` useful while getting started, but there are several other options, including credentials for [managed identity](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true), which you'll likely use to authenticate [Azure functions set up with MSI](../app-service/overview-managed-identity.md?tabs=dotnet) against Azure Digital Twins. For more about `InteractiveBrowserCredential`, see its [class documentation](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true).
-* Requests to the Azure Digital Twins APIs require a user or service principal that is a part of the same [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) tenant where the Azure Digital Twins instance exists. To prevent malicious scanning of Azure Digital Twins endpoints, requests with access tokens from outside the originating tenant will be returned a "404 Sub-Domain not found" error message. This error will be returned *even if* the user or service principal was given an Azure Digital Twins Data Owner or Azure Digital Twins Data Reader role through [Azure AD B2B](../active-directory/external-identities/what-is-b2b.md) collaboration. For information on how to achieve access across multiple tenants, see [How-to: Write app authentication code](how-to-authenticate-client.md#authenticate-across-tenants).
+* Requests to the Azure Digital Twins APIs require a user or service principal that is a part of the same [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) tenant where the Azure Digital Twins instance exists. To prevent malicious scanning of Azure Digital Twins endpoints, requests with access tokens from outside the originating tenant will be returned a "404 Sub-Domain not found" error message. This error will be returned *even if* the user or service principal was given an Azure Digital Twins Data Owner or Azure Digital Twins Data Reader role through [Azure AD B2B](../active-directory/external-identities/what-is-b2b.md) collaboration. For information on how to achieve access across multiple tenants, see [Write app authentication code](how-to-authenticate-client.md#authenticate-across-tenants).
* All service API calls are exposed as member functions on the `DigitalTwinsClient` class.
* All service functions exist in synchronous and asynchronous versions.
* All service functions throw an exception for any return status of 400 or above. Make sure you wrap calls into a `try` section, and catch at least `RequestFailedExceptions`. For more about this type of exception, see its [reference documentation](/dotnet/api/azure.requestfailedexception?view=azure-dotnet&preserve-view=true).
From here, you can view the metrics for your instance and create custom views.
## Next steps

See how to make direct requests to the APIs using Postman:
-* [How-to: Make API requests with Postman](how-to-use-postman.md)
+* [Make API requests with Postman](how-to-use-postman.md)
Or, practice using the .NET SDK by creating a client app with this tutorial:
-* [Tutorial: Code a client app](tutorial-code.md)
+* [Code a client app](tutorial-code.md)
digital-twins Concepts Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-azure-digital-twins-explorer.md
Here is a view of the explorer window, showing models and twins that have been p
The visual interface is a great tool for exploring and understanding the shape of your graph and model set, as well as making pointed, ad hoc changes to individual twins and relationships.
-This article contains more information about the Azure Digital Twins Explorer, including its use cases and an overview of its features. For detailed steps on using each feature, see [How-to: Use Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
+This article contains more information about the Azure Digital Twins Explorer, including its use cases and an overview of its features. For detailed steps on using each feature, see [Use Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
## When to use

Azure Digital Twins Explorer is a visual tool designed for users who want to explore their twin graph, and modify twins and relationships in the context of their graph. Developers may find this tool especially useful in the following scenarios:
-* **Exploration**: Use the explorer to learn about Azure Digital Twins and the way it represents your real-world environment. Import sample models and graphs that you can view and edit to familiarize yourself with the service. For guided steps to get started using Azure Digital Twins Explorer, see [Quickstart: Get started with Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md).
-* **Development**: Use the explorer to view and validate your twin graph, as well as investigate specific properties of models, twins, and relationships. Make ad hoc modifications to your graph and its data. For detailed instructions on how to use each feature, see [How-to: Use Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
+* **Exploration**: Use the explorer to learn about Azure Digital Twins and the way it represents your real-world environment. Import sample models and graphs that you can view and edit to familiarize yourself with the service. For guided steps to get started using Azure Digital Twins Explorer, see [Get started with Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md).
+* **Development**: Use the explorer to view and validate your twin graph, as well as investigate specific properties of models, twins, and relationships. Make ad hoc modifications to your graph and its data. For detailed instructions on how to use each feature, see [Use Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
The explorer's main purpose is to help you visualize and understand your graph, and update your graph as needed. For large-scale solutions and for work that should be repeated or automated, consider using the [APIs and SDKs](./concepts-apis-sdks.md) to interact with your instance through code instead.
The sections of the explorer are as follows:
:::image type="content" source="media/concepts-azure-digital-twins-explorer/azure-digital-twins-explorer-panels.png" alt-text="Screenshot of Azure Digital Twins Explorer, with a highlight around each of the panels described above." lightbox="media/concepts-azure-digital-twins-explorer/azure-digital-twins-explorer-panels.png":::
-For detailed instructions on how to use each feature, see [How-to: Use Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
+For detailed instructions on how to use each feature, see [Use Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
## How to contribute
Azure Digital Twins Explorer is a free tool for interacting with the Azure Digit
## Next steps
-Learn how to use Azure Digital Twins Explorer's features in detail: [How-to: Use Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
+Learn how to use Azure Digital Twins Explorer's features in detail: [Use Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
digital-twins Concepts Data Explorer Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-data-explorer-plugin.md
The plugin works by calling the [Azure Digital Twins query API](/rest/api/digita
>[!IMPORTANT]
->The user of the plugin must be granted the **Azure Digital Twins Data Reader** role or the **Azure Digital Twins Data Owner** role, as the user's Azure AD token is used to authenticate. Information on how to assign this role can be found in [Concepts: Security for Azure Digital Twins solutions](concepts-security.md#authorization-azure-roles-for-azure-digital-twins).
+>The user of the plugin must be granted the **Azure Digital Twins Data Reader** role or the **Azure Digital Twins Data Owner** role, as the user's Azure AD token is used to authenticate. Information on how to assign this role can be found in [Security for Azure Digital Twins solutions](concepts-security.md#authorization-azure-roles-for-azure-digital-twins).
For more information on using the plugin, see the [Kusto documentation for the azure_digital_twins_query_request plugin](/azure/data-explorer/kusto/query/azure-digital-twins-query-request-plugin).
To see example queries and complete a walkthrough with sample data, see [Azure D
## Using Azure Data Explorer IoT data with Azure Digital Twins

There are various ways to ingest IoT data into Azure Data Explorer. Here are two that you might use when using Azure Data Explorer with Azure Digital Twins:
-* Historize digital twin property values to Azure Data Explorer with an Azure function that handles twin change events and writes the twin data to Azure Data Explorer, similar to the process used in [How-to: Integrate with Azure Time Series Insights](how-to-integrate-time-series-insights.md). This path will be suitable for customers who use telemetry data to bring their digital twins to life.
+* Historize digital twin property values to Azure Data Explorer with an Azure function that handles twin change events and writes the twin data to Azure Data Explorer, similar to the process used in [Integrate with Azure Time Series Insights](how-to-integrate-time-series-insights.md). This path will be suitable for customers who use telemetry data to bring their digital twins to life.
* [Ingest IoT data directly into your Azure Data Explorer cluster from IoT Hub](/azure/data-explorer/ingest-data-iot-hub) or from other sources. Then, the Azure Digital Twins graph will be used to contextualize the time series data using joint Azure Digital Twins/Azure Data Explorer queries. This path may be suitable for direct-ingestion workloads.

### Mapping data across Azure Data Explorer and Azure Digital Twins
For instance, if you want to represent a property with three fields for roll, pi
* View sample queries using the plugin, including a walkthrough that runs the queries in an example scenario: [Azure Digital Twins query plugin for Azure Data Explorer: Sample queries and walkthrough](https://github.com/Azure-Samples/azure-digital-twins-getting-started/tree/main/adt-adx-queries)
-* Read about another strategy for analyzing historical data in Azure Digital Twins: [How-to: Integrate with Azure Time Series Insights](how-to-integrate-time-series-insights.md)
+* Read about another strategy for analyzing historical data in Azure Digital Twins: [Integrate with Azure Time Series Insights](how-to-integrate-time-series-insights.md)
digital-twins Concepts Data Ingress Egress https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-data-ingress-egress.md
Azure Digital Twins can be driven with data and events from any serviceΓÇö[IoT H
Instead of having a built-in IoT Hub behind the scenes, Azure Digital Twins allows you to "bring your own" IoT Hub to use with the service. You can use an existing IoT Hub you currently have in production, or deploy a new one to be used for this purpose. This gives you full access to all of the device management capabilities of IoT Hub.
-To ingest data from any source into Azure Digital Twins, use an [Azure function](../azure-functions/functions-overview.md). Learn more about this pattern in [How-to: Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md), or try it out yourself in the Azure Digital Twins [Tutorial: Connect an end-to-end solution](tutorial-end-to-end.md).
+To ingest data from any source into Azure Digital Twins, use an [Azure function](../azure-functions/functions-overview.md). Learn more about this pattern in [Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md), or try it out yourself in the Azure Digital Twins tutorial, [Connect an end-to-end solution](tutorial-end-to-end.md).
-You can also learn how to connect Azure Digital Twins to a Logic Apps trigger in [How-to: Integrate with Logic Apps](how-to-integrate-logic-apps.md).
+You can also learn how to connect Azure Digital Twins to a Logic Apps trigger in [Integrate with Logic Apps](how-to-integrate-logic-apps.md).
## Data egress services
Azure Digital Twins can send data to connected **endpoints**. Supported endpoint
* [Event Grid](../event-grid/overview.md)
* [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md)
-Endpoints are attached to Azure Digital Twins using management APIs or the Azure portal. Learn more about how to attach an endpoint to Azure Digital Twins in [How-to: Manage endpoints and routes](how-to-manage-routes.md).
+Endpoints are attached to Azure Digital Twins using management APIs or the Azure portal. Learn more about how to attach an endpoint to Azure Digital Twins in [Manage endpoints and routes](how-to-manage-routes.md).
There are many other services where you may want to ultimately direct your data, such as [Azure Storage](../storage/common/storage-introduction.md), [Azure Maps](../azure-maps/about-azure-maps.md), [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), or [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). To send your data to services like these, attach the destination service to an endpoint.
-For example, if you are also using Azure Maps and want to correlate location with your Azure Digital Twins [twin graph](concepts-twins-graph.md), you can use Azure Functions with Event Grid to establish communication between all the services in your deployment. Learn more about this in [How-to: Use Azure Digital Twins to update an Azure Maps indoor map](how-to-integrate-maps.md)
+For example, if you are also using Azure Maps and want to correlate location with your Azure Digital Twins [twin graph](concepts-twins-graph.md), you can use Azure Functions with Event Grid to establish communication between all the services in your deployment. Learn more about this in [Use Azure Digital Twins to update an Azure Maps indoor map](how-to-integrate-maps.md).
-You can also learn how to route data in a similar way to Time Series Insights, in [How-to: Integrate with Time Series Insights](how-to-integrate-time-series-insights.md).
+You can also learn how to route data in a similar way to Time Series Insights, in [Integrate with Time Series Insights](how-to-integrate-time-series-insights.md).
## Next steps

Learn more about endpoints and routing events to external
-* [Concepts: Routing Azure Digital Twins events](concepts-route-events.md)
+* [Routing Azure Digital Twins events](concepts-route-events.md)
See how to set up Azure Digital Twins to ingest data from IoT Hub:
-* [How-to: Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md)
+* [Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md)
digital-twins Concepts Event Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-event-notifications.md
Here is an example telemetry message body:
## Next steps

Learn about delivering events to different destinations, using endpoints and routes:
-* [Concepts: Event routes](concepts-route-events.md)
+* [Event routes](concepts-route-events.md)
digital-twins Concepts High Availability Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-high-availability-disaster-recovery.md
For best practices on HA/DR, see the following Azure guidance on this topic:
Read more about getting started with Azure Digital Twins solutions:

* [What is Azure Digital Twins?](overview.md)
-* [Quickstart: Get started with Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md)
+* [Get started with Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md)
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-models.md
This section describes the current set of samples in more detail.
### Model uploader
-Once you are finished creating, extending, or selecting your models, you can upload them to your Azure Digital Twins instance to make them available for use in your solution. This is done using the [Azure Digital Twins APIs](concepts-apis-sdks.md), as described in [How-to: Manage DTDL models](how-to-manage-model.md#upload-models).
+Once you are finished creating, extending, or selecting your models, you can upload them to your Azure Digital Twins instance to make them available for use in your solution. This is done using the [Azure Digital Twins APIs](concepts-apis-sdks.md), as described in [Manage DTDL models](how-to-manage-model.md#upload-models).
However, if you have many models to upload, or if they have many interdependencies that would make ordering individual uploads complicated, you can use the [Azure Digital Twins Model Uploader sample](https://github.com/Azure/opendigitaltwins-tools/tree/master/ADTTools#uploadmodels) to upload many models at once. Follow the instructions provided with the sample to configure and use this project to upload models into your own instance.
Once you have uploaded models into your Azure Digital Twins instance, you can vi
## Next steps
-* Learn about creating models based on industry-standard ontologies: [Concepts: What is an ontology?](concepts-ontologies.md)
+* Learn about creating models based on industry-standard ontologies: [What is an ontology?](concepts-ontologies.md)
-* Dive deeper into managing models with API operations: [How-to: Manage DTDL models](how-to-manage-model.md)
+* Dive deeper into managing models with API operations: [Manage DTDL models](how-to-manage-model.md)
-* Learn about how models are used to create digital twins: [Concepts: Digital twins and the twin graph](concepts-twins-graph.md)
+* Learn about how models are used to create digital twins: [Digital twins and the twin graph](concepts-twins-graph.md)
digital-twins Concepts Ontologies Adopt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-ontologies-adopt.md
You can also read more about the partnerships and approach for energy grids in t
## Next steps
-* Learn more about extending industry-standard ontologies to meet your specifications: [Concepts: Extending industry ontologies](concepts-ontologies-extend.md).
+* Learn more about extending industry-standard ontologies to meet your specifications: [Extending industry ontologies](concepts-ontologies-extend.md).
* Or, continue on the path for developing models based on ontologies: [Using ontology strategies in a model development path](concepts-ontologies.md#using-ontology-strategies-in-a-model-development-path).
digital-twins Concepts Ontologies Convert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-ontologies-convert.md
This converter was used to translate the [Real Estate Core Ontology](https://doc
## Next steps
-* Learn more about extending industry-standard ontologies to meet your specifications: [Concepts: Extending industry ontologies](concepts-ontologies-extend.md).
+* Learn more about extending industry-standard ontologies to meet your specifications: [Extending industry ontologies](concepts-ontologies-extend.md).
* Or, continue on the path for developing models based on ontologies: [Using ontology strategies in a model development path](concepts-ontologies.md#using-ontology-strategies-in-a-model-development-path).
digital-twins Concepts Ontologies Extend https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-ontologies-extend.md
A portion of the hierarchy looks like the diagram below.
:::image type="content" source="media/concepts-ontologies-extend/real-estate-core-original.png" alt-text="Diagram illustrating part of the RealEstateCore space hierarchy. It shows elements for Space, Room, ConferenceRoom, and Office.":::
-For more information about the RealEstateCore ontology, see [Concepts: Adopting industry-standard ontologies](concepts-ontologies-adopt.md#realestatecore-smart-building-ontology).
+For more information about the RealEstateCore ontology, see [Adopting industry-standard ontologies](concepts-ontologies-adopt.md#realestatecore-smart-building-ontology).
## Extending the RealEstateCore space hierarchy
digital-twins Concepts Ontologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-ontologies.md
There are three possible strategies for integrating industry-standard ontologies
| Strategy | Description | Resources |
| --- | --- | --- |
-| **Adopt** | You can start your solution with an open-source DTDL ontology that has been built on widely adopted industry standards. You can either use these model sets out-of-the-box, or extend them with your own additions for a customized solution. | [Concepts:&nbsp;Adopting&nbsp;industry&nbsp;standard ontologies](concepts-ontologies-adopt.md)<br><br>[Concepts:&nbsp;Extending&nbsp;ontologies](concepts-ontologies-extend.md) |
-| **Convert** | If you already have existing models represented in another standard format, you can convert them to DTDL to use them with Azure Digital Twins. | [Concepts:&nbsp;Converting&nbsp;ontologies](concepts-ontologies-convert.md)<br><br>[Concepts:&nbsp;Extending&nbsp;ontologies](concepts-ontologies-extend.md) |
-| **Author** | You can always develop your own custom DTDL models from scratch, using any applicable industry standards as inspiration. | [Concepts: DTDL models](concepts-models.md) |
+| **Adopt** | You can start your solution with an open-source DTDL ontology that has been built on widely adopted industry standards. You can either use these model sets out-of-the-box, or extend them with your own additions for a customized solution. | [Adopting&nbsp;industry&nbsp;standard ontologies](concepts-ontologies-adopt.md)<br><br>[Extending&nbsp;ontologies](concepts-ontologies-extend.md) |
+| **Convert** | If you already have existing models represented in another standard format, you can convert them to DTDL to use them with Azure Digital Twins. | [Converting&nbsp;ontologies](concepts-ontologies-convert.md)<br><br>[Extending&nbsp;ontologies](concepts-ontologies-extend.md) |
+| **Author** | You can always develop your own custom DTDL models from scratch, using any applicable industry standards as inspiration. | [DTDL models](concepts-models.md) |
### Using ontology strategies in a model development path
After this, you should be able to use your models in your Azure Digital Twins in
## Next steps
Read more about the strategies of adopting, converting, and authoring ontologies:
-* [Concepts: Adopting industry-standard ontologies](concepts-ontologies-adopt.md)
-* [Concepts: Converting ontologies](concepts-ontologies-convert.md)
-* [How to: Manage DTDL models](how-to-manage-model.md)
+* [Adopting industry-standard ontologies](concepts-ontologies-adopt.md)
+* [Converting ontologies](concepts-ontologies-convert.md)
+* [Manage DTDL models](how-to-manage-model.md)
-Or, learn about how models are used to create digital twins: [Concepts: Digital twins and the twin graph](concepts-twins-graph.md).
+Or, learn about how models are used to create digital twins: [Digital twins and the twin graph](concepts-twins-graph.md).
digital-twins Concepts Query Language https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-query-language.md
Recall that the center of Azure Digital Twins is the [twin graph](concepts-twins
This graph can be queried to get information about the digital twins and relationships it contains. These queries are written in a custom SQL-like query language, referred to as the **Azure Digital Twins query language**. This language is similar to the [IoT Hub query language](../iot-hub/iot-hub-devguide-query-language.md) with many comparable features.
-This article describes the basics of the query language and its capabilities. For more detailed examples of query syntax and how to run query requests, see [How-to: Query the twin graph](how-to-query-graph.md).
+This article describes the basics of the query language and its capabilities. For more detailed examples of query syntax and how to run query requests, see [Query the twin graph](how-to-query-graph.md).
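As a rough illustration (assuming a hypothetical `dtmi:example:Room;1` model and an already-authenticated `DigitalTwinsClient` named `client`), running a query through the .NET SDK might look like this:

```csharp
using System;
using Azure;
using Azure.DigitalTwins.Core;

// Illustrative query: return all twins created from a hypothetical Room model.
string query = "SELECT * FROM DIGITALTWINS WHERE IS_OF_MODEL('dtmi:example:Room;1')";

AsyncPageable<BasicDigitalTwin> results = client.QueryAsync<BasicDigitalTwin>(query);
await foreach (BasicDigitalTwin twin in results)
{
    Console.WriteLine($"Found twin: {twin.Id}");
}
```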
## About the queries
When writing queries for Azure Digital Twins, keep the following considerations
## Next steps
-Learn how to write queries and see client code examples in [How-to: Query the twin graph](how-to-query-graph.md).
+Learn how to write queries and see client code examples in [Query the twin graph](how-to-query-graph.md).
digital-twins Concepts Query Units https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-query-units.md
The following code snippet demonstrates how you can extract the query charges in
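A rough sketch of that pattern, assuming an authenticated `DigitalTwinsClient` named `client` and the `QueryChargeHelper` class provided by the `Azure.DigitalTwins.Core` .NET package:

```csharp
using System;
using Azure;
using Azure.DigitalTwins.Core;

AsyncPageable<BasicDigitalTwin> response = client.QueryAsync<BasicDigitalTwin>("SELECT * FROM DIGITALTWINS");

// Iterate page by page so the charge reported for each page of results can be read.
await foreach (Page<BasicDigitalTwin> page in response.AsPages())
{
    if (QueryChargeHelper.TryGetQueryCharge(page, out float queryCharge))
    {
        Console.WriteLine($"Query charge for this page: {queryCharge}");
    }
}
```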
To learn more about querying Azure Digital Twins, visit:
-* [Concepts: Query language](concepts-query-language.md)
-* [How-to: Query the twin graph](how-to-query-graph.md)
+* [Query language](concepts-query-language.md)
+* [Query the twin graph](how-to-query-graph.md)
* [Query API reference documentation](/rest/api/digital-twins/dataplane/query/querytwins)
You can find Azure Digital Twins query-related limits in [Azure Digital Twins service limits](reference-service-limits.md).
digital-twins Concepts Route Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-route-events.md
Alternatively, the event message also contains the ID of the source twin that se
The compute resource also needs to establish security and access permissions independently.
-To walk through the process of setting up an Azure function to process digital twin events, see [How-to: Set up an Azure function for processing data](how-to-create-azure-function.md).
+To walk through the process of setting up an Azure function to process digital twin events, see [Set up an Azure function for processing data](how-to-create-azure-function.md).
## Create an endpoint
Before setting the dead-letter location, you must have a storage account with a
To learn more about SAS tokens, see: [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../storage/common/storage-sas-overview.md)
-To learn how to set up an endpoint with dead-lettering, see [How-to: Manage endpoints and routes in Azure Digital Twins](how-to-manage-routes.md#create-an-endpoint-with-dead-lettering).
+To learn how to set up an endpoint with dead-lettering, see [Manage endpoints and routes in Azure Digital Twins](how-to-manage-routes.md#create-an-endpoint-with-dead-lettering).
### Types of event messages
Different types of events in IoT Hub and Azure Digital Twins produce different t
## Next steps
See how to set up and manage an event route:
-* [How-to: Manage endpoints and routes](how-to-manage-routes.md)
+* [Manage endpoints and routes](how-to-manage-routes.md)
Or, see how to use Azure Functions to route events within Azure Digital Twins:
-* [How-to: Set up an Azure function for processing data](how-to-create-azure-function.md)
+* [Set up an Azure function for processing data](how-to-create-azure-function.md)
digital-twins Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-security.md
You can assign roles in two ways:
* via the access control (IAM) pane for Azure Digital Twins in the Azure portal (see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md))
* via CLI commands to add or remove a role
-For more detailed steps on how to do this, try it out in the Azure Digital Twins [Tutorial: Connect an end-to-end solution](tutorial-end-to-end.md).
+For more detailed steps on how to do this, try it out in the Azure Digital Twins tutorial [Connect an end-to-end solution](tutorial-end-to-end.md).
For more information about how built-in roles are defined, see [Understand role definitions](../role-based-access-control/role-definitions.md) in the Azure RBAC documentation. For information about creating Azure custom roles, see [Azure custom roles](../role-based-access-control/custom-roles.md).
Azure supports two types of managed identities: system-assigned and user-assigne
You can use a system-assigned managed identity for your Azure Digital Twins instance to authenticate to a [custom-defined endpoint](concepts-route-events.md#create-an-endpoint). Azure Digital Twins supports system-assigned identity-based authentication to endpoints for [Event Hub](../event-hubs/event-hubs-about.md) and [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) destinations, and to an [Azure Storage Container](../storage/blobs/storage-blobs-introduction.md) endpoint for [dead-letter events](concepts-route-events.md#dead-letter-events). [Event Grid](../event-grid/overview.md) endpoints are currently not supported for managed identities.
-For instructions on how to enable a system-managed identity for Azure Digital Twins and use it to route events, see [How-to: Route events with a managed identity](how-to-route-with-managed-identity.md).
+For instructions on how to enable a system-assigned managed identity for Azure Digital Twins and use it to route events, see [Route events with a managed identity](how-to-route-with-managed-identity.md).
## Private network access with Azure Private Link (preview)
The private endpoint uses an IP address from your Azure VNet address space. Netw
Configuring a private endpoint for your Azure Digital Twins instance enables you to secure your Azure Digital Twins instance and eliminate public exposure, as well as avoid data exfiltration from your VNet.
-For instructions on how to set up Private Link for Azure Digital Twins, see [How-to: Enable private access with Private Link (preview)](./how-to-enable-private-link.md).
+For instructions on how to set up Private Link for Azure Digital Twins, see [Enable private access with Private Link (preview)](./how-to-enable-private-link.md).
### Design considerations
To resolve this error, you can do one of the following actions:
## Next steps
-* See these concepts in action in [How-to: Set up an instance and authentication](how-to-set-up-instance-portal.md).
+* See these concepts in action in [Set up an instance and authentication](how-to-set-up-instance-portal.md).
-* See how to interact with these concepts from client application code in [How-to: Write app authentication code](how-to-authenticate-client.md).
+* See how to interact with these concepts from client application code in [Write app authentication code](how-to-authenticate-client.md).
* Read more about [Azure RBAC](../role-based-access-control/overview.md).
digital-twins Concepts Twins Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-twins-graph.md
In an Azure Digital Twins solution, the entities in your environment are represe
## Digital twins
-Before you can create a digital twin in your Azure Digital Twins instance, you need to have a *model* uploaded to the service. A model describes the set of properties, telemetry messages, and relationships that a particular twin can have, among other things. For the types of information that are defined in a model, see [Concepts: Custom models](concepts-models.md).
+Before you can create a digital twin in your Azure Digital Twins instance, you need to have a *model* uploaded to the service. A model describes the set of properties, telemetry messages, and relationships that a particular twin can have, among other things. For the types of information that are defined in a model, see [Custom models](concepts-models.md).
After creating and uploading a model, your client app can create an instance of the type; this is a digital twin. For example, after creating a model of Floor, you may create one or several digital twins that use this type (like a Floor-type twin called GroundFloor, another called Floor2, etc.).
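For instance, a minimal sketch of creating such a twin with the .NET SDK (the `dtmi:example:Floor;1` model ID and the property are placeholders, and `client` is an already-authenticated `DigitalTwinsClient`):

```csharp
using Azure.DigitalTwins.Core;

var twinData = new BasicDigitalTwin
{
    Id = "GroundFloor",
    Metadata = { ModelId = "dtmi:example:Floor;1" },  // model must already be uploaded to the instance
    Contents =
    {
        { "AverageTemperature", 21.0 },               // illustrative property defined by the model
    },
};

await client.CreateOrReplaceDigitalTwinAsync<BasicDigitalTwin>(twinData.Id, twinData);
```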
Here is an example of a relationship formatted as a JSON object:
## Next steps
See how to manage graph elements with the Azure Digital Twins APIs:
-* [How-to: Manage digital twins](how-to-manage-twin.md)
-* [How-to: Manage the twin graph with relationships](how-to-manage-graph.md)
+* [Manage digital twins](how-to-manage-twin.md)
+* [Manage the twin graph with relationships](how-to-manage-graph.md)
Or, learn about querying the Azure Digital Twins twin graph for information:
-* [Concepts: Query language](concepts-query-language.md)
+* [Query language](concepts-query-language.md)
digital-twins How To Authenticate Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-authenticate-client.md
After you [set up an Azure Digital Twins instance and authentication](how-to-set
Azure Digital Twins performs authentication using [Azure AD Security Tokens based on OAUTH 2.0](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims). To authenticate your SDK, you'll need to get a bearer token with the right permissions to Azure Digital Twins, and pass it along with your API calls.
-This article describes how to obtain credentials using the `Azure.Identity` client library. While this article shows code examples in C#, such as what you'd write for the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true), you can use a version of `Azure.Identity` regardless of what SDK you're using (for more on the SDKs available for Azure Digital Twins, see [Concepts: Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md)).
+This article describes how to obtain credentials using the `Azure.Identity` client library. While this article shows code examples in C#, such as what you'd write for the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true), you can use a version of `Azure.Identity` regardless of what SDK you're using (for more on the SDKs available for Azure Digital Twins, see [Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md)).
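As a minimal sketch (the instance host name is a placeholder), creating an authenticated client with `DefaultAzureCredential` might look like this:

```csharp
using System;
using Azure.DigitalTwins.Core;
using Azure.Identity;

string adtInstanceUrl = "https://<your-instance-hostname>";  // placeholder

// DefaultAzureCredential tries several credential sources (environment, managed identity, Azure CLI, and so on).
var credential = new DefaultAzureCredential();
var client = new DigitalTwinsClient(new Uri(adtInstanceUrl), credential);
```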
## Prerequisites
-First, complete the setup steps in [How-to: Set up an instance and authentication](how-to-set-up-instance-portal.md). This will ensure that you have an Azure Digital Twins instance and that your user has access permissions. After that setup, you are ready to write client app code.
+First, complete the setup steps in [Set up an instance and authentication](how-to-set-up-instance-portal.md). This will ensure that you have an Azure Digital Twins instance and that your user has access permissions. After that setup, you are ready to write client app code.
To proceed, you will need a client app project in which you write your code. If you don't already have a client app project set up, create a basic project in your language of choice to use with this tutorial.
In an Azure function, you can use the managed identity credentials like this:
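As a rough sketch of that approach, assuming the function app has a system-assigned identity and a hypothetical `ADT_SERVICE_URL` application setting that holds the instance URL:

```csharp
using System;
using Azure.DigitalTwins.Core;
using Azure.Identity;

// Read the instance URL from app settings and authenticate with the function app's system-assigned identity.
string adtServiceUrl = Environment.GetEnvironmentVariable("ADT_SERVICE_URL");
var client = new DigitalTwinsClient(new Uri(adtServiceUrl), new ManagedIdentityCredential());
```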
The [InteractiveBrowserCredential](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true) method is intended for interactive applications and will bring up a web browser for authentication. You can use this instead of `DefaultAzureCredential` in cases where you require interactive authentication.
-To use the interactive browser credentials, you will need an **app registration** that has permissions to the Azure Digital Twins APIs. For steps on how to set up this app registration, see [How-to: Create an app registration](./how-to-create-app-registration-portal.md). Once the app registration is set up, you'll need...
+To use the interactive browser credentials, you will need an **app registration** that has permissions to the Azure Digital Twins APIs. For steps on how to set up this app registration, see [Create an app registration](./how-to-create-app-registration-portal.md). Once the app registration is set up, you'll need...
* [the app registration's Application (client) ID](./how-to-create-app-registration-portal.md#collect-client-id-and-tenant-id)
* [the app registration's Directory (tenant) ID](./how-to-create-app-registration-portal.md#collect-client-id-and-tenant-id)
* [the Azure Digital Twins instance's URL](how-to-set-up-instance-portal.md#verify-success-and-collect-important-values)
Here is an example of the code to create an authenticated SDK client using `Inte
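A sketch of that setup, with placeholder values for the client ID, tenant ID, and instance URL collected above:

```csharp
using System;
using Azure.DigitalTwins.Core;
using Azure.Identity;

var options = new InteractiveBrowserCredentialOptions
{
    ClientId = "<app-registration-client-id>",  // placeholder
    TenantId = "<directory-tenant-id>",         // placeholder
};

// A browser window opens for sign-in the first time a token is requested.
var credential = new InteractiveBrowserCredential(options);
var client = new DigitalTwinsClient(new Uri("https://<your-instance-hostname>"), credential);
```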
#### Other notes about authenticating Azure Functions
-See [How-to: Set up an Azure function for processing data](how-to-create-azure-function.md) for a more complete example that explains some of the important configuration choices in the context of functions.
+See [Set up an Azure function for processing data](how-to-create-azure-function.md) for a more complete example that explains some of the important configuration choices in the context of functions.
Also, to use authentication in a function, remember to:
* [Enable managed identity](../app-service/overview-managed-identity.md?tabs=dotnet)
* Use [environment variables](/sandbox/functions-recipes/environment-variables?tabs=csharp) as appropriate
-* Assign permissions to the functions app that enable it to access the Digital Twins APIs. For more information on Azure Functions processes, see [How-to: Set up an Azure function for processing data](how-to-create-azure-function.md).
+* Assign permissions to the functions app that enable it to access the Digital Twins APIs. For more information on Azure Functions processes, see [Set up an Azure function for processing data](how-to-create-azure-function.md).
## Authenticate across tenants
If the highlighted authentication scenarios above do not cover the needs of your
## Next steps
Read more about how security works in Azure Digital Twins:
-* [Concepts: Security for Azure Digital Twins solutions](concepts-security.md)
+* [Security for Azure Digital Twins solutions](concepts-security.md)
Or, now that authentication is set up, move on to creating and managing models in your instance:
-* [How-to: Manage DTDL models](how-to-manage-model.md)
+* [Manage DTDL models](how-to-manage-model.md)
digital-twins How To Create App Registration Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-app-registration-cli.md
For more information about app registration and its different setup options, see
In this article, you set up an Azure AD app registration that can be used to authenticate client applications with the Azure Digital Twins APIs. Next, read about authentication mechanisms, including one that uses app registrations and others that do not:
-* [How-to: Write app authentication code](how-to-authenticate-client.md)
+* [Write app authentication code](how-to-authenticate-client.md)
digital-twins How To Create App Registration Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-app-registration-portal.md
[!INCLUDE [digital-twins-create-app-registration-selector.md](../../includes/digital-twins-create-app-registration-selector.md)]
-When working with an Azure Digital Twins instance, it is common to interact with that instance through client applications, such as the custom client app built in the [Tutorial: Code a client app](tutorial-code.md). Those applications need to authenticate with Azure Digital Twins in order to interact with it, and some of the [authentication mechanisms](how-to-authenticate-client.md) that apps can use involve an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) **app registration**.
+When working with an Azure Digital Twins instance, it is common to interact with that instance through client applications, such as the custom client app built in [Code a client app](tutorial-code.md). Those applications need to authenticate with Azure Digital Twins in order to interact with it, and some of the [authentication mechanisms](how-to-authenticate-client.md) that apps can use involve an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) **app registration**.
This is not required for all authentication scenarios. However, if you are using an authentication strategy or code sample that does require an app registration, this article shows you how to set one up using the [Azure portal](https://portal.azure.com). It also covers how to [collect important values](#collect-important-values) that you'll need in order to use the app registration to authenticate.
For more information about app registration and its different setup options, see
In this article, you set up an Azure AD app registration that can be used to authenticate client applications with the Azure Digital Twins APIs. Next, read about authentication mechanisms, including one that uses app registrations and others that do not:
-* [How-to: Write app authentication code](how-to-authenticate-client.md)
+* [Write app authentication code](how-to-authenticate-client.md)
digital-twins How To Enable Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-enable-private-link.md
This section describes how to turn on Private Link while setting up an Azure Dig
The Private Link options are located in the **Networking** tab of instance setup.
-1. Begin setting up an Azure Digital Twins instance in the Azure portal. For instructions, see [How-to: Set up an instance and authentication](how-to-set-up-instance-portal.md).
+1. Begin setting up an Azure Digital Twins instance in the Azure portal. For instructions, see [Set up an instance and authentication](how-to-set-up-instance-portal.md).
1. When you reach the **Networking** tab of instance setup, you can enable private endpoints by selecting the **Private endpoint** option for the **Connectivity method**. This will add a section called **Private endpoint connections** where you can configure the details of your private endpoint. Select the **+ Add** button to continue.
digital-twins How To Ingest Iot Hub Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-ingest-iot-hub-data.md
This how-to document walks through the process for writing a function that can i
Before continuing with this example, you'll need to set up the following resources as prerequisites:
* **An IoT hub**. For instructions, see the *Create an IoT Hub* section of this [IoT Hub quickstart](../iot-hub/quickstart-send-telemetry-cli.md).
-* **An Azure Digital Twins instance** that will receive your device telemetry. For instructions, see [How-to: Set up an Azure Digital Twins instance and authentication](./how-to-set-up-instance-portal.md).
+* **An Azure Digital Twins instance** that will receive your device telemetry. For instructions, see [Set up an Azure Digital Twins instance and authentication](./how-to-set-up-instance-portal.md).
This article also uses **Visual Studio**. You can download the latest version from [Visual Studio Downloads](https://visualstudio.microsoft.com/downloads/).
Select the _Create_ button to create the event subscription.
## Send simulated IoT data
-To test your new ingress function, use the device simulator from [Tutorial: Connect an end-to-end solution](./tutorial-end-to-end.md). That tutorial is driven by this [Azure Digital Twins end-to-end sample project written in C#](/samples/azure-samples/digital-twins-samples/digital-twins-samples). You'll be using the **DeviceSimulator** project in that repository.
+To test your new ingress function, use the device simulator from [Connect an end-to-end solution](./tutorial-end-to-end.md). That tutorial is driven by this [Azure Digital Twins end-to-end sample project written in C#](/samples/azure-samples/digital-twins-samples/digital-twins-samples). You'll be using the **DeviceSimulator** project in that repository.
In the end-to-end tutorial, complete the following steps:
1. [Register the simulated device with IoT Hub](./tutorial-end-to-end.md#register-the-simulated-device-with-iot-hub)
To see the value change, repeatedly run the query command above.
## Next steps
Read about data ingress and egress with Azure Digital Twins:
-* [Concepts: Data ingress and egress](concepts-data-ingress-egress.md)
+* [Data ingress and egress](concepts-data-ingress-egress.md)
digital-twins How To Ingest Opcua Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-ingest-opcua-data.md
For this example, you'll use a single model and a single twin instance to match
### Create Azure Digital Twins instance
-First, deploy a new Azure Digital Twins instance, using the guidance in [How-to: Set up an instance and authentication](how-to-set-up-instance-portal.md).
+First, deploy a new Azure Digital Twins instance, using the guidance in [Set up an instance and authentication](how-to-set-up-instance-portal.md).
### Upload model and create twin
digital-twins How To Integrate Azure Signalr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-azure-signalr.md
The solution described in this article allows you to push digital twin telemetry
Here are the prerequisites you should complete before proceeding:
-* Before integrating your solution with Azure SignalR Service in this article, you should complete the Azure Digital Twins [Tutorial: Connect an end-to-end solution](tutorial-end-to-end.md), because this how-to article builds on top of it. The tutorial walks you through setting up an Azure Digital Twins instance that works with a virtual IoT device to trigger digital twin updates. This how-to article will connect those updates to a sample web app using Azure SignalR Service.
+* Before integrating your solution with Azure SignalR Service in this article, you should complete the Azure Digital Twins tutorial [Connect an end-to-end solution](tutorial-end-to-end.md), because this how-to article builds on top of it. The tutorial walks you through setting up an Azure Digital Twins instance that works with a virtual IoT device to trigger digital twin updates. This how-to article will connect those updates to a sample web app using Azure SignalR Service.
* You'll need the following values from the tutorial:
  - Event grid topic
You'll be attaching Azure SignalR Service to Azure Digital Twins through the pat
## Download the sample applications
First, download the required sample apps. You will need both of the following:
-* [Azure Digital Twins end-to-end samples](/samples/azure-samples/digital-twins-samples/digital-twins-samples/): This sample contains an *AdtSampleApp* that holds two Azure functions for moving data around an Azure Digital Twins instance (you can learn about this scenario in more detail in [Tutorial: Connect an end-to-end solution](tutorial-end-to-end.md)). It also contains a *DeviceSimulator* sample application that simulates an IoT device, generating a new temperature value every second.
+* [Azure Digital Twins end-to-end samples](/samples/azure-samples/digital-twins-samples/digital-twins-samples/): This sample contains an *AdtSampleApp* that holds two Azure functions for moving data around an Azure Digital Twins instance (you can learn about this scenario in more detail in [Connect an end-to-end solution](tutorial-end-to-end.md)). It also contains a *DeviceSimulator* sample application that simulates an IoT device, generating a new temperature value every second.
- If you haven't already downloaded the sample as part of the tutorial in [Prerequisites](#prerequisites), [navigate to the sample](/samples/azure-samples/digital-twins-samples/digital-twins-samples/) and select the *Browse code* button underneath the title. This will take you to the GitHub repo for the samples, which you can download as a .zip by selecting the *Code* button and *Download ZIP*.
  :::image type="content" source="media/includes/download-repo-zip.png" alt-text="Screenshot of the digital-twins-samples repo on GitHub and the steps for downloading it as a zip." lightbox="media/includes/download-repo-zip.png":::
digital-twins How To Integrate Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-logic-apps.md
You also need to complete the following items as part of prerequisite setup. The
This article uses Logic Apps to update a twin in your Azure Digital Twins instance. To proceed, you should add at least one twin in your instance.
-You can add twins using the [DigitalTwins APIs](/rest/api/digital-twins/dataplane/twins), the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true), or the [Azure Digital Twins CLI](/cli/azure/dt?view=azure-cli-latest&preserve-view=true). For detailed steps on how to create twins using these methods, see [How-to: Manage digital twins](how-to-manage-twin.md).
+You can add twins using the [DigitalTwins APIs](/rest/api/digital-twins/dataplane/twins), the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true), or the [Azure Digital Twins CLI](/cli/azure/dt?view=azure-cli-latest&preserve-view=true). For detailed steps on how to create twins using these methods, see [Manage digital twins](how-to-manage-twin.md).
You will need the **Twin ID** of a twin in your instance that you've created.
Now that your logic app has been created, the twin update event you defined in t
You can query your twin via your method of choice (such as a [custom client app](tutorial-command-line-app.md), the [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md), the [SDKs and APIs](concepts-apis-sdks.md), or the [CLI](concepts-cli.md)).
-For more about querying your Azure Digital Twins instance, see [How-to: Query the twin graph](how-to-query-graph.md).
+For more about querying your Azure Digital Twins instance, see [Query the twin graph](how-to-query-graph.md).
## Next steps
In this article, you created a logic app that regularly updates a twin in your Azure Digital Twins instance with a patch that you provided. You can try out selecting other APIs in the custom connector to create Logic Apps for a variety of actions on your instance.
-To read more about the APIs operations available and the details they require, visit [Concepts: Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md).
+To read more about the API operations available and the details they require, visit [Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md).
digital-twins How To Integrate Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-maps.md
This how-to will cover:
### Prerequisites
-* Follow the Azure Digital Twins [Tutorial: Connect an end-to-end solution](./tutorial-end-to-end.md).
+* Follow the Azure Digital Twins tutorial [Connect an end-to-end solution](./tutorial-end-to-end.md).
* You'll be extending this twin with an additional endpoint and route. You will also be adding another function to your function app from that tutorial.
-* Follow the Azure Maps [Tutorial: Use Azure Maps Creator to create indoor maps](../azure-maps/tutorial-creator-indoor-maps.md) to create an Azure Maps indoor map with a *feature stateset*.
+* Follow the Azure Maps tutorial [Use Azure Maps Creator to create indoor maps](../azure-maps/tutorial-creator-indoor-maps.md) to create an Azure Maps indoor map with a *feature stateset*.
* [Feature statesets](../azure-maps/creator-indoor-maps.md#feature-statesets) are collections of dynamic properties (states) assigned to dataset features such as rooms or equipment. In the Azure Maps tutorial above, the feature stateset stores room status that you will be displaying on a map.
* You will need your feature *stateset ID* and Azure Maps *subscription key*.
First, you'll create a route in Azure Digital Twins to forward all twin update e
## Create a route and filter to twin update notifications
-Azure Digital Twins instances can emit twin update events whenever a twin's state is updated. The Azure Digital Twins [Tutorial: Connect an end-to-end solution](./tutorial-end-to-end.md) linked above walks through a scenario where a thermometer is used to update a temperature attribute attached to a room's twin. You'll be extending that solution by subscribing to update notifications for twins, and using that information to update your maps.
+Azure Digital Twins instances can emit twin update events whenever a twin's state is updated. The Azure Digital Twins tutorial [Connect an end-to-end solution](./tutorial-end-to-end.md) linked above walks through a scenario where a thermometer is used to update a temperature attribute attached to a room's twin. You'll be extending that solution by subscribing to update notifications for twins, and using that information to update your maps.
This pattern reads from the room twin directly, rather than the IoT device, which gives you the flexibility to change the underlying data source for temperature without needing to update your mapping logic. For example, you can add multiple thermometers or set this room to share a thermometer with another room, all without needing to update your map logic.
## Create a function to update maps
-You're going to create an **Event Grid-triggered function** inside your function app from the end-to-end tutorial ([Tutorial: Connect an end-to-end solution](./tutorial-end-to-end.md)). This function will unpack those notifications and send updates to an Azure Maps feature stateset to update the temperature of one room.
+You're going to create an **Event Grid-triggered function** inside your function app from the end-to-end tutorial ([Connect an end-to-end solution](./tutorial-end-to-end.md)). This function will unpack those notifications and send updates to an Azure Maps feature stateset to update the temperature of one room.
See the following document for reference info: [Azure Event Grid trigger for Azure Functions](../azure-functions/functions-bindings-event-grid-trigger.md).
az functionapp config appsettings set --name <your-App-Service-function-app-name
To see live-updating temperature, follow the steps below:
-1. Begin sending simulated IoT data by running the **DeviceSimulator** project from the Azure Digital Twins [Tutorial: Connect an end-to-end solution](tutorial-end-to-end.md). The instructions for this are in the [Configure and run the simulation](././tutorial-end-to-end.md#configure-and-run-the-simulation) section.
+1. Begin sending simulated IoT data by running the **DeviceSimulator** project from the Azure Digital Twins tutorial [Connect an end-to-end solution](tutorial-end-to-end.md). The instructions for this are in the [Configure and run the simulation](./tutorial-end-to-end.md#configure-and-run-the-simulation) section.
2. Use [the Azure Maps Indoor module](../azure-maps/how-to-use-indoor-module.md) to render your indoor maps created in Azure Maps Creator.
- 1. Copy the HTML from the [Example: Use the Indoor Maps Module](../azure-maps/how-to-use-indoor-module.md#example-use-the-indoor-maps-module) section of the indoor maps [Tutorial: Use the Azure Maps Indoor Maps module](../azure-maps/how-to-use-indoor-module.md) to a local file.
+ 1. Copy the HTML from the [Example: Use the Indoor Maps Module](../azure-maps/how-to-use-indoor-module.md#example-use-the-indoor-maps-module) section of the indoor maps tutorial [Use the Azure Maps Indoor Maps module](../azure-maps/how-to-use-indoor-module.md) to a local file.
 1. Replace the *subscription key*, *tilesetId*, and *statesetID* in the local HTML file with your values.
 1. Open that file in your browser.
Depending on the configuration of your topology, you will be able to store these
To read more about managing, upgrading, and retrieving information from the twins graph, see the following references:
-* [How-to: Manage digital twins](./how-to-manage-twin.md)
-* [How-to: Query the twin graph](./how-to-query-graph.md)
+* [Manage digital twins](./how-to-manage-twin.md)
+* [Query the twin graph](./how-to-query-graph.md)
digital-twins How To Integrate Time Series Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-time-series-insights.md
The solution described in this article will allow you to gather and analyze hist
## Prerequisites
Before you can set up a relationship with Time Series Insights, you'll need to set up the following resources:
-* An **Azure Digital Twins instance**. For instructions, see [How-to: Set up an Azure Digital Twins instance and authentication](./how-to-set-up-instance-portal.md).
+* An **Azure Digital Twins instance**. For instructions, see [Set up an Azure Digital Twins instance and authentication](./how-to-set-up-instance-portal.md).
* A **model and a twin in the Azure Digital Twins instance**. You'll need to update the twin's information a few times to see that data tracked in Time Series Insights. For instructions, see the [Add a model and twin](how-to-ingest-iot-hub-data.md#add-a-model-and-twin) section of the *Ingest telemetry from IoT Hub* article.
> [!TIP]
-> In this article, the changing digital twin values that are viewed in Time Series Insights are updated manually for simplicity. However, if you want to complete this article with live simulated data, you can set up an Azure function that updates digital twins based on IoT telemetry events from a simulated device. For instructions, follow [How to: Ingest IoT Hub data](how-to-ingest-iot-hub-data.md), including the final steps to run the device simulator and validate that the data flow works.
+> In this article, the changing digital twin values that are viewed in Time Series Insights are updated manually for simplicity. However, if you want to complete this article with live simulated data, you can set up an Azure function that updates digital twins based on IoT telemetry events from a simulated device. For instructions, follow [Ingest IoT Hub data](how-to-ingest-iot-hub-data.md), including the final steps to run the device simulator and validate that the data flow works.
>
> Later, look for another TIP to show you where to start running the device simulator and have your Azure functions update the twins automatically, instead of sending manual digital twin update commands.
You will be attaching Time Series Insights to Azure Digital Twins through the pa
## Create event hub namespace
-Before creating the event hubs, you'll first create an event hub namespace that will receive events from your Azure Digital Twins instance. You can either use the Azure CLI instructions below, or use the Azure portal: [Quickstart: Create an event hub using Azure portal](../event-hubs/event-hubs-create.md). To see what regions support event hubs, visit [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=event-hubs).
+Before creating the event hubs, you'll first create an event hub namespace that will receive events from your Azure Digital Twins instance. You can either use the Azure CLI instructions below, or use the Azure portal by following [Create an event hub using Azure portal](../event-hubs/event-hubs-create.md). To see what regions support event hubs, visit [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=event-hubs).
```azurecli-interactive
az eventhubs namespace create --name <name-for-your-Event-Hubs-namespace> --resource-group <your-resource-group> --location <region>
az functionapp config appsettings set --settings "EventHubAppSetting-TSI=<your-t
## Create and connect a Time Series Insights instance
-In this section, you'll set up Time Series Insights instance to receive data from your time series hub. For more details about this process, see [Tutorial: Set up an Azure Time Series Insights Gen2 PAYG environment](../time-series-insights/tutorial-set-up-environment.md). Follow the steps below to create a time series insights environment.
+In this section, you'll set up a Time Series Insights instance to receive data from your time series hub. For more details about this process, see [Set up an Azure Time Series Insights Gen2 PAYG environment](../time-series-insights/tutorial-set-up-environment.md). Follow the steps below to create a Time Series Insights environment.
1. In the [Azure portal](https://portal.azure.com), search for *Time Series Insights environments*, and select the **Create** button. Choose the following options to create the time series environment.
If you allow a simulation to run for much longer, your visualization will look s
The digital twins are stored by default as a flat hierarchy in Time Series Insights, but they can be enriched with model information and a multi-level hierarchy for organization. To learn more about this process, read:
-* [Tutorial: Define and apply a model](../time-series-insights/tutorial-set-up-environment.md#define-and-apply-a-model)
+* [Define and apply a model](../time-series-insights/tutorial-set-up-environment.md#define-and-apply-a-model)
You can write custom logic to automatically provide this information using the model and graph data already stored in Azure Digital Twins. To read more about managing, upgrading, and retrieving information from the twins graph, see the following references:
-* [How-to: Manage a digital twin](./how-to-manage-twin.md)
-* [How-to: Query the twin graph](./how-to-query-graph.md)
+* [Manage a digital twin](./how-to-manage-twin.md)
+* [Query the twin graph](./how-to-query-graph.md)
digital-twins How To Manage Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-graph.md
The heart of Azure Digital Twins is the [twin graph](concepts-twins-graph.md) re
Once you have a working [Azure Digital Twins instance](how-to-set-up-instance-portal.md) and have set up [authentication](how-to-authenticate-client.md) code in your client app, you can create, modify, and delete digital twins and their relationships in an Azure Digital Twins instance.
-This article focuses on managing relationships and the graph as a whole; to work with individual digital twins, see [How-to: Manage digital twins](how-to-manage-twin.md).
+This article focuses on managing relationships and the graph as a whole; to work with individual digital twins, see [Manage digital twins](how-to-manage-twin.md).
## Prerequisites
This custom function can now be called to create a _contains_ relationship in th
If you wish to create multiple relationships, you can repeat calls to the same method, passing different relationship types into the argument.
-For more information on the helper class `BasicRelationship`, see [Concepts: Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md#serialization-helpers).
+For more information on the helper class `BasicRelationship`, see [Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md#serialization-helpers).
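As a rough sketch of a single such call (the twin IDs are placeholders that must already exist in the graph, and `client` is an authenticated `DigitalTwinsClient`):

```csharp
using Azure.DigitalTwins.Core;

var relationship = new BasicRelationship
{
    Id = "GroundFloor-contains-Room1",  // illustrative relationship ID
    SourceId = "GroundFloor",
    TargetId = "Room1",
    Name = "contains",                  // must match a relationship defined on the source twin's model
};

await client.CreateOrReplaceRelationshipAsync<BasicRelationship>(
    relationship.SourceId, relationship.Id, relationship);
```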
### Create multiple relationships between twins
Here's the console output of the program:
## Next steps Learn about querying an Azure Digital Twins twin graph:
-* [Concepts: Query language](concepts-query-language.md)
-* [How-to: Query the twin graph](how-to-query-graph.md)
+* [Query language](concepts-query-language.md)
+* [Query the twin graph](how-to-query-graph.md)
digital-twins How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-model.md
Azure Digital Twins doesn't prevent this state, so be careful to patch twins app
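For reference, a minimal sketch of patching a single twin property with the .NET SDK (the twin ID, property path, and value are purely illustrative, and `client` is an authenticated `DigitalTwinsClient`):

```csharp
using Azure;
using Azure.DigitalTwins.Core;

// Build a JSON Patch document that updates one property on the twin.
var patch = new JsonPatchDocument();
patch.AppendReplace("/AverageTemperature", 22.5);

await client.UpdateDigitalTwinAsync("GroundFloor", patch);
```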
## Next steps
See how to create and manage digital twins based on your models:
-* [How-to: Manage digital twins](how-to-manage-twin.md)
+* [Manage digital twins](how-to-manage-twin.md)
digital-twins How To Manage Routes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes.md
This article walks you through the process of creating endpoints and routes usin
## Prerequisites
* You'll need an **Azure account**, which [can be set up for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
-* You'll need an **Azure Digital Twins instance** in your Azure subscription. If you don't have an instance already, you can create one using the steps in [How-to: Set up an instance and authentication](how-to-set-up-instance-portal.md). Have the following values from setup handy to use later in this article:
+* You'll need an **Azure Digital Twins instance** in your Azure subscription. If you don't have an instance already, you can create one using the steps in [Set up an instance and authentication](how-to-set-up-instance-portal.md). Have the following values from setup handy to use later in this article:
  - Instance name
  - Resource group
To create a new endpoint, go to your instance's page in the [Azure portal](https
1. Finish creating your endpoint by selecting _Save_.
>[!IMPORTANT]
-> In order to successfully use identity-based authentication for your endpoint, you'll need to create a managed identity for your instance by following the steps in [How-to: Route events with a managed identity](how-to-route-with-managed-identity.md).
+> In order to successfully use identity-based authentication for your endpoint, you'll need to create a managed identity for your instance by following the steps in [Route events with a managed identity](how-to-route-with-managed-identity.md).
After creating your endpoint, you can verify that the endpoint was successfully created by checking the notification icon in the top Azure portal bar:
When an endpoint can't deliver an event within a certain time period or after tr
You can set up the necessary storage resources using the [Azure portal](https://ms.portal.azure.com/#home) or the [Azure Digital Twins CLI](/cli/azure/dt?view=azure-cli-latest&preserve-view=true). However, to create an endpoint with dead-lettering enabled, you'll need use the [Azure Digital Twins CLI](/cli/azure/dt?view=azure-cli-latest&preserve-view=true) or [control plane APIs](concepts-apis-sdks.md#overview-control-plane-apis).
-To learn more about dead-lettering, see [Concepts: Event routes](concepts-route-events.md#dead-letter-events). For instructions on how to set up an endpoint with dead-lettering, continue through the rest of this section.
+To learn more about dead-lettering, see [Event routes](concepts-route-events.md#dead-letter-events). For instructions on how to set up an endpoint with dead-lettering, continue through the rest of this section.
#### Set up storage resources
Here is an example of a dead-letter message for a [twin create notification](con
## Create an event route
-To actually send data from Azure Digital Twins to an endpoint, you'll need to define an **event route**. These routes let developers wire up event flow, throughout the system and to downstream services. A single route can allow multiple notifications and event types to be selected. Read more about event routes in [Concepts: Routing Azure Digital Twins events](concepts-route-events.md).
+To actually send data from Azure Digital Twins to an endpoint, you'll need to define an **event route**. These routes let developers wire up event flow, throughout the system and to downstream services. A single route can allow multiple notifications and event types to be selected. Read more about event routes in [Routing Azure Digital Twins events](concepts-route-events.md).
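For example, a hedged sketch of creating such a route with the .NET SDK (the endpoint name and filter are placeholders, the endpoint must already exist on the instance, and `client` is an authenticated `DigitalTwinsClient`):

```csharp
using Azure.DigitalTwins.Core;

// Route twin update events to an existing endpoint named "myEndpoint" (placeholder).
var route = new DigitalTwinsEventRoute("myEndpoint", "type = 'Microsoft.DigitalTwins.Twin.Update'");
await client.CreateOrReplaceEventRouteAsync("myTwinUpdateRoute", route);
```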
**Prerequisite**: You need to create endpoints as described earlier in this article before you can move on to creating a route. You can proceed to creating an event route once your endpoints are finished setting up.
When finished, select the _Save_ button to create your event route.
Routes can be managed using the [az dt route](/cli/azure/dt/route?view=azure-cli-latest&preserve-view=true) commands for the Azure Digital Twins CLI.
-For more information about using the CLI and what commands are available, see [Concepts: Azure Digital Twins CLI command set](concepts-cli.md).
+For more information about using the CLI and what commands are available, see [Azure Digital Twins CLI command set](concepts-cli.md).
# [.NET SDK](#tab/sdk2)
Here are the supported route filters.
| Data schema | DTDL model ID | `dataschema = '<model-dtmi-ID>'` | **For telemetry**: The data schema is the model ID of the twin or the component that emits the telemetry. For example, `dtmi:example:com:floor4;2` <br>**For notifications (create/delete)**: Data schema can be accessed in the notification body at `$body.$metadata.$model`. <br>**For notifications (update)**: Data schema can be accessed in the notification body at `$body.modelId`|
| Content type | Content type of data value | `datacontenttype = '<content-type>'` | The content type is `application/json` |
| Spec version | The version of the event schema you are using | `specversion = '<version>'` | The version must be `1.0`. This indicates the CloudEvents schema version 1.0 |
-| Notification body | Reference any property in the `data` field of a notification | `$body.<property>` | See [Concepts: Event notifications](concepts-event-notifications.md) for examples of notifications. Any property in the `data` field can be referenced using `$body`
+| Notification body | Reference any property in the `data` field of a notification | `$body.<property>` | See [Event notifications](concepts-event-notifications.md) for examples of notifications. Any property in the `data` field can be referenced using `$body`
The following data types are supported as values returned by references to the data above:
From the portal homepage, search for your Azure Digital Twins instance to pull u
From here, you can view the metrics for your instance and create custom views.
-For more on viewing Azure Digital Twins metrics, see [How-to: View metrics with Azure Monitor](troubleshoot-metrics.md).
+For more on viewing Azure Digital Twins metrics, see [View metrics with Azure Monitor](troubleshoot-metrics.md).
## Next steps
Read about the different types of event messages you can receive:
-* [Concepts: Event notifications](concepts-event-notifications.md)
+* [Event notifications](concepts-event-notifications.md)
digital-twins How To Manage Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-twin.md
Entities in your environment are represented by [digital twins](concepts-twins-graph.md). Managing your digital twins may include creation, modification, and removal.
-This article focuses on managing digital twins; to work with relationships and the [twin graph](concepts-twins-graph.md) as a whole, see [How-to: Manage the twin graph with relationships](how-to-manage-graph.md).
+This article focuses on managing digital twins; to work with relationships and the [twin graph](concepts-twins-graph.md) as a whole, see [Manage the twin graph with relationships](how-to-manage-graph.md).
> [!TIP]
> All SDK functions come in synchronous and asynchronous versions.
The model and any initial property values are provided through the `initData` pa
You can initialize the properties of a twin at the time that the twin is created.
-The twin creation API accepts an object that is serialized into a valid JSON description of the twin properties. See [Concepts: Digital twins and the twin graph](concepts-twins-graph.md) for a description of the JSON format for a twin.
+The twin creation API accepts an object that is serialized into a valid JSON description of the twin properties. See [Digital twins and the twin graph](concepts-twins-graph.md) for a description of the JSON format for a twin.
First, you can create a data object to represent the twin and its property data. You can create a parameter object either manually, or by using a provided helper class. Here is an example of each.
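A rough sketch of the two approaches (the model ID and property are placeholders):

```csharp
using System.Collections.Generic;
using Azure.DigitalTwins.Core;

// Manual approach: a plain dictionary shaped like the twin JSON.
var manualTwin = new Dictionary<string, object>
{
    { "$metadata", new Dictionary<string, object> { { "$model", "dtmi:example:Room;1" } } },
    { "Temperature", 21.0 },
};

// Helper-class approach: the BasicDigitalTwin serialization helper from the SDK.
var helperTwin = new BasicDigitalTwin
{
    Id = "Room1",
    Metadata = { ModelId = "dtmi:example:Room;1" },
    Contents = { { "Temperature", 21.0 } },
};
```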
Only properties that have been set at least once are returned when you retrieve
>[!TIP] >The `displayName` for a twin is part of its model metadata, so it will not show when getting data for the twin instance. To see this value, you can [retrieve it from the model](how-to-manage-model.md#retrieve-models).
-To retrieve multiple twins using a single API call, see the query API examples in [How-to: Query the twin graph](how-to-query-graph.md).
+To retrieve multiple twins using a single API call, see the query API examples in [Query the twin graph](how-to-query-graph.md).
Consider the following model (written in [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/tree/master/DTDL)) that defines a Moon:
The defined properties of the digital twin are returned as top-level properties
- Synchronization status for each writable property. This is most useful for devices, where it's possible that the service and the device have diverging statuses (for example, when a device is offline). Currently, this property only applies to physical devices connected to IoT Hub. With the data in the metadata section, it is possible to understand the full status of a property, as well as the last modified timestamps. For more information about sync status, see this [IoT Hub tutorial](../iot-hub/tutorial-device-twins.md) on synchronizing device state.
- Service-specific metadata, like from IoT Hub or Azure Digital Twins.
-You can read more about the serialization helper classes like `BasicDigitalTwin` in [Concepts: Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md#serialization-helpers).
+You can read more about the serialization helper classes like `BasicDigitalTwin` in [Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md#serialization-helpers).
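To make the shape of that returned data concrete, here is a small sketch (same assumed `client`, a placeholder twin ID, and `using Azure;` for `Response`) that retrieves one twin and prints its model ID and top-level properties:

```csharp
// Retrieve a single twin and inspect its properties and model metadata.
Response<BasicDigitalTwin> response =
    await client.GetDigitalTwinAsync<BasicDigitalTwin>("myRoomTwin");
BasicDigitalTwin twin = response.Value;

Console.WriteLine($"Model: {twin.Metadata.ModelId}");
foreach (var property in twin.Contents)
{
    Console.WriteLine($"{property.Key} = {property.Value}");
}
```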
## View all digital twins
Here is an example of the code to delete twins and their relationships. The `Del
### Delete all digital twins
-For an example of how to delete all twins at once, download the sample app used in the [Tutorial: Explore the basics with a sample client app](tutorial-command-line-app.md). The *CommandLoop.cs* file does this in a `CommandDeleteAllTwins()` function.
+For an example of how to delete all twins at once, download the sample app used in the [Explore the basics with a sample client app](tutorial-command-line-app.md). The *CommandLoop.cs* file does this in a `CommandDeleteAllTwins()` function.
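If you just want the general idea without downloading the sample, the pattern is to query for every twin, delete each twin's outgoing and incoming relationships, and then delete the twin itself. The following is an illustrative sketch of that pattern (not the sample's actual `CommandDeleteAllTwins()` code), again assuming a `DigitalTwinsClient` named `client`:

```csharp
await foreach (BasicDigitalTwin twin in client.QueryAsync<BasicDigitalTwin>("SELECT * FROM digitaltwins"))
{
    // Delete relationships that start at this twin.
    await foreach (BasicRelationship rel in client.GetRelationshipsAsync<BasicRelationship>(twin.Id))
    {
        await client.DeleteRelationshipAsync(twin.Id, rel.Id);
    }

    // Delete relationships that point at this twin from other twins.
    await foreach (IncomingRelationship rel in client.GetIncomingRelationshipsAsync(twin.Id))
    {
        await client.DeleteRelationshipAsync(rel.SourceId, rel.RelationshipId);
    }

    // With no relationships left, the twin itself can be deleted.
    await client.DeleteDigitalTwinAsync(twin.Id);
}
```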
## Runnable digital twin code sample
Here is the console output of the above program:
## Next steps See how to create and manage relationships between your digital twins:
-* [How-to: Manage the twin graph with relationships](how-to-manage-graph.md)
+* [Manage the twin graph with relationships](how-to-manage-graph.md)
digital-twins How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-move-regions.md
Next, you'll complete the "move" of your instance by creating a new instance in
### Create a new instance
-First, create a new instance of Azure Digital Twins in your target region. Follow the steps in [How-to: Set up an instance and authentication](how-to-set-up-instance-portal.md). Keep these pointers in mind:
+First, create a new instance of Azure Digital Twins in your target region. Follow the steps in [Set up an instance and authentication](how-to-set-up-instance-portal.md). Keep these pointers in mind:
* You can keep the same name for the new instance *if* it's in a different resource group. If you need to use the same resource group that contains your original instance, your new instance will need its own distinct name.
* Enter the new target region when prompted for a location.
These views confirm that your models, twins, and graph were re-uploaded to the n
If you have endpoints or routes in your original instance, you'll need to re-create them in your new instance. If you don't have any endpoints or routes in your original instance or you don't want to move them to the new instance, you can skip to the [next section](#relink-connected-resources).
-Otherwise, follow the steps in [How-to: Manage endpoints and routes](how-to-manage-routes.md) using the new instance. Keep these pointers in mind:
+Otherwise, follow the steps in [Manage endpoints and routes](how-to-manage-routes.md) using the new instance. Keep these pointers in mind:
* You do *not* need to re-create the Event Grid, Event Hubs, or Service Bus resource that you're using for the endpoint. For more information, see the "Prerequisites" section in the endpoint instructions. You just need to re-create the endpoint on the Azure Digital Twins instance.
* You can reuse endpoint and route names because they're scoped to a different instance.
The exact resources you need to edit depends on your scenario, but here are some
* Time Series Insights.
* Azure Maps.
* IoT Hub Device Provisioning Service.
-* Personal or company apps outside of Azure, such as the client app created in [Tutorial: Code a client app](tutorial-code.md), that connect to the instance and call Azure Digital Twins APIs.
+* Personal or company apps outside of Azure, such as the client app created in [Code a client app](tutorial-code.md), that connect to the instance and call Azure Digital Twins APIs.
* Azure AD app registrations do *not* need to be re-created. If you're using an [app registration](./how-to-create-app-registration-portal.md) to connect to the Azure Digital Twins APIs, you can reuse the same app registration with your new instance. After you finish this step, your new instance in the target region should be a copy of the original instance.
digital-twins How To Parse Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-parse-models.md
The following code shows an example of how to use the parser library to reflect
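As a hedged sketch of that parser usage (the `Microsoft.Azure.DigitalTwins.Parser` NuGet package; the model file name is a made-up placeholder), parsing returns a dictionary of DTDL entities that your code can reflect over:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.DigitalTwins.Parser;

class ParseModelsSample
{
    static async Task Main()
    {
        // Load one or more DTDL model documents (the file name is hypothetical).
        var modelJson = new List<string> { File.ReadAllText("Room.json") };

        // Parse the models into an in-memory object model.
        var parser = new ModelParser();
        IReadOnlyDictionary<Dtmi, DTEntityInfo> parsed = await parser.ParseAsync(modelJson);

        // Reflect over the parsed entities, for example listing the interfaces.
        foreach (DTEntityInfo entity in parsed.Values)
        {
            if (entity is DTInterfaceInfo dtInterface)
            {
                Console.WriteLine($"Interface: {dtInterface.Id}");
            }
        }
    }
}
```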
## Next steps Once you are done writing your models, see how to upload them (and do other management operations) with the DigitalTwinsModels APIs:
-* [How-to: Manage DTDL models](how-to-manage-model.md)
+* [Manage DTDL models](how-to-manage-model.md)
digital-twins How To Provision Using Device Provisioning Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-provision-using-device-provisioning-service.md
For more information about the _provision_ and _retire_ stages, and to better un
## Prerequisites Before you can set up the provisioning, you'll need to set up the following:
-* an **Azure Digital Twins instance**. Follow the instructions in [How-to: Set up an instance and authentication](how-to-set-up-instance-portal.md) to create an Azure digital twins instance. Gather the instance's **_host name_** in the Azure portal ([instructions](how-to-set-up-instance-portal.md#verify-success-and-collect-important-values)).
+* an **Azure Digital Twins instance**. Follow the instructions in [Set up an instance and authentication](how-to-set-up-instance-portal.md) to create an Azure Digital Twins instance. Gather the instance's **_host name_** in the Azure portal ([instructions](how-to-set-up-instance-portal.md#verify-success-and-collect-important-values)).
* an **IoT hub**. For instructions, see the "Create an IoT Hub" section of [the IoT Hub quickstart](../iot-hub/quickstart-send-telemetry-cli.md).
-* an [Azure function](../azure-functions/functions-overview.md) that updates digital twin information based on IoT Hub data. Follow the instructions in [How to: Ingest IoT hub data](how-to-ingest-iot-hub-data.md) to create this Azure function. Gather the function **_name_** to use it in this article.
+* an [Azure function](../azure-functions/functions-overview.md) that updates digital twin information based on IoT Hub data. Follow the instructions in [Ingest IoT hub data](how-to-ingest-iot-hub-data.md) to create this Azure function. Gather the function **_name_** to use it in this article.
This sample also uses a **device simulator** that includes provisioning using the Device Provisioning Service. The device simulator is located here: [Azure Digital Twins and IoT Hub Integration Sample](/samples/azure-samples/digital-twins-iothub-integration/adt-iothub-provision-sample/). Get the sample project on your machine by navigating to the sample link and selecting the **Browse code** button underneath the title. This will take you to the GitHub repo for the sample, which you can download as a .zip file by selecting the **Code** button and **Download ZIP**.
The following sections walk through the steps to set up this auto-provision devi
When a new device is provisioned using Device Provisioning Service, a new twin for that device can be created in Azure Digital Twins with the same name as the registration ID.
-Create a Device Provisioning Service instance, which will be used to provision IoT devices. You can either use the Azure CLI instructions below, or use the Azure portal: [Quickstart: Set up the IoT Hub Device Provisioning Service with the Azure portal](../iot-dps/quick-setup-auto-provision.md).
+Create a Device Provisioning Service instance, which will be used to provision IoT devices. You can either use the Azure CLI instructions below, or use the Azure portal by following [Set up the IoT Hub Device Provisioning Service with the Azure portal](../iot-dps/quick-setup-auto-provision.md).
The following Azure CLI command will create a Device Provisioning Service. You'll need to specify a Device Provisioning Service name, resource group, and region. To see what regions support Device Provisioning Service, visit [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=iot-hub). The command can be run in [Cloud Shell](https://shell.azure.com), or locally if you have the [Azure CLI installed on your machine](/cli/azure/install-azure-cli).
The device simulator is a thermostat-type device that uses the model with this I
[!INCLUDE [digital-twins-thermostat-model-upload.md](../../includes/digital-twins-thermostat-model-upload.md)]
-For more information about models, refer to [How-to: Manage models](how-to-manage-model.md#upload-models).
+For more information about models, refer to [Manage models](how-to-manage-model.md#upload-models).
#### Configure and run the simulator
Then, delete the project sample folder you downloaded from your local machine.
The digital twins created for the devices are stored as a flat hierarchy in Azure Digital Twins, but they can be enriched with model information and a multi-level hierarchy for organization. To learn more about this concept, read:
-* [Concepts: Digital twins and the twin graph](concepts-twins-graph.md)
+* [Digital twins and the twin graph](concepts-twins-graph.md)
For more information about using HTTP requests with Azure functions, see:
You can write custom logic to automatically provide this information using the model and graph data already stored in Azure Digital Twins. To read more about managing, upgrading, and retrieving information from the twins graph, see the following:
-* [How-to: Manage a digital twin](how-to-manage-twin.md)
-* [How-to: Query the twin graph](how-to-query-graph.md)
+* [Manage a digital twin](how-to-manage-twin.md)
+* [Query the twin graph](how-to-query-graph.md)
digital-twins How To Query Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-query-graph.md
# Query the Azure Digital Twins twin graph
-This article offers query examples and instructions for using the **Azure Digital Twins query language** to query your [twin graph](concepts-twins-graph.md) for information. (For an introduction to the query language, see [Concepts: Query language](concepts-query-language.md).)
+This article offers query examples and instructions for using the **Azure Digital Twins query language** to query your [twin graph](concepts-twins-graph.md) for information. (For an introduction to the query language, see [Query language](concepts-query-language.md).)
The article contains sample queries that illustrate the query language structure and common query operations for digital twins. It also describes how to run your queries after you've written them, using the Azure Digital Twins [Query API](/rest/api/digital-twins/dataplane/query) or an [SDK](concepts-apis-sdks.md#overview-data-plane-apis).
digital-twins How To Route With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-route-with-managed-identity.md
Here are the minimum roles that an identity needs to access an endpoint, dependi
| Azure Service Bus | Azure Service Bus Data Sender |
| Azure storage container | Storage Blob Data Contributor |
-For more about endpoints, routes, and the types of destinations supported for routing in Azure Digital Twins, see [Concepts: Event routes](concepts-route-events.md).
+For more about endpoints, routes, and the types of destinations supported for routing in Azure Digital Twins, see [Event routes](concepts-route-events.md).
### Assign the role
digital-twins How To Set Up Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-cli.md
This article covers the steps to **set up a new Azure Digital Twins instance**, including creating the instance and setting up authentication. After completing this article, you will have an Azure Digital Twins instance ready to start programming against. This version of this article goes through these steps manually, one by one, using the CLI.
-* To go through these steps manually using the Azure portal, see the portal version of this article: [How-to: Set up an instance and authentication (portal)](how-to-set-up-instance-portal.md).
-* To run through an automated setup using a deployment script sample, see the scripted version of this article: [How-to: Set up an instance and authentication (scripted)](how-to-set-up-instance-scripted.md).
+* To go through these steps manually using the Azure portal, see the portal version of this article in [Set up an instance and authentication (portal)](how-to-set-up-instance-portal.md).
+* To run through an automated setup using a deployment script sample, see the scripted version of this article in [Set up an instance and authentication (scripted)](how-to-set-up-instance-scripted.md).
[!INCLUDE [digital-twins-setup-steps.md](../../includes/digital-twins-setup-steps.md)]
You now have an Azure Digital Twins instance ready to go, and have assigned perm
Test out individual REST API calls on your instance using the Azure Digital Twins CLI commands:
* [az dt reference](/cli/azure/dt?view=azure-cli-latest&preserve-view=true)
-* [Concepts: Azure Digital Twins CLI command set](concepts-cli.md)
+* [Azure Digital Twins CLI command set](concepts-cli.md)
Or, see how to connect a client application to your instance with authentication code:
-* [How-to: Write app authentication code](how-to-authenticate-client.md)
+* [Write app authentication code](how-to-authenticate-client.md)
digital-twins How To Set Up Instance Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-portal.md
This article covers the steps to **set up a new Azure Digital Twins instance**, including creating the instance and setting up authentication. After completing this article, you will have an Azure Digital Twins instance ready to start programming against. This version of this article goes through these steps manually, one by one, using the Azure portal. The Azure portal is a web-based, unified console that provides an alternative to command-line tools.
-* To go through these steps manually using the CLI, see the CLI version of this article: [How-to: Set up an instance and authentication (CLI)](how-to-set-up-instance-cli.md).
-* To run through an automated setup using a deployment script sample, see the scripted version of this article: [How-to: Set up an instance and authentication (scripted)](how-to-set-up-instance-scripted.md).
+* To go through these steps manually using the CLI, see the CLI version of this article in [Set up an instance and authentication (CLI)](how-to-set-up-instance-cli.md).
+* To run through an automated setup using a deployment script sample, see the scripted version of this article in [Set up an instance and authentication (scripted)](how-to-set-up-instance-scripted.md).
[!INCLUDE [digital-twins-setup-steps.md](../../includes/digital-twins-setup-steps.md)]
This version of this article goes through these steps manually, one by one, usin
Here are the additional options you can configure during setup, using the other tabs in the **Create Resource** process.
-* **Networking**: In this tab, you can enable private endpoints with [Azure Private Link](../private-link/private-link-overview.md) to eliminate public network exposure to your instance. For instructions, see [How-to: Enable private access with Private Link (preview)](./how-to-enable-private-link.md?tabs=portal#add-a-private-endpoint-during-instance-creation).
-* **Advanced**: In this tab, you can enable a system-managed identity for your instance that can be used when forwarding events to [endpoints](concepts-route-events.md). For more information about using system-managed identities with Azure Digital Twins, see [Concepts: Security for Azure Digital Twins solutions](concepts-security.md#managed-identity-for-accessing-other-resources-preview).
+* **Networking**: In this tab, you can enable private endpoints with [Azure Private Link](../private-link/private-link-overview.md) to eliminate public network exposure to your instance. For instructions, see [Enable private access with Private Link (preview)](./how-to-enable-private-link.md?tabs=portal#add-a-private-endpoint-during-instance-creation).
+* **Advanced**: In this tab, you can enable a system-managed identity for your instance that can be used when forwarding events to [endpoints](concepts-route-events.md). For more information about using system-managed identities with Azure Digital Twins, see [Security for Azure Digital Twins solutions](concepts-security.md#managed-identity-for-accessing-other-resources-preview).
* **Tags**: In this tab, you can add tags to your instance to help you organize it among your Azure resources. For more about Azure resource tags, see [Tag resources, resource groups, and subscriptions for logical organization](../azure-resource-manager/management/tag-resources.md).
### Verify success and collect important values
You now have an Azure Digital Twins instance ready to go, and have assigned perm
Test out individual REST API calls on your instance using the Azure Digital Twins CLI commands:
* [az dt reference](/cli/azure/dt?view=azure-cli-latest&preserve-view=true)
-* [Concepts: Azure Digital Twins CLI command set](concepts-cli.md)
+* [Azure Digital Twins CLI command set](concepts-cli.md)
Or, see how to connect a client application to your instance with authentication code:
-* [How-to: Write app authentication code](how-to-authenticate-client.md)
+* [Write app authentication code](how-to-authenticate-client.md)
digital-twins How To Set Up Instance Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-powershell.md
Digital Twins instance ready to start programming against.
This version of this article goes through these steps manually, one by one, using [Azure PowerShell](/powershell/azure/new-azureps-module-az).
-* To go through these steps manually using the Azure portal, see the portal version of this article: [How-to: Set up an instance and authentication (portal)](how-to-set-up-instance-portal.md).
-* To run through an automated setup using a deployment script sample, see the scripted version of this article: [How-to: Set up an instance and authentication (scripted)](how-to-set-up-instance-scripted.md).
+* To go through these steps manually using the Azure portal, see the portal version of this article in [Set up an instance and authentication (portal)](how-to-set-up-instance-portal.md).
+* To run through an automated setup using a deployment script sample, see the scripted version of this article in [Set up an instance and authentication (scripted)](how-to-set-up-instance-scripted.md).
[!INCLUDE [digital-twins-setup-steps.md](../../includes/digital-twins-setup-steps.md)]
You now have an Azure Digital Twins instance ready to go, and have assigned perm
## Next steps See how to connect a client application to your instance with authentication code:
-* [How-to: Write app authentication code](how-to-authenticate-client.md)
+* [Write app authentication code](how-to-authenticate-client.md)
digital-twins How To Set Up Instance Scripted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-scripted.md
This article covers the steps to **set up a new Azure Digital Twins instance**, including creating the instance and setting up authentication. After completing this article, you will have an Azure Digital Twins instance ready to start programming against. This version of this article completes these steps by running an [automated deployment script sample](/samples/azure-samples/digital-twins-samples/digital-twins-samples/) that streamlines the process.
-* To view the manual CLI steps that the script runs through behind the scenes, see the CLI version of this article: [How-to: Set up an instance and authentication (CLI)](how-to-set-up-instance-cli.md).
-* To view the manual steps according to the Azure portal, see the portal version of this article: [How-to: Set up an instance and authentication (portal)](how-to-set-up-instance-portal.md).
+* To view the manual CLI steps that the script runs through behind the scenes, see the CLI version of this article in [Set up an instance and authentication (CLI)](how-to-set-up-instance-cli.md).
+* To view the manual steps according to the Azure portal, see the portal version of this article in [Set up an instance and authentication (portal)](how-to-set-up-instance-portal.md).
[!INCLUDE [digital-twins-setup-steps.md](../../includes/digital-twins-setup-steps.md)]
If verification was unsuccessful, you can also redo your own role assignment usi
Test out individual REST API calls on your instance using the Azure Digital Twins CLI commands:
* [az dt reference](/cli/azure/dt?view=azure-cli-latest&preserve-view=true)
-* [Concepts: Azure Digital Twins CLI command set](concepts-cli.md)
+* [Azure Digital Twins CLI command set](concepts-cli.md)
Or, see how to connect a client application to your instance with authentication code:
-* [How-to: Write app authentication code](how-to-authenticate-client.md)
+* [Write app authentication code](how-to-authenticate-client.md)
digital-twins How To Use Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-azure-digital-twins-explorer.md
Here is an example of a URL with the placeholder values filled in:
`https://explorer.digitaltwins.azure.net/?tid=00a000aa-00a0-00aa-0a0aa000aa00&eid=ADT-instance.api.wcus.digitaltwins.azure.net`
-For the recipient to view the instance in the resulting Azure Digital Twins Explorer window, they must log into their Azure account, and have **Azure Digital Twins Data Reader** access to the instance (you can read more about Azure Digital Twins roles in [Concepts: Security](concepts-security.md)). For the recipient to make changes to the graph and the data, they must have the **Azure Digital Twins Data Owner** role on the instance.
+For the recipient to view the instance in the resulting Azure Digital Twins Explorer window, they must log into their Azure account, and have **Azure Digital Twins Data Reader** access to the instance (you can read more about Azure Digital Twins roles in [Security](concepts-security.md)). For the recipient to make changes to the graph and the data, they must have the **Azure Digital Twins Data Owner** role on the instance.
### Link with a query
Clicking the settings cog in the top right corner allows the configuration of th
## Next steps Learn about writing queries for the Azure Digital Twins twin graph:
-* [Concepts: Query language](concepts-query-language.md)
-* [How-to: Query the twin graph](how-to-query-graph.md)
+* [Query language](concepts-query-language.md)
+* [Query the twin graph](how-to-query-graph.md)
digital-twins How To Use Postman https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-postman.md
This article describes how to configure the [Postman REST client](https://www.ge
1. [Create your own collection from scratch](#create-your-own-collection).
1. [Add requests to your configured collection](#add-an-individual-request) and send them to the Azure Digital Twins APIs.
-Azure Digital Twins has two sets of APIs that you can work with: **data plane** and **control plane**. For more about the difference between these API sets, see [Concepts: Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md). This article contains information for both API sets.
+Azure Digital Twins has two sets of APIs that you can work with: **data plane** and **control plane**. For more about the difference between these API sets, see [Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md). This article contains information for both API sets.
## Prerequisites
Otherwise, you can open an [Azure Cloud Shell](https://shell.azure.com) window i
>[!NOTE]
- > If you need to access your Azure Digital Twins instance using a service principal or user account that belongs to a different Azure Active Directory tenant from the instance, you'll need to request a **token** from the Azure Digital Twins instance's "home" tenant. For more information on this process, see [How-to: Write app authentication code](how-to-authenticate-client.md#authenticate-across-tenants).
+ > If you need to access your Azure Digital Twins instance using a service principal or user account that belongs to a different Azure Active Directory tenant from the instance, you'll need to request a **token** from the Azure Digital Twins instance's "home" tenant. For more information on this process, see [Write app authentication code](how-to-authenticate-client.md#authenticate-across-tenants).
3. Copy the value of `accessToken` in the result, and save it to use in the next section. This is your **token value** that you will provide to Postman to authorize your requests.
To proceed with an example query, this article will use the Query API (and its [
:::image type="content" source="media/how-to-use-postman/postman-request-body.png" alt-text="Screenshot of the new request's details in Postman, on the Body tab. It contains a raw JSON body with a query of 'SELECT * FROM DIGITALTWINS'." lightbox="media/how-to-use-postman/postman-request-body.png":::
- For more information about crafting Azure Digital Twins queries, see [How-to: Query the twin graph](how-to-query-graph.md).
+ For more information about crafting Azure Digital Twins queries, see [Query the twin graph](how-to-query-graph.md).
1. Check the reference documentation for any other fields that may be required for your type of request. For the Query API, all requirements have now been met in the Postman request, so this step is done.
1. Use the **Send** button to send your completed request.
You can also compare the response to the expected response data given in the ref
## Next steps
-To learn more about the Digital Twins APIs, read [Concepts: Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md), or view the [reference documentation for the REST APIs](/rest/api/azure-digitaltwins/).
+To learn more about the Digital Twins APIs, read [Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md), or view the [reference documentation for the REST APIs](/rest/api/azure-digitaltwins/).
digital-twins How To Use Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-tags.md
Here is a query to get all entities that are small (value tag), and not red:
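A sketch of what such a query might look like, assuming a `tags` map property where `size` is a value tag and `red` is a marker tag (run here through the .NET SDK for consistency with the other sketches; the property names are illustrative assumptions):

```csharp
// Query for twins tagged with size 'small' that do not carry the 'red' marker tag.
string tagQuery =
    "SELECT * FROM DIGITALTWINS WHERE tags.size = 'small' AND NOT IS_DEFINED(tags.red)";

await foreach (BasicDigitalTwin twin in client.QueryAsync<BasicDigitalTwin>(tagQuery))
{
    Console.WriteLine($"Matching twin: {twin.Id}");
}
```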
## Next steps Read more about designing and managing digital twin models:
-* [How-to: Manage DTDL models](how-to-manage-model.md)
+* [Manage DTDL models](how-to-manage-model.md)
Read more about querying the twin graph:
-* [How-to: Query the twin graph](how-to-query-graph.md)
+* [Query the twin graph](how-to-query-graph.md)
digital-twins Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/overview.md
You can view a list of **common IoT terms** and their uses across the Azure IoT
## Next steps
-* Dive into working with Azure Digital Twins in the quickstart: [Quickstart: Get started with Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md).
+* Dive into working with Azure Digital Twins in [Get started with Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md).
-* Or, start reading about Azure Digital Twins concepts with [Concepts: Custom models](concepts-models.md).
+* Or, start reading about Azure Digital Twins concepts with [Custom models](concepts-models.md).
digital-twins Quickstart Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/quickstart-azure-digital-twins-explorer.md
You may also want to delete the sample project folder from your local machine.
Next, continue on to the Azure Digital Twins tutorials to build out your own Azure Digital Twins scenario and interaction tools.
> [!div class="nextstepaction"]
-> [Tutorial: Code a client app](tutorial-code.md)
+> [Code a client app](tutorial-code.md)
digital-twins Reference Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/reference-service-limits.md
To manage this, here are some recommendations for working with limits.
## Next steps Learn more about the current release of Azure Digital Twins in the service overview:
-* [Overview: What is Azure Digital Twins?](overview.md)
+* [What is Azure Digital Twins?](overview.md)
digital-twins Resources Compare Original Release https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/resources-compare-original-release.md
The chart below provides a side-by-side view of concepts that have changed betwe
| Topic | In original version | In current version |
| --- | --- | --- |
-| **Modeling**<br>*More flexible* | The original release was designed around smart spaces, so it came with a built-in vocabulary for buildings. | The current Azure Digital Twins is domain-agnostic. You can define your own custom vocabulary and custom models for your solution, to represent more kinds of environments in more flexible ways.<br><br>Learn more in [Concepts: Custom models](concepts-models.md). |
-| **Topology**<br>*More flexible*| The original release supported a tree data structure, tailored to smart spaces. Digital twins were connected with hierarchical relationships. | With the current release, your digital twins can be connected into arbitrary graph topologies, organized however you want. This gives you more flexibility to express the complex relationships of the real world.<br><br>Learn more in [Concepts: Digital twins and the twin graph](concepts-twins-graph.md). |
-| **Compute**<br>*Richer, more flexible* | In the original release, logic for processing events and telemetry was defined in JavaScript user-defined functions (UDFs). Debugging with UDFs was limited. | The current release has an open compute model: you provide custom logic by attaching external compute resources like [Azure Functions](../azure-functions/functions-overview.md). This lets you use a programming language of your choice, access custom code libraries without restriction, and take advantage of development and debugging resources that the external service may have.<br><br>Learn more in [How-to: Set up an Azure function for processing data](how-to-create-azure-function.md). |
-| **Device management with IoT Hub**<br>*More accessible* | The original release managed devices with an instance of [IoT Hub](../iot-hub/about-iot-hub.md) that was internal to the Azure Digital Twins service. This integrated hub was not fully accessible to developers. | In the current release, you "bring your own" IoT hub, by attaching an independently-created IoT Hub instance (along with any devices it already manages). This gives you full access to IoT Hub's capabilities and puts you in control of device management.<br><br>Learn more in [How-to: Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md). |
-| **Security**<br>*More standard* | The original release had pre-defined roles that you could use to manage access to your instance. | The current release integrates with the same [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) back-end service that other Azure services use. This may make it simpler to authenticate between other Azure services in your solution, like IoT Hub, Azure Functions, Event Grid, and more.<br>With RBAC, you can still use pre-defined roles, or you can build and configure custom roles.<br><br>Learn more in [Concepts: Security for Azure Digital Twins solutions](concepts-security.md). |
+| **Modeling**<br>*More flexible* | The original release was designed around smart spaces, so it came with a built-in vocabulary for buildings. | The current Azure Digital Twins is domain-agnostic. You can define your own custom vocabulary and custom models for your solution, to represent more kinds of environments in more flexible ways.<br><br>Learn more in [Custom models](concepts-models.md). |
+| **Topology**<br>*More flexible*| The original release supported a tree data structure, tailored to smart spaces. Digital twins were connected with hierarchical relationships. | With the current release, your digital twins can be connected into arbitrary graph topologies, organized however you want. This gives you more flexibility to express the complex relationships of the real world.<br><br>Learn more in [Digital twins and the twin graph](concepts-twins-graph.md). |
+| **Compute**<br>*Richer, more flexible* | In the original release, logic for processing events and telemetry was defined in JavaScript user-defined functions (UDFs). Debugging with UDFs was limited. | The current release has an open compute model: you provide custom logic by attaching external compute resources like [Azure Functions](../azure-functions/functions-overview.md). This lets you use a programming language of your choice, access custom code libraries without restriction, and take advantage of development and debugging resources that the external service may have.<br><br>Learn more in [Set up an Azure function for processing data](how-to-create-azure-function.md). |
+| **Device management with IoT Hub**<br>*More accessible* | The original release managed devices with an instance of [IoT Hub](../iot-hub/about-iot-hub.md) that was internal to the Azure Digital Twins service. This integrated hub was not fully accessible to developers. | In the current release, you "bring your own" IoT hub, by attaching an independently-created IoT Hub instance (along with any devices it already manages). This gives you full access to IoT Hub's capabilities and puts you in control of device management.<br><br>Learn more in [Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md). |
+| **Security**<br>*More standard* | The original release had pre-defined roles that you could use to manage access to your instance. | The current release integrates with the same [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) back-end service that other Azure services use. This may make it simpler to authenticate between other Azure services in your solution, like IoT Hub, Azure Functions, Event Grid, and more.<br>With RBAC, you can still use pre-defined roles, or you can build and configure custom roles.<br><br>Learn more in [Security for Azure Digital Twins solutions](concepts-security.md). |
| **Scalability**<br>*Greater* | The original release had scale limitations for devices, messages, graphs, and scale units. Only one instance of Azure Digital Twins was supported per subscription. | The current release relies on a new architecture with improved scalability, and has greater compute power. It also supports 10 instances per region, per subscription.<br><br>See [Azure Digital Twins service limits](reference-service-limits.md) for details of the limits in the current release. |
## Service limits
For a list of Azure Digital Twins limits, see [Azure Digital Twins service limit
## Next steps
-* Dive into working with the current release in the quickstart: [Quickstart: Get started with Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md).
+* Dive into working with the current release in [Get started with Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md).
-* Or, start reading about key concepts with [Concepts: Custom models](concepts-models.md).
+* Or, start reading about key concepts with [Custom models](concepts-models.md).
digital-twins Troubleshoot Error 403 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/troubleshoot-error-403.md
Next, select *API permissions* from the menu bar to verify that this app registr
#### Fix issues
-If any of this appears differently than described, follow the instructions on how to set up an app registration in [How-to: Create an app registration](./how-to-create-app-registration-portal.md).
+If any of this appears differently than described, follow the instructions on how to set up an app registration in [Create an app registration](./how-to-create-app-registration-portal.md).
## Next steps Read the setup steps for creating and authenticating a new Azure Digital Twins instance:
-* [How-to: Set up an instance and authentication (CLI)](how-to-set-up-instance-cli.md)
+* [Set up an instance and authentication (CLI)](how-to-set-up-instance-cli.md)
Read more about security and permissions on Azure Digital Twins:
-* [Concepts: Security for Azure Digital Twins solutions](concepts-security.md)
+* [Security for Azure Digital Twins solutions](concepts-security.md)
digital-twins Troubleshoot Error 404 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/troubleshoot-error-404.md
If you're using the `DefaultAzureCredential` class in your code and you continue
## Next steps Read more about security and permissions on Azure Digital Twins:
-* [Concepts: Security for Azure Digital Twins solutions](concepts-security.md)
+* [Security for Azure Digital Twins solutions](concepts-security.md)
digital-twins Troubleshoot Error Azure Digital Twins Explorer Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/troubleshoot-error-azure-digital-twins-explorer-authentication.md
When setting up and running the Azure Digital Twins Explorer application, attemp
This error might occur if your Azure account does not have the required Azure role-based access control (Azure RBAC) permissions set on your Azure Digital Twins instance. In order to access data in your instance, you must have the **Azure Digital Twins Data Reader** or **Azure Digital Twins Data Owner** role on the instance you are trying to read or manage, respectively.
-For more information about security and roles in Azure Digital Twins, see [Concepts: Security for Azure Digital Twins solutions](concepts-security.md).
+For more information about security and roles in Azure Digital Twins, see [Security for Azure Digital Twins solutions](concepts-security.md).
## Solutions
For more details about this role requirement and the assignment process, see the
## Next steps Read the setup steps for creating and authenticating a new Azure Digital Twins instance:
-* [How-to: Set up an instance and authentication (CLI)](how-to-set-up-instance-cli.md)
+* [Set up an instance and authentication (CLI)](how-to-set-up-instance-cli.md)
Read more about security and permissions on Azure Digital Twins:
-* [Concepts: Security for Azure Digital Twins solutions](concepts-security.md)
+* [Security for Azure Digital Twins solutions](concepts-security.md)
digital-twins Troubleshoot Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/troubleshoot-known-issues.md
This article provides information about known issues associated with Azure Digit
## Missing role assignment after scripted setup
-**Issue description:** Some users may experience issues with the role assignment portion of [How-to: Set up an instance and authentication (scripted)](how-to-set-up-instance-scripted.md). The script doesn't indicate failure, but the *Azure Digital Twins Data Owner* role isn't successfully assigned to the user, and this issue will impact ability to create other resources down the road.
+**Issue description:** Some users may experience issues with the role assignment portion of [Set up an instance and authentication (scripted)](how-to-set-up-instance-scripted.md). The script doesn't indicate failure, but the *Azure Digital Twins Data Owner* role isn't successfully assigned to the user, and this issue will impact the ability to create other resources down the road.
| Does this affect me? | Cause | Resolution |
| --- | --- | --- |
This article provides information about known issues associated with Azure Digit
| Does this affect me? | Cause | Resolution |
| --- | --- | --- |
-| The&nbsp;affected&nbsp;method&nbsp;is&nbsp;used&nbsp;in&nbsp;the&nbsp;following articles:<br><br>[Tutorial: Code a client app](tutorial-code.md)<br><br>[How-to: Write app authentication code](how-to-authenticate-client.md)<br><br>[Concepts: Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md) | Some users have had this issue with version **1.2.0** of the `Azure.Identity` library. | To resolve, update your applications to use a [later version](https://www.nuget.org/packages/Azure.Identity) of `Azure.Identity`. After updating the library version, the browser should load and authenticate as expected. |
+| The&nbsp;affected&nbsp;method&nbsp;is&nbsp;used&nbsp;in&nbsp;the&nbsp;following articles:<br><br>[Code a client app](tutorial-code.md)<br><br>[Write app authentication code](how-to-authenticate-client.md)<br><br>[Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md) | Some users have had this issue with version **1.2.0** of the `Azure.Identity` library. | To resolve, update your applications to use a [later version](https://www.nuget.org/packages/Azure.Identity) of `Azure.Identity`. After updating the library version, the browser should load and authenticate as expected. |
## Issue with default Azure credential authentication on Azure.Identity 1.3.0
This article provides information about known issues associated with Azure Digit
## Next steps Read more about security and permissions on Azure Digital Twins:
-* [Concepts: Security for Azure Digital Twins solutions](concepts-security.md)
+* [Security for Azure Digital Twins solutions](concepts-security.md)
digital-twins Troubleshoot Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/troubleshoot-metrics.md
Metrics are enabled by default. You can view Azure Digital Twins metrics from th
## How to view Azure Digital Twins metrics
-1. Create an Azure Digital Twins instance. You can find instructions on how to set up an Azure Digital Twins instance in [How-to: Set up an instance and authentication](how-to-set-up-instance-portal.md).
+1. Create an Azure Digital Twins instance. You can find instructions on how to set up an Azure Digital Twins instance in [Set up an instance and authentication](how-to-set-up-instance-portal.md).
2. Find your Azure Digital Twins instance in the [Azure portal](https://portal.azure.com) (you can open the page for it by typing its name into the portal search bar).
digital-twins Tutorial Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-code.md
In the directory where you created your project, create a new .json file called
> If you're using Visual Studio for this tutorial, you may want to select the newly-created JSON file and set the *Copy to Output Directory* property in the Property inspector to *Copy if Newer* or *Copy Always*. This will enable Visual Studio to find the JSON file with the default path when you run the program with **F5** during the rest of the tutorial.
> [!TIP]
-> There is a language-agnostic [DTDL Validator sample](/samples/azure-samples/dtdl-validator/dtdl-validator) that you can use to check model documents to make sure the DTDL is valid. It is built on the DTDL parser library, which you can read more about in [How-to: Parse and validate models](how-to-parse-models.md).
+> There is a language-agnostic [DTDL Validator sample](/samples/azure-samples/dtdl-validator/dtdl-validator) that you can use to check model documents to make sure the DTDL is valid. It is built on the DTDL parser library, which you can read more about in [Parse and validate models](how-to-parse-models.md).
Next, add some more code to *Program.cs* to upload the model you've just created into your Azure Digital Twins instance.
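As a hedged sketch of that upload step (assuming a `DigitalTwinsClient` named `client`, `using System.IO;`, and a model file name carried over from earlier in the tutorial as a placeholder), the .NET SDK's `CreateModelsAsync` call takes the raw DTDL JSON:

```csharp
// Read the DTDL model definition from the JSON file and upload it to the instance.
string dtdl = File.ReadAllText("SampleModel.json"); // file name is an assumption
await client.CreateModelsAsync(new[] { dtdl });
Console.WriteLine("Model uploaded.");
```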
In this tutorial, you created a .NET console client application from scratch. Yo
Continue to the next tutorial to explore the things you can do with such a sample client app:
> [!div class="nextstepaction"]
-> [Tutorial: Explore the basics with a sample client app](tutorial-command-line-app.md)
+> [Explore the basics with a sample client app](tutorial-command-line-app.md)
digital-twins Tutorial Command Line App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-command-line-app.md
[!INCLUDE [digital-twins-tutorial-selector.md](../../includes/digital-twins-tutorial-selector.md)]
-In this tutorial, you'll build a graph in Azure Digital Twins using models, twins, and relationships. The tool for this tutorial is the **sample command-line client application** for interacting with an Azure Digital Twins instance. The client app is similar to the one written in [Tutorial: Code a client app](tutorial-code.md).
+In this tutorial, you'll build a graph in Azure Digital Twins using models, twins, and relationships. The tool for this tutorial is the **sample command-line client application** for interacting with an Azure Digital Twins instance. The client app is similar to the one written in [Code a client app](tutorial-code.md).
You can use this sample to perform essential Azure Digital Twins actions such as uploading models, creating and modifying twins, and creating relationships. You can also look at the [code of the sample](https://github.com/Azure-Samples/digital-twins-samples/tree/master/) to learn about the Azure Digital Twins APIs, and practice implementing your own commands by modifying the sample project however you want.
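For instance, the relationship-creation action that the sample wraps looks roughly like this SDK sketch (the twin IDs and the relationship name `contains` are illustrative assumptions, and `client` is an authenticated `DigitalTwinsClient`):

```csharp
// Describe a relationship from a source twin to a target twin.
var relationship = new BasicRelationship
{
    Id = "Building1-contains-Room1",
    SourceId = "Building1",
    TargetId = "Room1",
    Name = "contains",
};

// Create (or replace) the relationship on the source twin.
await client.CreateOrReplaceRelationshipAsync<BasicRelationship>(
    relationship.SourceId, relationship.Id, relationship);
```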
CreateModels Room
```
As models cannot be overwritten, this will now return a service error.
-For the details on how to delete existing models, see [How-to: Manage DTDL models](how-to-manage-model.md).
+For the details on how to delete existing models, see [Manage DTDL models](how-to-manage-model.md).
```cmd/sh
Response 409: Service request failed. Status: 409 (Conflict)
In this tutorial, you got started with Azure Digital Twins by building a graph i
Continue to the next tutorial to combine Azure Digital Twins with other Azure services to complete a data-driven, end-to-end scenario:
> [!div class="nextstepaction"]
-> [Tutorial: Connect an end-to-end solution](tutorial-end-to-end.md)
+> [Connect an end-to-end solution](tutorial-end-to-end.md)
digital-twins Tutorial Command Line Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-command-line-cli.md
To get the files on your machine, use the navigation links above and copy the fi
To work with Azure Digital Twins in this article, you first need to **set up an Azure Digital Twins instance** and the required permissions for using it. If you already have an Azure Digital Twins instance set up from previous work, you can use that instance.
-Otherwise, follow the instructions in [How-to: Set up an instance and authentication](how-to-set-up-instance-cli.md). The instructions also contain steps to verify that you've completed each step successfully and are ready to move on to using your new instance.
+Otherwise, follow the instructions in [Set up an instance and authentication](how-to-set-up-instance-cli.md). The instructions also contain steps to verify that you've completed each step successfully and are ready to move on to using your new instance.
After you set up your Azure Digital Twins instance, make a note of the following values that you'll need to connect to the instance later:
* the instance's **_host name_**
In this tutorial, you got started with Azure Digital Twins by building a graph i
Continue to the next tutorial to combine Azure Digital Twins with other Azure services to complete a data-driven, end-to-end scenario:
> [!div class="nextstepaction"]
-> [Tutorial: Connect an end-to-end solution](tutorial-end-to-end.md)
+> [Connect an end-to-end solution](tutorial-end-to-end.md)
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-end-to-end.md
There are two settings that need to be set for the function app to access your A
#### Assign access role
-The first setting gives the function app the **Azure Digital Twins Data Owner** role in the Azure Digital Twins instance. This role is required for any user or function that wants to perform many data plane activities on the instance. You can read more about security and role assignments in [Concepts: Security for Azure Digital Twins solutions](concepts-security.md).
+The first setting gives the function app the **Azure Digital Twins Data Owner** role in the Azure Digital Twins instance. This role is required for any user or function that wants to perform many data plane activities on the instance. You can read more about security and role assignments in [Security for Azure Digital Twins solutions](concepts-security.md).
1. Use the following command to see the details of the system-managed identity for the function. Take note of the **principalId** field in the output.
In this tutorial, you created an end-to-end scenario that shows Azure Digital Tw
Next, start looking at the concept documentation to learn more about elements you worked with in the tutorial:
> [!div class="nextstepaction"]
-> [Concepts: Custom models](concepts-models.md)
+> [Custom models](concepts-models.md)
dns Dns Reverse Dns For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/dns-reverse-dns-for-azure-services.md
The technical ability to send email directly from an Azure deployment depends on
## Next steps
* For more information on reverse DNS, see [reverse DNS lookup on Wikipedia](https://en.wikipedia.org/wiki/Reverse_DNS_lookup).
-* Learn how to [host the reverse lookup zone for your ISP-assigned IP range in Azure DNS](dns-reverse-dns-for-azure-services.md).
+* Learn how to [host the reverse lookup zone for your ISP-assigned IP range in Azure DNS](dns-reverse-dns-hosting.md).
event-grid System Topics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/system-topics.md
Title: System topics in Azure Event Grid description: Describes system topics in Azure Event Grid. Previously updated : 09/24/2020 Last updated : 07/19/2021 # System topics in Azure Event Grid
A system topic in Event Grid represents one or more events published by Azure se
## Azure services that support system topics Here is the current list of Azure services that support creation of system topics on them.
+- [Azure API Management](event-schema-api-management.md)
- [Azure App Configuration](event-schema-app-configuration.md)
- [Azure App Service](event-schema-app-service.md)
- [Azure Blob Storage](event-schema-blob-storage.md)
frontdoor Front Door Geo Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-geo-filtering.md
You can configure a geo-filtering policy for your Front Door by using [Azure Pow
| BZ | Belize|
| CA | Canada|
| CD | Democratic Republic of the Congo|
+| CG | Republic of the Congo |
| CF | Central African Republic|
| CH | Switzerland|
| CI | Cote d'Ivoire|
frontdoor Front Door Url Rewrite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-url-rewrite.md
For example, if we read across the second row, it's saying that for incoming req
| www\.contoso.com/foo/ | /foo/\* | / | /fwd/ | /foo/ | /foo/bar/ |
| www\.contoso.com/foo/**bar** | /foo/\* | /**bar** | /fwd/**bar** | /foo/**bar** | /foo/bar/**bar** |
+> [!NOTE]
+> Azure Front Door only supports URL rewrite from a static path to another static path. Preserve unmatched path is supported with Azure Front Door Standard/Premium SKU. See [preserve unmatched path](standard-premium/concept-rule-set-url-redirect-and-rewrite.md#preserve-unmatched-path) for more details.
+>
+ ## Optional settings There are additional optional settings you can also specify for any given routing rule settings:
governance Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Policy description: Sample Azure Resource Graph queries for Azure Policy showing use of resource types and tables to access Azure Policy related resources and properties. Previously updated : 07/07/2021 Last updated : 07/21/2021
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/reference/supported-tables-resources.md
For sample queries for this table, see [Resource Graph sample queries for resour
- Sample query: [Key vaults with subscription name](../samples/samples-by-category.md#key-vaults-with-subscription-name)
- Sample query: [Remove columns from results](../samples/samples-by-category.md#remove-columns-from-results)
- Microsoft.Resources/subscriptions/resourceGroups (Resource groups)
- - Sample query: [Combine results from two queries into a single result](../samples/samples-by-category.md#combine-results-from-two-queries-into-a-single-result)
- Sample query: [Find storage accounts with a specific case-insensitive tag on the resource group](../samples/samples-by-category.md#find-storage-accounts-with-a-specific-case-insensitive-tag-on-the-resource-group)
- Sample query: [Find storage accounts with a specific case-sensitive tag on the resource group](../samples/samples-by-category.md#find-storage-accounts-with-a-specific-case-sensitive-tag-on-the-resource-group)
governance Samples By Category https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/samples/samples-by-category.md
Title: List of sample Azure Resource Graph queries by category description: List sample queries for Azure Resource-Graph. Categories include Tags, Azure Advisor, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 07/07/2021 Last updated : 07/21/2021
Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature
[!INCLUDE [azure-resource-graph-samples-cat-azure-arc](../../../../includes/resource-graph/samples/bycat/azure-arc.md)]
-## Azure Arc enabled Kubernetes
+## Azure Arc-enabled Kubernetes
[!INCLUDE [azure-resource-graph-samples-cat-azure-arc-enabled-kubernetes](../../../../includes/resource-graph/samples/bycat/azure-arc-enabled-kubernetes.md)]
governance Samples By Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/samples/samples-by-table.md
Title: List of sample Azure Resource Graph queries by table description: List sample queries for Azure Resource-Graph. Tables include Resources, ResourceContainers, PolicyResources, and more. Previously updated : 07/07/2021 Last updated : 07/21/2021
iot-central Quick Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/quick-configure-rules.md
The smartphone app sends telemetry that includes values from the accelerometer s
1. Enter **Phone turned over** as the rule name.
-1. In the **Target devices** section, select **Smartphone** as the **Device template**. This option filters the devices the rule applies to by device template type. You can add more filter criteria by selecting **+ Filter**.
+1. In the **Target devices** section, select **IoT Plug and Play mobile** as the **Device template**. This option filters the devices the rule applies to by device template type. You can add more filter criteria by selecting **+ Filter**.
1. In the **Conditions** section, you define what triggers your rule. Use the following information to define a single condition based on accelerometer z-axis telemetry. This rule uses aggregation so you receive a maximum of one email for each device every five minutes:
iot-edge How To Collect And Transport Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-collect-and-transport-metrics.md
All configuration for the metrics-collector is done using environment variables.
| `UploadTarget` | Controls whether metrics are sent directly to Azure Monitor over HTTPS or to IoT Hub as D2C messages. For more information, see [upload target](#upload-target). <br><br>Can be either **AzureMonitor** or **IoTMessage** <br><br> **Not required** <br><br> Default value: *AzureMonitor* |
| `LogAnalyticsWorkspaceId` | [Log Analytics workspace ID](../azure-monitor/agents/log-analytics-agent.md#workspace-id-and-key). <br><br>**Required** only if *UploadTarget* is *AzureMonitor* <br><br>Default value: *none* |
| `LogAnalyticsSharedKey` | [Log Analytics workspace key](../azure-monitor/agents/log-analytics-agent.md#workspace-id-and-key). <br><br>**Required** only if *UploadTarget* is *AzureMonitor* <br><br> Default value: *none* |
+| `ScrapeFrequencySecs` | Recurring time interval in seconds at which to collect and transport metrics.<br><br> Example: *600* <br><br> **Not required** <br><br> Default value: *300* |
| `MetricsEndpointsCSV` | Comma-separated list of endpoints to collect Prometheus metrics from. All module endpoints to collect metrics from must appear in this list.<br><br> Example: *http://edgeAgent:9600/metrics, http://edgeHub:9600/metrics, http://MetricsSpewer:9417/metrics* <br><br> **Not required** <br><br> Default value: *http://edgeHub:9600/metrics, http://edgeAgent:9600/metrics* |
| `AllowedMetrics` | List of metrics to collect, all other metrics will be ignored. Set to an empty string to disable. For more information, see [allow and disallow lists](#allow-and-disallow-lists). <br><br>Example: *metricToScrape{quantile=0.99}[endpoint=http://MetricsSpewer:9417/metrics]*<br><br> **Not required** <br><br> Default value: *empty* |
| `BlockedMetrics` | List of metrics to ignore. Overrides *AllowedMetrics*, so a metric will not be reported if it is included in both lists. For more information, see [allow and disallow lists](#allow-and-disallow-lists). <br><br> Example: *metricToIgnore{quantile=0.5}[endpoint=http://VeryNoisyModule:9001/metrics], docker_container_disk_write_bytes*<br><br> **Not required** <br><br>Default value: *empty* |
To enable monitoring in this scenario, the metrics-collector module can be confi
>[!TIP] >Remember to add an edgeHub route to deliver metrics messages from the collector module to IoT Hub. It looks like `FROM /messages/modules/replace-with-collector-module-name/* INTO $upstream`.
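+In a deployment manifest, that route might appear in the `$edgeHub` routes section as in the following sketch, where `IoTEdgeMetricsCollector` is a placeholder for your collector module's name:
+
+```json
+"routes": {
+  "MetricsCollectorToUpstream": "FROM /messages/modules/IoTEdgeMetricsCollector/* INTO $upstream"
+}
+```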
-This option does require extra setup to deliver metrics messages arriving at IoT Hub to the Log Analytics workspace. Without this set up, the other portions of the integration like [curated visualizations](how-to-explore-curated-visualizations.md) and [alerts](how-to-create-alerts.md) will not work.
+This option does require [extra setup](how-to-collect-and-transport-metrics.md#sample-cloud-workflow) to deliver metrics messages arriving at IoT Hub to the Log Analytics workspace. Without this setup, other portions of the integration, like [curated visualizations](how-to-explore-curated-visualizations.md) and [alerts](how-to-create-alerts.md), will not work.
>[!NOTE] >Be aware of additional costs with this option. Metrics messages will count against your IoT Hub message quota. You will also be charged for Log Analytics ingestion and cloud workflow resources.
iot-edge How To Visual Studio Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-visual-studio-develop-module.md
Title: Develop and debug modules in Visual Studio - Azure IoT Edge
description: Use Visual Studio with Azure IoT Tools to develop a C or C# IoT Edge module and push it from your IoT Hub to an IoT device, as configured by a deployment manifest. -+ Previously updated : 3/27/2020 Last updated : 07/19/2021
[!INCLUDE [iot-edge-version-all-supported](../../includes/iot-edge-version-all-supported.md)]
-You can turn your business logic into modules for Azure IoT Edge. This article shows you how to use Visual Studio 2019 as the main tool to develop and debug modules.
+This article shows you how to use Visual Studio 2019 to develop and debug Azure IoT Edge modules.
-The Azure IoT Edge Tools for Visual Studio provides the following benefits:
+The Azure IoT Edge Tools for Visual Studio extension provides the following benefits:
-- Create, edit, build, run, and debug Azure IoT Edge solutions and modules on your local development computer.-- Deploy your Azure IoT Edge solution to Azure IoT Edge device via Azure IoT Hub.-- Code your Azure IoT modules in C or C# while having all of the benefits of Visual Studio development.-- Manage Azure IoT Edge devices and modules with UI.
+* Create, edit, build, run, and debug IoT Edge solutions and modules on your local development computer.
+* Deploy your IoT Edge solution to an IoT Edge device via Azure IoT Hub.
+* Code your Azure IoT modules in C or C# while having all of the benefits of Visual Studio development.
+* Manage IoT Edge devices and modules with UI.
-This article shows you how to use the Azure IoT Edge Tools for Visual Studio 2019 to develop your IoT Edge modules. You also learn how to deploy your project to your Azure IoT Edge device. Currently, Visual Studio 2019 provides support for modules written in C and C#. The supported device architectures are Windows X64 and Linux X64 or ARM32. For more information about supported operating systems, languages, and architectures, see [Language and architecture support](module-development.md#language-and-architecture-support).
+This article shows you how to use the Azure IoT Edge Tools for Visual Studio 2019 to develop your IoT Edge modules. You also learn how to deploy your project to an IoT Edge device. Currently, Visual Studio 2019 provides support for modules written in C and C#. The supported device architectures are Windows X64 and Linux X64 or ARM32. For more information about supported operating systems, languages, and architectures, see [Language and architecture support](module-development.md#language-and-architecture-support).
## Prerequisites
-This article assumes that you use a computer or virtual machine running Windows as your development machine. On Windows computers, you can develop either Windows or Linux modules. To develop Windows modules, use a Windows computer running version 1809/build 17763 or newer. To develop Linux modules, use a Windows computer that meets the [requirements for Docker Desktop](https://docs.docker.com/docker-for-windows/install/#what-to-know-before-you-install).
+This article assumes that you use a machine running Windows as your development machine. On Windows computers, you can develop either Windows or Linux modules.
-Because this article uses Visual Studio 2019 as the main development tool, install Visual Studio. Make sure you include the **Azure development** and **Desktop development with C++** workloads in your Visual Studio 2019 installation. You can [Modify Visual Studio 2019](/visualstudio/install/modify-visual-studio?view=vs-2019&preserve-view=true) to add the required workloads.
+* To develop modules with **Windows containers**, use a Windows computer running version 1809/build 17763 or newer.
+* To develop modules with **Linux containers**, use a Windows computer that meets the [requirements for Docker Desktop](https://docs.docker.com/docker-for-windows/install/#what-to-know-before-you-install).
+
+Install Visual Studio on your development machine. Make sure you include the **Azure development** and **Desktop development with C++** workloads in your Visual Studio 2019 installation. You can [Modify Visual Studio 2019](/visualstudio/install/modify-visual-studio?view=vs-2019&preserve-view=true) to add the required workloads.
After your Visual Studio 2019 is ready, you also need the following tools and components: -- Download and install [Azure IoT Edge Tools](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) from the Visual Studio marketplace to create an IoT Edge project in Visual Studio 2019.
+* Download and install [Azure IoT Edge Tools](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) from the Visual Studio marketplace to create an IoT Edge project in Visual Studio 2019.
-> [!TIP]
-> If you are using Visual Studio 2017, please download and install [Azure IoT Edge Tools](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vsiotedgetools) for VS 2017 from the Visual Studio marketplace
+ > [!TIP]
+ > If you are using Visual Studio 2017, download and install [Azure IoT Edge Tools for VS 2017](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vsiotedgetools) from the Visual Studio marketplace.
-- Download and install [Docker Community Edition](https://docs.docker.com/install/) on your development machine to build and run your module images. You'll need to set Docker CE to run in either Linux container mode or Windows container mode.
+* Download and install [Docker Community Edition](https://docs.docker.com/install/) on your development machine to build and run your module images. You'll need to set Docker CE to run in either Linux container mode or Windows container mode, depending on the type of modules you are developing.
-- Set up your local development environment to debug, run, and test your IoT Edge solution by installing the [Azure IoT EdgeHub Dev Tool](https://pypi.org/project/iotedgehubdev/). Install [Python (2.7/3.6+) and Pip](https://www.python.org/) and then install the **iotedgehubdev** package by running the following command in your terminal. Make sure your Azure IoT EdgeHub Dev Tool version is greater than 0.3.0.
+* Set up your local development environment to debug, run, and test your IoT Edge solution by installing the [Azure IoT EdgeHub Dev Tool](https://pypi.org/project/iotedgehubdev/). Install [Python (2.7/3.6+) and Pip](https://www.python.org/) and then install the **iotedgehubdev** package by running the following command in your terminal. Make sure your Azure IoT EdgeHub Dev Tool version is greater than 0.3.0.
```cmd pip install --upgrade iotedgehubdev ``` -- Clone the repository and install the Vcpkg library manager, and then install the **azure-iot-sdk-c package** for Windows.
+* Install the Vcpkg library manager, and then install the **azure-iot-sdk-c package** for Windows.
```cmd git clone https://github.com/Microsoft/vcpkg
After your Visual Studio 2019 is ready, you also need the following tools and co
vcpkg.exe --triplet x64-windows integrate install ``` -- [Azure Container Registry](../container-registry/index.yml) or [Docker Hub](https://docs.docker.com/docker-hub/repos/#viewing-repository-tags).
+* Create an instance of [Azure Container Registry](../container-registry/index.yml) or [Docker Hub](https://docs.docker.com/docker-hub/repos/#viewing-repository-tags) to store your module images.
> [!TIP] > You can use a local Docker registry for prototype and testing purposes instead of a cloud registry. -- To test your module on a device, you'll need an active IoT hub with at least one IoT Edge device. To use your computer as an IoT Edge device, follow the steps in the quickstart for [Linux](quickstart-linux.md) or [Windows](quickstart.md). If you are running IoT Edge daemon on your development machine, you might need to stop EdgeHub and EdgeAgent before you start development in Visual Studio.
+* To test your module on a device, you'll need an active IoT hub with at least one IoT Edge device. To quickly create an IoT Edge device for testing, follow the steps in the quickstart for [Linux](quickstart-linux.md) or [Windows](quickstart.md). If you are running the IoT Edge daemon on your development machine, you might need to stop EdgeHub and EdgeAgent before you start development in Visual Studio.
### Check your tools version
After your Visual Studio 2019 is ready, you also need the following tools and co
1. After the update is complete, select **Close** and restart Visual Studio.
-### Create an Azure IoT Edge project
+## Create an Azure IoT Edge project
-The Azure IoT Edge project template in Visual Studio creates a project that can be deployed to Azure IoT Edge devices in Azure IoT Hub. First you create an Azure IoT Edge solution, and then you generate the first module in that solution. Each IoT Edge solution can contain more than one module.
+The IoT Edge project template in Visual Studio creates a solution that can be deployed to IoT Edge devices. First you create an Azure IoT Edge solution, and then you generate the first module in that solution. Each IoT Edge solution can contain more than one module.
> [!TIP] > The IoT Edge project structure created by Visual Studio is not the same as in Visual Studio Code.
-1. In Visual Studio new project dialog, search and select **Azure IoT Edge** project and click **Next**. In project configuration window, enter a name for your project and specify the location, and then select **Create**. The default project name is **AzureIoTEdgeApp1**.
+1. In Visual Studio, create a new project.
+
+1. On the **Create a new project** page, search for **Azure IoT Edge**. Select the project that matches the platform and architecture for your IoT Edge device, and click **Next**.
![Create New Project](./media/how-to-visual-studio-develop-csharp-module/create-new.png)
-1. In the **Add IoT Edge Application and Module** window, select either **C# Module** or **C Module** and then specify your module name and module image repository. Visual Studio autopopulates the module name with **localhost:5000/<your module name\>**. Replace it with your own registry information. If you use a local Docker registry for testing, then **localhost** is fine. If you use Azure Container Registry, then use the login server from your registry's settings. The login server looks like **_\<registry name\>_.azurecr.io**. Only replace the **localhost:5000** part of the string so that the final result looks like **\<*registry name*\>.azurecr.io/_\<your module name\>_**. The default module name is **IotEdgeModule1**
+1. On the **Configure your new project** page, enter a name for your project and specify the location, then select **Create**.
+
+1. On the **Add Module** window, select the type of module you want to develop. You can also select **Existing module** to add an existing IoT Edge module to your deployment. Specify your module name and module image repository.
+
+ Visual Studio autopopulates the repository URL with **localhost:5000/<module name\>**. If you use a local Docker registry for testing, then **localhost** is fine. If you use Azure Container Registry, then replace **localhost:5000** with the login server from your registry's settings. The login server looks like **_\<registry name\>_.azurecr.io**. The final result should look like **\<*registry name*\>.azurecr.io/_\<module name\>_**.
- ![Add Application and Module](./media/how-to-visual-studio-develop-csharp-module/add-application-and-module.png)
+ Select **Add** to add your module to the project.
-1. Select **OK** to create the Azure IoT Edge solution with a module that uses either C# or C.
+ ![Add Application and Module](./media/how-to-visual-studio-develop-csharp-module/add-module.png)
-Now you have an **AzureIoTEdgeApp1.Linux.Amd64** project or an **AzureIoTEdgeApp1.Windows.Amd64** project, and also an **IotEdgeModule1** project in your solution. Each **AzureIoTEdgeApp1** project has a `deployment.template.json` file, which defines the modules you want to build and deploy for your IoT Edge solution, and also defines the routes between modules. The default solution has a **SimulatedTemperatureSensor** module and a **IotEdgeModule1** module. The **SimulatedTemperatureSensor** module generates simulated data to the **IotEdgeModule1** module, while the default code in the **IotEdgeModule1** module directly pipes received messages to Azure IoT Hub.
+Now you have an IoT Edge project and an IoT Edge module in your Visual Studio solution.
-To see how the simulated temperature sensor works, view the [SimulatedTemperatureSensor.csproj source code](https://github.com/Azure/iotedge/tree/master/edge-modules/SimulatedTemperatureSensor).
+The module folder contains a file for your module code, named either `program.cs` or `main.c` depending on the language you chose. This folder also contains a file named `module.json` that describes the metadata of your module. Various Docker files provide the information needed to build your module as a Windows or Linux container.
-The **IotEdgeModule1** project is a .NET Core 2.1 console application if it's a C# module. It contains required Docker files you need for your IoT Edge device running with either a Windows container or Linux container. The `module.json` file describes the metadata of a module. The actual module code, which takes Azure IoT Device SDK as a dependency, is found in the `Program.cs` or `main.c` file.
+The project folder contains a list of all the modules included in that project. Right now it should show only one module, but you can add more. For more information about adding modules to a project, see the [Build and debug multiple modules](#build-and-debug-multiple-modules) section later in this article.
+
+The project folder also contains a file named `deployment.template.json`. This file is a template of an IoT Edge deployment manifest, which defines all the modules that will run on a device along with how they will communicate with each other. For more information about deployment manifests, see [Learn how to deploy modules and establish routes](module-composition.md). If you open this deployment template, you see that the two runtime modules, **edgeAgent** and **edgeHub** are included, along with the custom module that you created in this Visual Studio project. A fourth module named **SimulatedTemperatureSensor** is also included. This default module generates simulated data that you can use to test your modules, or delete if it's not necessary. To see how the simulated temperature sensor works, view the [SimulatedTemperatureSensor.csproj source code](https://github.com/Azure/iotedge/tree/master/edge-modules/SimulatedTemperatureSensor).
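+As a rough sketch only, the **modules** section of that template resembles the following trimmed example. The system modules **edgeAgent** and **edgeHub** appear in a separate `systemModules` section and are omitted here, and the module name and image references depend on the choices you made when you created the project:
+
+```json
+"modules": {
+  "SimulatedTemperatureSensor": {
+    "type": "docker",
+    "status": "running",
+    "restartPolicy": "always",
+    "settings": {
+      "image": "mcr.microsoft.com/azureiotedge-simulated-temperature-sensor:1.0",
+      "createOptions": ""
+    }
+  },
+  "IotEdgeModule1": {
+    "type": "docker",
+    "status": "running",
+    "restartPolicy": "always",
+    "settings": {
+      "image": "${MODULES.IotEdgeModule1}",
+      "createOptions": ""
+    }
+  }
+}
+```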
## Develop your module
-The default module code that comes with the solution is located at **IotEdgeModule1** > **Program.cs** (for C#) or **main.c** (C). The module and the `deployment.template.json` file are set up so that you can build the solution, push it to your container registry, and deploy it to a device to start testing without touching any code. The module is built to take input from a source (in this case, the **SimulatedTemperatureSensor** module that simulates data) and pipe it to Azure IoT Hub.
+When you add a new module, it comes with default code that is ready to be built and deployed to a device so that you can start testing without touching any code. The module code is located within the module folder in a file named `Program.cs` (for C#) or `main.c` (for C).
+
+The default solution is built so that the simulated data from the **SimulatedTemperatureSensor** module is routed to your module, which takes the input and then sends it to IoT Hub.
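+Those default routes in `deployment.template.json` look similar to the following example, assuming the default module name **IotEdgeModule1**:
+
+```json
+"routes": {
+  "IotEdgeModule1ToIoTHub": "FROM /messages/modules/IotEdgeModule1/outputs/* INTO $upstream",
+  "sensorToIotEdgeModule1": "FROM /messages/modules/SimulatedTemperatureSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/IotEdgeModule1/inputs/input1\")"
+}
+```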
When you're ready to customize the module template with your own code, use the [Azure IoT Hub SDKs](../iot-hub/iot-hub-devguide-sdks.md) to build modules that address the key needs for IoT solutions such as security, device management, and reliability.
-## Initialize iotedgehubdev with IoT Edge device connection string
+## Set up the iotedgehubdev testing tool
+
+The IoT edgeHub dev tool provides a local development and debug experience. The tool helps start IoT Edge modules without the IoT Edge runtime so that you can create, develop, test, run, and debug IoT Edge modules and solutions locally. You don't have to push images to a container registry and deploy them to a device for testing.
-1. Copy the connection string of any IoT Edge device from **Primary Connection String** in the Visual Studio Cloud Explorer. Be sure not to copy the connection string of a non-Edge device, as the icon of an IoT Edge device is different from the icon of a non-Edge device.
+For more information, see [Azure IoT EdgeHub Dev Tool](https://pypi.org/project/iotedgehubdev/).
- ![Copy Edge Device Connection String](./media/how-to-visual-studio-develop-csharp-module/copy-edge-conn-string.png)
+To initialize the tool, provide an IoT Edge device connection string from IoT Hub.
-1. From the **Tools** menu, select **Azure IoT Edge Tools** > **Setup IoT Edge Simulator**, paste the connection string and click **OK**.
+1. Retrieve the connection string of an IoT Edge device from the Azure portal, the Azure CLI, or the Visual Studio Cloud Explorer.
- ![Open Set Edge Connection String Window](./media/how-to-visual-studio-develop-csharp-module/set-edge-conn-string.png)
+1. From the **Tools** menu, select **Azure IoT Edge Tools** > **Setup IoT Edge Simulator**.
-1. Enter the connection string from the first step and then select **OK**.
+1. Paste the connection string and click **OK**.
> [!NOTE] > You need to follow these steps only once on your development computer as the results are automatically applied to all subsequent Azure IoT Edge solutions. This procedure can be followed again if you need to change to a different connection string.
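+For step 1 in the previous procedure, one way to retrieve the device connection string is with the Azure CLI, assuming the Azure IoT extension is installed. The device and hub names in this sketch are placeholders:
+
+```cmd
+az iot hub device-identity connection-string show --device-id <edge-device-id> --hub-name <iot-hub-name> --output tsv
+```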
-## Build and debug single module
+## Build and debug a single module
Typically, you'll want to test and debug each module before running it within an entire solution with multiple modules.
-1. In **Solution Explorer**, right-click **IotEdgeModule1** and select **Set as StartUp Project** from the context menu.
+1. In **Solution Explorer**, right-click the module folder and select **Set as StartUp Project** from the menu.
![Set Start-up Project](./media/how-to-visual-studio-develop-csharp-module/module-start-up-project.png)
-1. Press **F5** or click the button below to run the module; it may take 10&ndash;20 seconds the first time you do so.
+1. Press **F5** or click the run button in the toolbar to run the module. It may take 10&ndash;20 seconds the first time you do so.
![Run Module](./media/how-to-visual-studio-develop-csharp-module/run-module.png) 1. You should see a .NET Core console app start if the module has been initialized successfully.
- ![Module Running](./media/how-to-visual-studio-develop-csharp-module/single-module-run.png)
+1. Set a breakpoint to inspect the module.
-1. If developing in C#, set a breakpoint in the `PipeMessage()` function in **Program.cs**; if using C, set a breakpoint in the `InputQueue1Callback()` function in **main.c**. You can then test it by sending a message by running the following command in a **Git Bash** or **WSL Bash** shell. (You cannot run the `curl` command from a PowerShell or command prompt.)
+ * If developing in C#, set a breakpoint in the `PipeMessage()` function in **Program.cs**.
+ * If using C, set a breakpoint in the `InputQueue1Callback()` function in **main.c**.
+
+1. Test the module by sending a message by running the following command in a **Git Bash** or **WSL Bash** shell. (You cannot run the `curl` command from a PowerShell or command prompt.)
```bash curl --header "Content-Type: application/json" --request POST --data '{"inputName": "input1","data":"hello world"}' http://localhost:53000/api/v1/messages
Typically, you'll want to test and debug each module before running it within an
![Debug Single Module](./media/how-to-visual-studio-develop-csharp-module/debug-single-module.png)
- The breakpoint should be triggered. You can watch variables in the Visual Studio **Locals** window.
+ The breakpoint should be triggered. You can watch variables in the Visual Studio **Locals** window.
> [!TIP] > You can also use [PostMan](https://www.getpostman.com/) or other API tools to send messages instead of `curl`. 1. Press **Ctrl + F5** or click the stop button to stop debugging.
-## Build and debug IoT Edge solution with multiple modules
+## Build and debug multiple modules
After you're done developing a single module, you might want to run and debug an entire solution with multiple modules.
-1. In **Solution Explorer**, add a second module to the solution by right-clicking **AzureIoTEdgeApp1** and selecting **Add** > **New IoT Edge Module**. The default name of the second module is **IotEdgeModule2** and will act as another pipe module.
+1. In **Solution Explorer**, add a second module to the solution by right-clicking the project folder. On the menu, select **Add** > **New IoT Edge Module**.
+
+ ![Add a new module to an existing IoT Edge project](./media/how-to-visual-studio-develop-csharp-module/add-new-module.png)
-1. Open the file `deployment.template.json` and you'll see **IotEdgeModule2** has been added in the **modules** section. Replace the **routes** section with the following. If you have customized your module names, make sure you update these names to match.
+1. Open the file `deployment.template.json` and you'll see that the new module has been added in the **modules** section. A new route was also added to the **routes** section to send messages from the new module to IoT Hub. If you want to send data from the simulated temperature sensor to the new module, add another route like the following example:
```json
- "routes": {
- "IotEdgeModule1ToIoTHub": "FROM /messages/modules/IotEdgeModule1/outputs/* INTO $upstream",
- "sensorToIotEdgeModule1": "FROM /messages/modules/SimulatedTemperatureSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/IotEdgeModule1/inputs/input1\")",
- "IotEdgeModule2ToIoTHub": "FROM /messages/modules/IotEdgeModule2/outputs/* INTO $upstream",
- "sensorToIotEdgeModule2": "FROM /messages/modules/SimulatedTemperatureSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/IotEdgeModule2/inputs/input1\")"
- },
+ "sensorTo<NewModuleName>": "FROM /messages/modules/SimulatedTemperatureSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/<NewModuleName>/inputs/input1\")"
```
-1. Right-click **AzureIoTEdgeApp1** and select **Set as StartUp Project** from the context menu.
+1. Right-click the project folder and select **Set as StartUp Project** from the context menu.
1. Create your breakpoints and then press **F5** to run and debug multiple modules simultaneously. You should see multiple .NET Core console app windows, with each window representing a different module.
After you're done developing a single module, you might want to run and debug an
## Build and push images
-1. Make sure **AzureIoTEdgeApp1** is the start-up project. Select either **Debug** or **Release** as the configuration to build for your module images.
+1. Make sure the IoT Edge project is the start-up project, not one of the individual modules. Select either **Debug** or **Release** as the configuration to build for your module images.
> [!NOTE] > When choosing **Debug**, Visual Studio uses `Dockerfile.(amd64|windows-amd64).debug` to build Docker images. This includes the .NET Core command-line debugger VSDBG in your container image while building it. For production-ready IoT Edge modules, we recommend that you use the **Release** configuration, which uses `Dockerfile.(amd64|windows-amd64)` without VSDBG.
After you're done developing a single module, you might want to run and debug an
} ```
-1. In **Solution Explorer**, right-click **AzureIoTEdgeApp1** and select **Build and Push IoT Edge Modules** to build and push the Docker image for each module.
+ >[!NOTE]
+ >This article uses admin login credentials for Azure Container Registry, which are convenient for development and test scenarios. When you're ready for production scenarios, we recommend a least-privilege authentication option like service principals. For more information, see [Manage access to your container registry](production-checklist.md#manage-access-to-your-container-registry).
+
+1. In **Solution Explorer**, right-click the project folder and select **Build and Push IoT Edge Modules** to build and push the Docker image for each module.
## Deploy the solution
In the quickstart article that you used to set up your IoT Edge device, you depl
## View generated data
-1. To monitor the D2C message for a specific IoT-Edge device, select it in your IoT hub in **Cloud Explorer** and then click **Start Monitoring Built-in Event Endpoint** in the **Action** window.
+1. To monitor the D2C message for a specific IoT Edge device, select it in your IoT hub in **Cloud Explorer** and then click **Start Monitoring Built-in Event Endpoint** in the **Action** window.
1. To stop monitoring data, select **Stop Monitoring Built-in Event Endpoint** in the **Action** window. ## Next steps
-To develop custom modules for your IoT Edge devices, [Understand and use Azure IoT Hub SDKs](../iot-hub/iot-hub-devguide-sdks.md).
+To develop custom modules for your IoT Edge devices, see [Understand and use Azure IoT Hub SDKs](../iot-hub/iot-hub-devguide-sdks.md).
load-balancer Skus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/skus.md
Standalone VMs, availability sets, and virtual machine scale sets can be connect
| **[Health probes](./load-balancer-custom-probe-overview.md#types)** | TCP, HTTP, HTTPS | TCP, HTTP | | **[Health probe down behavior](./load-balancer-custom-probe-overview.md#probedown)** | TCP connections stay alive on an instance probe down __and__ on all probes down. | TCP connections stay alive on an instance probe down. All TCP connections end when all probes are down. | | **Availability Zones** | Zone-redundant and zonal frontends for inbound and outbound traffic. | Not available |
-| **Diagnostics** | [Azure Monitor multi-dimensional metrics](./load-balancer-standard-diagnostics.md) | [Azure Monitor logs](./monitor-load-balancer.md) |
+| **Diagnostics** | [Azure Monitor multi-dimensional metrics](./load-balancer-standard-diagnostics.md) | Not supported |
| **HA Ports** | [Available for Internal Load Balancer](./load-balancer-ha-ports-overview.md) | Not available | | **Secure by default** | Closed to inbound flows unless allowed by a network security group. Internal traffic from the virtual network to the internal load balancer is allowed. | Open by default. Network security group optional. | | **Outbound Rules** | [Declarative outbound NAT configuration](./load-balancer-outbound-connections.md#outboundrules) | Not available |
Standalone VMs, availability sets, and virtual machine scale sets can be connect
| **[Multiple front ends](./load-balancer-multivip-overview.md)** | Inbound and [outbound](./load-balancer-outbound-connections.md) | Inbound only | | **Management Operations** | Most operations < 30 seconds | 60-90+ seconds typical | | **SLA** | [99.99%](https://azure.microsoft.com/support/legal/sla/load-balancer/v1_0/) | Not available |
+| **Global VNet Peering Support** | Standard ILB is supported via Global VNet Peering | Not supported |
For more information, see [Load balancer limits](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer). For Standard Load Balancer details, see [overview](./load-balancer-overview.md), [pricing](https://aka.ms/lbpricing), and [SLA](https://aka.ms/lbsla).
For more information, see [Load balancer limits](../azure-resource-manager/manag
- Learn about [Health Probes](load-balancer-custom-probe-overview.md). - Learn about using [Load Balancer for outbound connections](load-balancer-outbound-connections.md). - Learn about [Standard Load Balancer with HA Ports load balancing rules](load-balancer-ha-ports-overview.md).-- Learn more about [Network Security Groups](../virtual-network/network-security-groups-overview.md).
+- Learn more about [Network Security Groups](../virtual-network/network-security-groups-overview.md).
logic-apps Logic Apps Enterprise Integration X12 997 Acknowledgment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-enterprise-integration-x12-997-acknowledgment.md
+
+ Title: X12 997 acknowledgments and error codes
+description: Learn about 997 functional acknowledgments and error codes for X12 messages in Azure Logic Apps.
+
+ms.suite: integration
++++ Last updated : 07/15/2021++
+# 997 functional acknowledgments and error codes for X12 messages in Azure Logic Apps
+
+In Azure Logic Apps, you can create workflows that handle X12 messages for Electronic Data Interchange (EDI) communication when you use **X12** operations. In EDI messaging, acknowledgments provide the status from processing an EDI interchange. When receiving an interchange, the [**X12 Decode** action](logic-apps-enterprise-integration-x12-decode.md) can return one or more types of acknowledgments to the sender, based on which acknowledgment types are enabled and the specified level of validation.
+
+For example, the receiver reports the status from validating the Functional Group Header (GS) and Functional Group Trailer (GE) in the received X12-encoded message by sending a *997 functional acknowledgment (ACK)* along with each error that happens during processing. The **X12 Decode** action always generates a 4010-compliant 997 ACK, while both the [**X12 Encode** action](logic-apps-enterprise-integration-x12-encode.md) and **X12 Decode** action can validate a 5010-compliant 997 ACK.
+
+The receiver sends the 997 ACK inside a Functional Group Header (GS) and Functional Group Trailer (GE) envelope. However, this GS and GE envelope is no different than in any other transaction set.
+
+This topic provides a brief overview about the X12 997 ACK, including the 997 ACK segments in an interchange and the error codes used in those segments. For other related information, review the following documentation:
+
+* [X12 TA1 technical acknowledgments and error codes](logic-apps-enterprise-integration-x12-ta1-acknowledgment.md)
+* [Exchange X12 messages for B2B enterprise integration](logic-apps-enterprise-integration-x12.md)
+* [Exchange EDIFACT messages for B2B enterprise integration](logic-apps-enterprise-integration-edifact.md)
+* [What is Azure Logic Apps](logic-apps-overview.md)
+* [B2B enterprise integration solutions with Azure Logic Apps and Enterprise Integration Pack](logic-apps-enterprise-integration-overview.md)
+
+<a name="997-ack-segments"></a>
+
+## 997 ACK segments
+
+The following table describes the 997 ACK segments in an interchange and uses the following definitions:
+
+* M = Mandatory
+* O = Optional
+
+| Position | Segment ID | Name | Required designation <br>(Req. Des.) | Maximum use | Loop repeat |
+|-|||--|-|-|
+| 010 | ST | Transaction Set Header, for the acknowledgment | M | 1 | - |
+| 020 | AK1 | Functional Group Response Header | M | 1 | - |
+| 030 | AK2 | Transaction Set Response Header | O | 1 | 999999 <br>(Loop ID = AK2) |
+| 040 | AK3 | Data Segment Note | O | 1 | 999999 <br>(Loop ID = AK2 or AK3) |
+| 050 | AK4 | Data Element Note | O | 99 | - |
+| 060 | AK5 | Transaction Set Response Trailer | M | 1 | - |
+| 070 | AK9 | Functional Group Response Trailer | M | 1 | - |
+| 080 | SE | Transaction Set Trailer, for the acknowledgment | M | 1 | - |
+|||||||
+
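+As an illustration only, a minimal 997 transaction set that acknowledges and accepts a single received 850 purchase order, with the AK2 loop included, might look like the following example. The control numbers and functional identifier code shown here are placeholder values:
+
+```
+ST*997*0001~
+AK1*PO*1421~
+AK2*850*000000001~
+AK5*A~
+AK9*A*1*1*1~
+SE*6*0001~
+```
+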
+The following sections provide more information about each AK segment. In the AK2 to AK5 loop, the segments provide information about an error with a transaction set.
+
+### AK1
+
+The mandatory AK1 segment identifies the functional group to acknowledge by using the following data elements:
+
+| Element | Description |
+||-|
+| AK101 | Mandatory, identifies the functional group ID (GS01) for the functional group to acknowledge. |
+| AK102 | Mandatory, identifies the group control number (GS06 and GE02) for the functional group to acknowledge. |
+| AK103 | Optional, identifies the EDI implementation version sent in the GS08 from the original transaction. AK103 supports an inbound 5010-compliant 997 ACK. |
+|||
+
+### AK2
+
+The optional AK2 segment contains an acknowledgment for a transaction set in the received functional group. If multiple AK2 segments exist, they're sent as a series of loops. Each AK2 loop identifies a transaction set using the order received. If a transaction set is in error, an AK2 loop contains AK3, AK4, and AK5 segments. For more information, review the segment descriptions later in this topic.
+
+The AK2 segment identifies the transaction set by using the following data elements:
+
+| Element | Description |
+||-|
+| AK201 | Mandatory, identifies the transaction set ID (ST01) of the transaction set to acknowledge. |
+| AK202 | Mandatory, identifies the transaction set control number (ST02 and SE02) of the transaction set to acknowledge. |
+| AK203 | Optional, identifies the EDI implementation version sent in the ST03 of the original transaction. AK203 supports inbound 5010-compliant 997. |
+|||
+
+#### Generate AK2 segments
+
+You can specify that AK2 segments are generated for *all* accepted and rejected transaction sets, or *only* for rejected transaction sets. By default, Azure Logic Apps generates AK2 loops *only* for rejected transaction sets. If an agreement doesn't resolve for the interchange being responded to, the 997 generation settings default to the fallback agreement settings, and AK2 segments are not generated for accepted transaction sets.
+
+To have Azure Logic Apps generate AK2 segments for accepted transaction sets where AK501 == A, follow these steps:
+
+1. In the Azure portal, open your integration account, and then open the X12 agreement artifact between your X12 trading partners.
+
+1. Open the **Receive Settings** pane, and make sure that **FA Expected** is selected. You can then select **Include AK2 / IK2 Loop**.
+
+### AK3
+
+The optional AK3 segment reports errors in a data segment and identifies the location of the data segment. An AK3 segment is created for each segment in a transaction set that has one or more errors. If multiple AK3 segments exist, they're sent as a series of loops with one segment per loop. The AK3 segment specifies the location of each segment in error and reports the type of syntactical error found at that location by using the following data elements:
+
+| Element | Description |
+||-|
+| AK301 | Mandatory, identifies the segment in error with the X12 segment ID, for example, NM1. |
+| AK302 | Mandatory, identifies the segment count of the segment in error. The ST segment is `1`, and each segment increments the segment count by one. |
+| AK303 | Mandatory, identifies a bounded loop, which is a loop surrounded by a Loop Start (LS) segment and a Loop End (LE) segment. AK303 contains the values of the LS and LE segments that bound the segment in error. |
+| AK304 | Optional, specifies the code for the error in the data segment. Although AK304 is optional, the element is required when an error exists for the identified segment. For AK304 error codes, review [997 ACK error codes - Data Segment Note](#997-ack-error-codes). |
+|||
+
+### AK4
+
+The optional AK4 segment reports errors in a data element or composite data structure, and identifies the location of the data element. An AK4 segment is sent when the AK304 data element is `"8", "Segment has data element errors"` and can repeat up to 99 times within each AK3 segment. The AK4 segment specifies the location of each data element or composite data structure in error and reports the type of syntactical error found at that location by using the following data elements:
+
+| Element | Description |
+||-|
+| AK401 | Mandatory, a composite data element with the following fields: AK401.1, AK401.2, and AK401.3 <p><p>- AK401.1: Identifies the data element or composite data structure in error using its numerical count. For example, if the second data element in the segment has an error, AK401.1 equals `2`. <br>- AK401.2: Identifies the numerical count of the component data element in a composite data structure that has an error. When AK401 reports an error on a data structure that is not composite, AK401.2 is not valued. <br>- AK401.3: Optional, this field is the repeating data element position. AK401.3 supports an inbound 5010-compliant 997. |
+| AK402 | Optional, identifies the simple X12 data element number of the element in error. For example, NM101 is the simple X12 data element number 98. |
+| AK403 | Mandatory, reports the error of the identified element. For AK403 error codes, review [997 ACK error codes - Data Element Note](#997-ack-error-codes). |
+| AK404 | Optional, contains a copy of the identified data element in error. AK404 is not used if the error indicates an invalid character. |
+|||
+
+### AK5
+
+The AK5 segment reports whether the transaction set identified in the AK2 segment is accepted or rejected and why. The AK5 segment is mandatory when the optional AK2 loop is included in the acknowledgment. The AK5 segment specifies the status of the transaction set by using a single mandatory data element and provides error codes by using one to five optional data elements, based on the syntax editing of the transaction set.
+
+| Element | Description |
+||-|
+| AK501 | Mandatory, specifies whether the identified transaction set is accepted or rejected. For AK501 error codes, review [997 ACK error codes - Transaction Set Response Trailer](#997-ack-error-codes). |
+| AK502 - AK506 | Optional, indicate the nature of the error. For AK502 error codes, review [997 ACK error codes - Transaction Set Response Trailer](#997-ack-error-codes). |
+|||
+
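+For illustration, an AK2 loop that rejects a transaction set because of a data element error might combine the AK2, AK3, AK4, and AK5 segments as in the following example. The segment position, element numbers, and error codes are placeholder values:
+
+```
+AK2*850*000000002~
+AK3*NM1*7**8~
+AK4*1*98*1~
+AK5*R*5~
+```
+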
+### AK9
+
+The mandatory AK9 segment indicates whether the functional group identified in the AK1 segment is accepted or rejected and why. The AK9 segment specifies the status of the functional group and the nature of any error by using four mandatory data elements. The segment specifies any noted errors by using one to five optional elements.
+
+| Element | Description |
+||-|
+| AK901 | Mandatory, specifies whether the functional group identified in AK1 is accepted or rejected. For AK901 error codes, review [997 ACK error codes - Functional Group Response Trailer](#997-ack-error-codes). |
+| AK902 | Mandatory, specifies the number of transaction sets included in the identified functional group trailer (GE01). |
+| AK903 | Mandatory, specifies the number of transaction sets received. |
+| AK904 | Mandatory, specifies the number of transaction sets accepted in the identified functional group. |
+| AK905 - AK909 | Optional, indicates from one to five errors noted in the identified functional group. For AK905 to AK909 error codes, review [997 ACK error codes - Functional Group Response Trailer](#997-ack-error-codes). |
+|||
+
+<a name="997-ack-error-codes"></a>
+
+## 997 ACK error codes
+
+This section covers the error codes used in [997 ACK segments](#997-ack-segments). Each table lists the supported and unsupported error codes, as defined by the X12 specification, for X12 message processing in Azure Logic Apps.
+
+### AK304 error codes - Data Segment Note
+
+The following table lists the error codes used in the AK304 data element of the AK3 segment (Data Segment Note):
+
+| Error code | Condition | Supported? |
+||--||
+| 1 | Unrecognized segment ID | Yes |
+| 2 | Unexpected segment | Yes |
+| 3 | Mandatory segment missing | Yes |
+| 4 | Loop occurs over maximum times | Yes |
+| 5 | Segment exceeds maximum use | Yes |
+| 6 | Segment not in defined transaction set | Yes |
+| 7 | Segment not in proper sequence | Yes |
+| 8 | Segment has data element errors | Yes |
+| 511 | Trailing separators encountered (custom code) | Yes |
+||||
+
+### AK403 error codes - Data Element Note
+
+The following table lists the error codes used in the AK403 data element of the AK4 segment (Data Element Note):
+
+| Error code | Condition | Supported? |
+||--||
+| 1 | Mandatory data element missing | Yes |
+| 2 | Conditional required data element missing | Yes |
+| 3 | Too many data elements | Yes |
+| 4 | Data element is too short | Yes |
+| 5 | Data element is too long | Yes |
+| 6 | Invalid character in data element | Yes |
+| 7 | Invalid code value | Yes |
+| 8 | Invalid date | Yes |
+| 9 | Invalid time | Yes |
+| 10 | Exclusion condition violated | Yes |
+||||
+
+### AK501 error codes - Transaction Set Response Trailer
+
+The following table lists the error codes used in the AK501 data element of the AK5 segment (Transaction Set Response Trailer):
+
+| Error code | Condition | Supported? |
+||--||
+| A | Accepted | Yes |
+| E | Accepted but errors were noted | Yes <p><p>**Note**: No error codes lead to a status of `E`. |
+| M | Rejected, message authentication code (MAC) failed | No |
+| P | Partially accepted, at least one transaction set was rejected | Yes |
+| R | Rejected | Yes |
+| W | Rejected, assurance failed validity tests | No |
+| X | Rejected, content after decryption could not be analyzed | No |
+||||
+
+### AK502 to AK506 error codes - Transaction Set Response Trailer
+
+The following table lists the error codes used in the AK502 to AK506 data elements of the AK5 segment (Transaction Set Response Trailer):
+
+| Error code | Condition | Supported or <br>correlated with AK501? |
+||--|--|
+| 1 | Transaction set not supported | Yes, R |
+| 2 | Transaction set trailer missing | Yes, R |
+| 3 | Transaction set control number in header and trailer do not match | Yes, R |
+| 4 | Number of included segments does not match actual count | Yes, R |
+| 5 | One or more segments in error | Yes, R |
+| 6 | Missing or invalid transaction set identifier | Yes, R |
+| 7 | Missing or invalid transaction set control number, a duplicate transaction number may have occurred | Yes, R |
+| 8 through 27 | - | No |
+||||
+
+### AK901 error codes - Functional Group Response Trailer
+
+The following table lists the error codes used in the AK901 data elements of the AK9 segment (Functional Group Response Trailer):
+
+| Error code | Condition | Supported or <br>correlated with AK501? |
+||--|--|
+| A | Accepted | Yes |
+| E | Accepted, but errors were noted | Yes |
+| M | Rejected, message authentication code (MAC) failed | No |
+| P | Partially accepted, at least one transaction set was rejected | Yes |
+| R | Rejected | Yes |
+| W | Rejected, assurance failed validity tests | No |
+| X | Rejected, content after decryption could not be analyzed | No |
+||||
+
+### AK905 to AK909 error codes - Functional Group Response Trailer
+
+The following table lists the error codes used in the AK905 to AK909 data elements of the AK9 segment (Functional Group Response Trailer):
+
+| Error code | Condition | Supported or <br>correlated with AK501? |
+||--|--|
+| 1 | Functional group not supported | No |
+| 2 | Functional group version not supported | No |
+| 3 | Functional group trailer missing | Yes |
+| 4 | Group control number in the functional group header and trailer do not agree | Yes |
+| 5 | Number of included transaction sets does not match actual count | Yes |
+| 6 | Group control number violates syntax, a duplicate group control number may have occurred | Yes |
+| 7 to 26 | - | No |
+||||
+
+## Next steps
+
+* [Exchange X12 messages for B2B enterprise integration](logic-apps-enterprise-integration-x12.md)
logic-apps Logic Apps Enterprise Integration X12 Ta1 Acknowledgment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-enterprise-integration-x12-ta1-acknowledgment.md
+
+ Title: X12 TA1 acknowledgments and error codes
+description: Learn about TA1 technical acknowledgments and error codes used for X12 messages in Azure Logic Apps.
+
+ms.suite: integration
++++ Last updated : 07/15/2021++
+# TA1 technical acknowledgments and error codes for X12 messages in Azure Logic Apps
+
+In Azure Logic Apps, you can create workflows that handle X12 messages for Electronic Data Interchange (EDI) communication when you use **X12** operations. In EDI messaging, acknowledgments provide the status from processing an EDI interchange. When receiving an interchange, the [**X12 Decode** action](logic-apps-enterprise-integration-x12-decode.md) can return one or more types of acknowledgments to the sender, based on which acknowledgment types are enabled and the specified level of validation.
+
+For example, the receiver reports the status from validating the Interchange Control Header (ISA) and Interchange Control Trailer (IEA) in the received X12-encoded message by sending a *TA1 technical acknowledgment (ACK)*. If this header and trailer are valid, the receiver sends a positive TA1 ACK, no matter the status of other content. If the header and trailer aren't valid, the receiver sends a **TA1 ACK** with an error code instead.
+
+The X12 TA1 ACK conforms to the schema for **X12_<*version number*>_TA1.xsd**. The receiver sends the TA1 ACK in an ISA and IEA envelope. However, this ISA and IEA envelope is no different than in any other interchange.
+
+This topic provides a brief overview about the X12 TA1 ACK, including the TA1 ACK segments in an interchange and the error codes used in those segments. For other related information, review the following documentation:
+
+* [X12 997 functional acknowledgments and error codes](logic-apps-enterprise-integration-x12-997-acknowledgment.md)
+* [Exchange X12 messages for B2B enterprise integration](logic-apps-enterprise-integration-x12.md)
+* [Exchange EDIFACT messages for B2B enterprise integration](logic-apps-enterprise-integration-edifact.md)
+* [What is Azure Logic Apps](logic-apps-overview.md)
+* [B2B enterprise integration solutions with Azure Logic Apps and Enterprise Integration Pack](logic-apps-enterprise-integration-overview.md)
+
+<a name="ta1-ack-segments"></a>
+
+## TA1 ACK segments
+
+The following table describes the TA1 ACK segments in an interchange:
+
+| TA1 field | Field name | Mapped to incoming interchange | Value |
+|--||--|-|
+| TA101 | Interchange control number | ISA13 - Interchange control number | - |
+| TA102 | Interchange Date | ISA09 - Interchange Date | - |
+| TA103 | Interchange Time | ISA10 - Interchange Time | - |
+| TA104 | Interchange ACK Code* | N/A | * Engine behavior is based on data element validation with the exception of security and authentication information, which are based on string comparisons in the configuration information. <p>The engine behavior (TA104) value is A, E, or R, based on the following definitions: <p><p>A = Accept <br>E = Interchange accepted with errors <br>R = Interchange rejected or suspended. <p><p>For more information, review [TA1 ACK error codes](#ta1-ack-error-codes). |
+| TA105 | Interchange Note Code | N/A | Processing result error code. For more information, review [TA1 ACK error codes](#ta1-ack-error-codes). |
+|||||
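+
+For illustration, a TA1 segment that accepts an interchange with control number 000000001 might look like the following example, where the date (YYMMDD), time (HHMM), and control number are placeholder values:
+
+```
+TA1*000000001*210715*1215*A*000~
+```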
+
+<a name="ta1-ack-error-codes"></a>
+
+## TA1 ACK error codes
+
+This section covers the error codes used in [TA1 ACK segments](#ta1-ack-segments). The following table lists supported and unsupported error codes, as defined by the X12 specification, for X12 message processing in Azure Logic Apps. In the **Engine behavior** column, the TA104 values have the following definitions:
+
+* A = Accept
+* E = Interchange accepted with errors
+* R = Interchange rejected or suspended
+
+| Condition | Engine behavior <br>(TA104 value) | TA105 value | Supported? |
+|--|--|-||
+| Success | A | 000 | Yes |
+| The Interchange Control Numbers in the header ISA 13 and trailer IEA02 do not match | E | 001 | Yes |
+| Standard in ISA11 (Control Standards) is not supported | E | 002 | Yes, if an ID mismatch exists. |
+| Version of the controls is not supported | E | 003 | No, error code 017 is used instead. |
+| Segment Terminator is Invalid* <p><p>* The segment terminator can have the following valid combinations: <p><p>- Segment Terminator char only. <br>- Segment Terminator character followed by suffix 1 and suffix 2. | R | 004 | Yes |
+| Invalid Interchange ID Qualifier for Sender | R | 005 | Yes, if an ID mismatch exists. |
+| Invalid Interchange Sender ID | E | 006 | Yes, if receiving an interchange on a receive port that requires authentication. <p><p>**Note**: Sender ID-related properties are reviewed. If these properties are inconsistent, or if party settings are unavailable due to not being set, the interchange is rejected. |
+| Invalid Interchange ID Qualifier for Receiver | R | 007 | Yes, if an ID mismatch exists. |
+| Invalid Interchange Receiver ID | E | 008 | No* <p><p>* Supported if receiving an interchange on a receive port that requires authentication. Sender ID-related properties are reviewed. If these properties are inconsistent, or if party settings are unavailable due to not being set, the interchange is rejected. |
+| Unknown Interchange Receiver ID | E | 009 | Yes |
+| Invalid Authorization Information Qualifier value | R | 010 | Yes, if an ID mismatch exists. |
+| Invalid Authorization Information value | R | 011 | Yes, if party is set up or valued. |
+| Invalid Security Information Qualifier value | R | 012 | Yes, if an ID mismatch exists. |
+| Invalid Security Information value | R | 013 | Yes, if party is set up or valued. |
+| Invalid Interchange Date value | R | 014 | Yes |
+| Invalid Interchange Time value | R | 015 | Yes |
+| Invalid Interchange Standards Identifier value | R | 016 | Yes |
+| Invalid Interchange Version ID value | R | 017 | Yes, indicating that the enum value is not valid. |
+| Invalid Interchange Control Number value | R | 018 | Yes |
+| Invalid Acknowledgment Requested value | E | 019 | Yes |
+| Invalid Test Indicator value | E | 020 | Yes |
+| Invalid Number of Included Groups value | E | 021 | Yes |
+| Invalid Control Structure | R | 022 | Yes |
+| Improper (Premature) End-of-File (Transmission) | R | 023 | Yes |
+| Invalid Interchange Content, for example, Invalid GS segment | R | 024 | Yes |
+| Duplicate Interchange Control Number | R, based on settings | 025 | Yes |
+| Invalid Data Element Separator | R | 026 | Yes |
+| Invalid Component Element Separator | R | 027 | Yes |
+| Invalid Delivery Date in Deferred Delivery Request | - | - | No |
+| Invalid Delivery Time in Deferred Delivery Request | - | - | No |
+| Invalid Delivery Time Code in Deferred Delivery Request | - | - | No |
+| Invalid Grade of Service | - | - | No |
+|||||
+
+## Next steps
+
+* [Exchange X12 messages for B2B enterprise integration](logic-apps-enterprise-integration-x12.md)
logic-apps Logic Apps Enterprise Integration X12 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-enterprise-integration-x12.md
Title: Send and receive X12 messages for B2B
-description: Exchange X12 messages for B2B enterprise integration scenarios by using Azure Logic Apps with Enterprise Integration Pack
+ Title: Exchange X12 messages for B2B integration
+description: Send, receive, and process X12 messages when building B2B enterprise integration solutions with Azure Logic Apps and the Enterprise Integration Pack.
ms.suite: integration -- Previously updated : 04/29/2020++ Last updated : 07/16/2021
-# Exchange X12 messages for B2B enterprise integration in Azure Logic Apps with Enterprise Integration Pack
+# Exchange X12 messages for B2B enterprise integration using Azure Logic Apps and Enterprise Integration Pack
-To work with X12 messages in Azure Logic Apps, you can use the X12 connector, which provides triggers and actions for managing X12 communication. For information about EDIFACT messages instead, see [Exchange EDIFACT messages](logic-apps-enterprise-integration-edifact.md).
+In Azure Logic Apps, you can create workflows that work with X12 messages by using **X12** operations. These operations include triggers and actions that you can use in your workflow to handle X12 communication. You can add X12 triggers and actions in the same way as any other trigger and action in a workflow, but you need to meet extra prerequisites before you can use X12 operations.
+
+This article describes the requirements and settings for using X12 triggers and actions in your workflow. If you're looking for EDIFACT messages instead, review [Exchange EDIFACT messages](logic-apps-enterprise-integration-edifact.md). If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overview.md) and [Quickstart: Create an integration workflow with multi-tenant Azure Logic Apps and the Azure portal](quickstart-create-first-logic-app-workflow.md).
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* A logic app resource and workflow where you want to use an X12 trigger or action. To use an X12 trigger, you need a blank workflow. To use an X12 action, you need a workflow that has an existing trigger.
+
+* An [integration account](logic-apps-enterprise-integration-create-integration-account.md) that's linked to your logic app resource. Both your logic app and integration account have to use the same Azure subscription and exist in the same Azure region or location.
+
+ Your integration account also needs to include the following B2B artifacts:
+
+ * At least two [trading partners](logic-apps-enterprise-integration-partners.md) that use the X12 identity qualifier.
-* The logic app from where you want to use the X12 connector and a trigger that starts your logic app's workflow. The X12 connector provides only actions, not triggers. If you're new to logic apps, review [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md) and [Quickstart: Create your first logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
+ * An X12 [agreement](logic-apps-enterprise-integration-agreements.md) defined between your trading partners. For information about settings to use when receiving and sending messages, review [Receive Settings](#receive-settings) and [Send Settings](#send-settings).
-* An [integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md) that's associated with your Azure subscription and linked to the logic app where you plan to use the X12 connector. Both your logic app and integration account must exist in the same location or Azure region.
+ > [!IMPORTANT]
+ > If you're working with Health Insurance Portability and Accountability Act (HIPAA) schemas, you have to add a
+ > `schemaReferences` section to your agreement. For more information, review [HIPAA schemas and message types](#hipaa-schemas).
-* At least two [trading partners](../logic-apps/logic-apps-enterprise-integration-partners.md) that you've already defined in your integration account by using the X12 identity qualifier.
+ * The [schemas](logic-apps-enterprise-integration-schemas.md) to use for XML validation.
-* The [schemas](../logic-apps/logic-apps-enterprise-integration-schemas.md) to use for XML validation that you've already added to your integration account. If you're working with Health Insurance Portability and Accountability Act (HIPAA) schemas, see [HIPAA schemas](#hipaa-schemas).
+ > [!IMPORTANT]
+ > If you're working with Health Insurance Portability and Accountability Act (HIPAA) schemas, make sure to review [HIPAA schemas and message types](#hipaa-schemas).
-* Before you can use the X12 connector, you must create an X12 [agreement](../logic-apps/logic-apps-enterprise-integration-agreements.md) between your trading partners and store that agreement in your integration account. If you're working with Health Insurance Portability and Accountability Act (HIPAA) schemas, you need to add a `schemaReferences` section to your agreement. For more information, see [HIPAA schemas](#hipaa-schemas).
+## Connector reference
+
+For more technical information about this connector, such as triggers, actions, and limits as described by the connector's Swagger file, see the [connector's reference page](/connectors/x12/).
+
+> [!NOTE]
+> For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
+> this connector's ISE-labeled version uses the [B2B message limits for ISE](../logic-apps/logic-apps-limits-and-config.md#b2b-protocol-limits).
<a name="receive-settings"></a> ## Receive Settings
-After you set the agreement properties, you can configure how this agreement identifies and handles inbound messages that you receive from your partner through this agreement.
+After you set the properties in your trading partner agreement, you can configure how this agreement identifies and handles inbound messages that you receive from your partner through this agreement.
1. Under **Add**, select **Receive Settings**.
-1. Configure these properties based on your agreement with the partner that exchanges messages with you. The **Receive Settings** are organized into these sections:
+1. Based on the agreement with the partner that exchanges messages with you, set the properties in the **Receive Settings** pane, which is organized into the following sections:
* [Identifiers](#inbound-identifiers) * [Acknowledgement](#inbound-acknowledgement)
After you set the agreement properties, you can configure how this agreement ide
* [Validations](#inbound-validations) * [Internal Settings](#inbound-internal-settings)
- For property descriptions, see the tables in this section.
- 1. When you're done, make sure to save your settings by selecting **OK**. <a name="inbound-identifiers"></a>
After you set the agreement properties, you can configure how this agreement ide
| Property | Description | |-|-| | **TA1 Expected** | Return a technical acknowledgment (TA1) to the interchange sender. <p>This setting specifies that the host partner, who is sending the message, requests an acknowledgment from the guest partner in the agreement. These acknowledgments are expected by the host partner based on the agreement's Receive Settings. |
-| **FA Expected** | Return a functional acknowledgment (FA) to the interchange sender. For the **FA Version** property, based on the schema version, select the 997 or 999 acknowledgements. <p>This settings specifies that the host partner, who is sending the message, requests an acknowledgement from the guest partner in the agreement. These acknowledgments are expected by the host partner based on the agreement's Receive Settings. |
+| **FA Expected** | Return a functional acknowledgment (FA) to the interchange sender. For the **FA Version** property, based on the schema version, select the 997 or 999 acknowledgements. <p>This setting specifies that the host partner, who is sending the message, requests an acknowledgement from the guest partner in the agreement. These acknowledgments are expected by the host partner based on the agreement's Receive Settings. |
||| <a name="outbound-schemas"></a>
For this section, select a [schema](../logic-apps/logic-apps-enterprise-integrat
The **Default** row shows the character set that's used as delimiters for a message schema. If you don't want to use the **Default** character set, you can enter a different set of delimiters for each message type. After you complete each row, a new empty row automatically appears. > [!TIP]
-> To provide special character values, edit the agreement as JSON
-> and provide the ASCII value for the special character.
+> To provide special character values, edit the agreement as JSON and provide the ASCII value for the special character.
| Property | Description | |-|-|
The **Default** row shows the validation rules that are used for an EDI message
## HIPAA schemas and message types
-When you work with HIPAA schemas and the 277 or 837 message types, you need to perform a few extra steps. The [document version numbers (GS8)](#outbound-control-version-number) for these message types have more than 9 characters, for example, "005010X222A1". Also, some document version numbers map to variant message types. If you don't reference the correct message type in your schema and in your agreement, you get this error message:
+When you work with HIPAA schemas and the 277 or 837 message types, you need to perform a few extra steps. The [document version numbers (GS8)](#outbound-control-version-number) for these message types have more than nine characters, for example, "005010X222A1". Also, some document version numbers map to variant message types. If you don't reference the correct message type in your schema and in your agreement, you get this error message:
`"The message has an unknown document type and did not resolve to any of the existing schemas configured in the agreement."`
To specify these document version numbers and message types, follow these steps:
1. In the Azure portal, go to your integration account. Find and download your schema. Replace the message type and rename the schema file, and upload your revised schema to your integration account. For more information, see [Edit schemas](../logic-apps/logic-apps-enterprise-integration-schemas.md#edit-schemas).
- 1. In your your agreement's message settings, select the revised schema.
+ 1. In your agreement's message settings, select the revised schema.
1. In your agreement's `schemaReferences` object, add another entry that specifies the variant message type that matches your document version number.
To specify these document version numbers and message types, follow these steps:
![Disable validation for all message types or each message type](./media/logic-apps-enterprise-integration-x12/x12-disable-validation.png)
-## Connector reference
-
-For additional technical details about this connector, such as actions and limits as described by the connector's Swagger file, see the [connector's reference page](/connectors/x12/).
-
-> [!NOTE]
-> For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
-> this connector's ISE-labeled version uses the [B2B message limits for ISE](../logic-apps/logic-apps-limits-and-config.md#b2b-protocol-limits).
- ## Next steps
-* Learn about other [connectors for Logic Apps](../connectors/apis-list.md)
+* [X12 TA1 technical acknowledgments and error codes](logic-apps-enterprise-integration-x12-ta1-acknowledgment.md)
+* [X12 997 functional acknowledgments and error codes](logic-apps-enterprise-integration-x12-997-acknowledgment.md)
+* [About connectors in Azure Logic Apps](../connectors/apis-list.md)
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-limits-and-config.md
Before you set up your firewall with IP addresses, review these considerations:
* If you're using [Power Automate](/power-automate/getting-started), some actions, such as **HTTP** and **HTTP + OpenAPI**, go directly through the Azure Logic Apps service and come from the IP addresses that are listed here. For more information about the IP addresses used by Power Automate, see [Limits and configuration for Power Automate](/flow/limits-and-config#ip-address-configuration).
-* For [Azure China 21Vianet](/azure/chin), such as Azure Storage, SQL Server, Office 365 Outlook, and so on.
+* For [Microsoft Azure operated by 21Vianet](/azure/china/), review the [documentation version for Azure operated by 21Vianet](https://docs.azure.cn/en-us/logic-apps/logic-apps-limits-and-config#firewall-ip-configuration).
* If your logic app workflows run in single-tenant Azure Logic Apps, you need to find the fully qualified domain names (FQDNs) for your connections. For more information, review the corresponding sections in these topics:
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-endpoints.md
+ Last updated 06/17/2021 #Customer intent: As an MLOps administrator, I want to understand what a managed endpoint is and why I need it.
machine-learning Reference Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/reference-known-issues.md
authentication), you will not be given a password. However, in some scenarios, s
a password. Run `sudo passwd <user_name>` to create a new password for a certain user. With `sudo passwd`, you can create a new password for the root user.
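For example, assuming an account named `azureuser` (a hypothetical name; substitute the actual account on your DSVM):

```bash
# Set a password for the azureuser account (hypothetical user name)
sudo passwd azureuser

# Set a password for the root account itself
sudo passwd
```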
-Running these command will not change the configuration of SSH, and allowed login mechanisms will be kept the same.
+Running these commands will not change the configuration of SSH, and the allowed sign-in mechanisms will be kept the same.
### Prompted for password when running sudo command When running a `sudo` command on an Ubuntu machine, you might be asked to enter your password again and again to confirm
-that you are really the user who is logged in. This is expected behavior and the default in Linux systems such as
-Ubuntu. However, in some scenarios, a repeated authentication is not necessary and rather annoying.
+that you are really the user who is logged in. This behavior is expected, and it is the default in Ubuntu. However, in some scenarios, repeated authentication is unnecessary and rather annoying.
-To disable re-authentication for most cases, you can run the following command in a terminal.
+To disable reauthentication for most cases, you can run the following command in a terminal.
`echo -e "\n$USER ALL=(ALL) NOPASSWD: ALL\n" | sudo tee -a /etc/sudoers`
In order to use docker as a non-root user, your user needs to be member of the d
### Docker containers cannot interact with the outside via network By default, docker adds new containers to the so-called "bridge network", which is `172.17.0.0/16`. If the subnet of
-that bridge network overlaps with the subnet the DSVM is in, no network communication between the host and the container
-is possible. In that case, for instance, web applications running in the container cannot be reached, and the container
-cannot update packages from apt.
+that bridge network overlaps with the subnet of your DSVM or with another private subnet you have in your subscription,
+no network communication between the host and the container is possible. In that case, web applications running in the container cannot be reached, and the container cannot update packages from apt.
-To fix the issue, you can change the default subnet for containers in the bridge network. By adding
+To fix the issue, you need to reconfigure docker to use an IP address space for its bridge network that does not overlap
+with other networks of your subscription. For example, by adding
```json "default-address-pools": [ {
- "base": "172.18.0.0/16",
- "size": 24
+ "base": "10.255.248.0/21",
+ "size": 21
} ] ``` to the JSON document contained in file `/etc/docker/daemon.json`, docker will assign another subnet to the bridge
-network, and the conflict should be resolved. (The file needs to be edited using sudo, eg. by running
-`sudo nano /etc/docker/daemon.json`.)
+network. (The file needs to be edited using sudo, for example by running `sudo nano /etc/docker/daemon.json`.)
After the change, the docker service needs to be restarted by running `service docker restart`. To check if your changes have taken effect, you can run `docker network inspect bridge`. The value under *IPAM.Config.Subnet* should correspond to the address pool specified above.
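Putting the steps together, the workflow might look like the following sketch (the configuration file may need to be created if it doesn't exist yet):

```bash
# Edit the docker daemon configuration with elevated permissions
sudo nano /etc/docker/daemon.json

# Restart the docker service so the new default-address-pools setting takes effect
sudo service docker restart

# Verify that the bridge network now uses the new subnet
docker network inspect bridge
```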
+### GPU(s) not available in docker container
+
+The docker installed on the DSVM supports GPUs by default. However, there are a few prerequisites that must be met.
+
+* The VM size of the DSVM has to include at least one GPU.
+* When starting your docker container with `docker run`, you need to add a *--gpus* parameter, for example, `--gpus all` (see the example after this list).
+* VM sizes that include NVIDIA A100 GPUs need additional software packages installed, especially the
+[NVIDIA Fabric Manager](https://docs.nvidia.com/datacenter/tesla/pdf/fabric-manager-user-guide.pdf). These packages
+might not be pre-installed in your image yet.
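+
+For example, a quick way to verify GPU access from inside a container is to run `nvidia-smi` in a CUDA base image (a sketch; the image tag is only an example):
+
+```bash
+# Expose all GPUs to the container and print their status with nvidia-smi
+docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
+```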
+ ## Windows ### Accessing SQL Server When you try to connect to the pre-installed SQL Server instance, you might encounter a "login failed" error. To
-successfully connect to the SQL Server instance, you need to run the program you are connecting with, eg. SQL Server
+successfully connect to the SQL Server instance, you need to run the program you are connecting with, for example, SQL Server
Management Studio (SSMS), in administrator mode. The administrator mode is required because, by the DSVM's default configuration, only administrators are allowed to connect.
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-azureml-behind-firewall.md
Previously updated : 07/20/2021 Last updated : 07/21/2021
For information on configuring UDR, see [Route network traffic with a routing ta
| Service tag | Protocol | Port | | -- |:--:|:--:| | AzureActiveDirectory | TCP | * |
- | AzureMachineLearning | TCP | * |
- | AzureResourceManager | TCP | * |
+ | AzureMachineLearning | TCP | 443 |
+ | AzureResourceManager | TCP | 443 |
| Storage.region | TCP | 443 |
- | AzureFrontDoor.FirstParty | TCP | 443 |
+ | AzureFrontDoor.FrontEnd</br>* Not needed in Azure China. | TCP | 443 |
| ContainerRegistry.region | TCP | 443 | | MicrosoftContainerRegistry.region | TCP | 443 |
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-cli.md
Last updated 05/25/2021 -+ # Install, set up, and use the 2.0 CLI (preview)
The new Machine Learning extension **requires Azure CLI version `>=2.15.0`**. En
az version ```
-If it is not, [upgrade your Azure CLI](/cli/azure/update-azure-cli).
+If it isn't, [upgrade your Azure CLI](/cli/azure/update-azure-cli).
-Check the Azure CLI extensions you have installed:
+Check the Azure CLI extensions you've installed:
:::code language="azurecli" source="~/azureml-examples-main/cli/misc.sh" id="az_extension_list":::
You can upgrade the extension to the latest version:
:::code language="azurecli" source="~/azureml-examples-main/cli/misc.sh" id="az_ml_update":::
+### Installation on Linux
+
+If you're using Linux, the fastest way to install the necessary CLI version and the Machine Learning extension is:
++
+For more, see [Install the Azure CLI for Linux](https://docs.microsoft.com/cli/azure/install-azure-cli-linux).
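+
+For example, on a Debian or Ubuntu machine the steps are roughly the following (a sketch, not the exact snippet referenced above; see the linked article for other distributions):
+
+```bash
+# Install or upgrade the Azure CLI using the Debian/Ubuntu convenience script
+curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
+
+# Add the Machine Learning 2.0 CLI extension (preview)
+az extension add -n ml -y
+```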
+ ## Set up Login:
machine-learning How To Connect Data Ui https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-connect-data-ui.md
To create a dataset in the studio:
1. Select **Tabular** or **File** for Dataset type. 1. Select **Next** to open the **Datastore and file selection** form. On this form you select where to keep your dataset after creation, as well as select what data files to use for your dataset. 1. Enable skip validation if your data is in a virtual network. Learn more about [virtual network isolation and privacy](how-to-enable-studio-virtual-network.md).
- 1. For Tabular datasets, you can specify a 'timeseries' trait to enable time related operations on your dataset. Learn how to [add the timeseries trait to your dataset](how-to-monitor-datasets.md#studio-dataset).
-1. Select **Next** to populate the **Settings and preview** and **Schema** forms; they are intelligently populated based on file type and you can further configure your dataset prior to creation on these forms. You can also indicate on this form if your data contains multi-line data.
+
+1. Select **Next** to populate the **Settings and preview** and **Schema** forms; they are intelligently populated based on file type and you can further configure your dataset prior to creation on these forms.
+ 1. On the Settings and preview form, you can indicate if your data contains multi-line data.
+ 1. On the Schema form, you can specify that your TabularDataset has a time component by selecting type: **Timestamp** for your date or time column.
+ 1. If your data is formatted into subsets, for example time windows, and you want to use those subsets for training, select type **Partition timestamp**. Doing so enables timeseries operations on your dataset. Learn more about how to [leverage partitions in your dataset for training](how-to-monitor-datasets.md?tabs=azure-studio#create-target-dataset).
1. Select **Next** to review the **Confirm details** form. Check your selections and create an optional data profile for your dataset. Learn more about [data profiling](#profile). 1. Select **Create** to complete your dataset creation.
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-manage-compute-instance.md
You can also use the following environment variables in your script:
3. CI_NAME 4. CI_LOCAL_UBUNTU_USER. This points to azureuser
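For example, a setup script might use these variables as follows (a minimal sketch; the package installed is only illustrative):

```bash
#!/bin/bash
# Log which compute instance is being configured and for which user
echo "Configuring compute instance ${CI_NAME} for ${CI_LOCAL_UBUNTU_USER}"

# Install an extra Python package (illustrative only)
pip install --upgrade pandas
```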
+You can use a setup script in conjunction with Azure Policy to either enforce or provide a default setup script for every compute instance creation.
+ ### Use the script in the studio Once you store the script, specify it during creation of your compute instance:
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-custom-container.md
Last updated 06/16/2021 -+ # Deploy a TensorFlow model served with TF Serving using a custom container in a managed online endpoint (preview)
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
Last updated 05/13/2021 -+ # Deploy and score a machine learning model by using a managed online endpoint (preview)
machine-learning How To Deploy With Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-with-rest.md
Last updated 05/25/2021 + # Deploy models with REST (preview)
machine-learning How To Inference Server Http https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-inference-server-http.md
-+ Last updated 05/14/2021
machine-learning How To Label Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-label-data.md
After you submit tags for the data at hand, Azure refreshes the page with a new
## Medical image tasks
-Image projects support DICOM image format for X-ray file images. These images can be used to train machine learning models for clinical use.
-- > [!IMPORTANT] > The capability to label DICOM or similar image types is not intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. This capability is not designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of Data Labeling for DICOM or similar image types.
+Image projects support DICOM image format for X-ray file images.
++ While you label the medical images with the same tools as any other images, there is an additional tool for DICOM images. Select the **Window and level** tool to change the intensity of the image. This tool is available only for DICOM images. :::image type="content" source="media/how-to-label-data/window-level-tool.png" alt-text="Window and level tool for DICOM images.":::
machine-learning How To Monitor Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-monitor-datasets.md
The monitor will compare the baseline and target datasets.
## Create target dataset
-The target dataset needs the `timeseries` trait set on it by specifying the timestamp column either from a column in the data or a virtual column derived from the path pattern of the files. Create the dataset with a timestamp through the [Python SDK](#sdk-dataset) or [Azure Machine Learning studio](#studio-dataset). A column representing a "timestamp" must be specified to add `timeseries` trait to the dataset. If your data is partitioned into folder structure with time info, such as '{yyyy/MM/dd}', create a virtual column through the path pattern setting and set it as the "partition timestamp" to improve the importance of time series functionality.
+The target dataset needs the `timeseries` trait set on it by specifying the timestamp column either from a column in the data or a virtual column derived from the path pattern of the files. Create the dataset with a timestamp through the [Python SDK](#sdk-dataset) or [Azure Machine Learning studio](#studio-dataset). A column representing a "timestamp" must be specified to add the `timeseries` trait to the dataset. If your data is partitioned into a folder structure with time info, such as '{yyyy/MM/dd}', create a virtual column through the path pattern setting and set it as the "partition timestamp" to enable time series API functionality.
# [Python](#tab/python) <a name="sdk-dataset"></a>
In the following example, all data under the subfolder *NoaaIsdFlorida/2019* is
[![Partition format](./media/how-to-monitor-datasets/partition-format.png)](media/how-to-monitor-datasets/partition-format-expand.png)
-In the **Schema** settings, specify the timestamp column from a virtual or real column in the specified dataset:
+In the **Schema** settings, specify the **timestamp** column from a virtual or real column in the specified dataset. This type indicates that your data has a time component.
:::image type="content" source="media/how-to-monitor-datasets/timestamp.png" alt-text="Set the timestamp":::
-If your data is partitioned by date, as is the case here, you can also specify the partition_timestamp. This allows more efficient processing of dates.
+If your data is already partitioned by date or time, as is the case here, you can also specify the **Partition timestamp**. This allows more efficient processing of dates and enables timeseries APIs that you can leverage during training.
:::image type="content" source="media/how-to-monitor-datasets/timeseries-partitiontimestamp.png" alt-text="Partition timestamp":::
machine-learning How To Monitor Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-monitor-online-endpoints.md
Last updated 05/03/2021 -+ # Monitor managed online endpoints (preview)
machine-learning How To Safely Rollout Managed Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-safely-rollout-managed-endpoints.md
Last updated 05/25/2021 -+ # Safe rollout for online endpoints (preview)
machine-learning How To Setup Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-setup-vs-code.md
Last updated 05/25/2021 + # Set up the Visual Studio Code Azure Machine Learning extension (preview)
machine-learning How To Train Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-cli.md
Last updated 06/18/2021 -+ # Train models (create jobs) with the 2.0 CLI (preview)
Create job and open in the studio:
## Distributed training
-You can specify the `distributed` section in a command job. Azure ML supports distributed training for PyTorch, Tensorflow, and MPI compatible frameworks. PyTorch and TensorFlow enable native distributed training for the respective frameworks, such as `tf.distributed.Strategy` APIs for TensorFlow.
+You can specify the `distribution` section in a command job. Azure ML supports distributed training for PyTorch, Tensorflow, and MPI compatible frameworks. PyTorch and TensorFlow enable native distributed training for the respective frameworks, such as `tf.distributed.Strategy` APIs for TensorFlow.
Be sure to set the `compute.instance_count`, which defaults to 1, to the desired number of nodes for the job.
machine-learning How To Train With Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-with-rest.md
Last updated 05/25/2021 + # Train models with REST (preview)
machine-learning How To Train With Ui https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-with-ui.md
-+ Last updated 06/22/2021
machine-learning How To Troubleshoot Batch Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-troubleshoot-batch-endpoints.md
-+
machine-learning How To Troubleshoot Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-troubleshoot-managed-online-endpoints.md
Last updated 05/13/2021 + #Customer intent: As a data scientist, I want to figure out why my managed online endpoint deployment failed so that I can fix it.
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-batch-endpoint.md
Last updated 5/25/2021-+ # Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-environments.md
Environment(name="myenv")
Curated environments contain collections of Python packages and are available in your workspace by default. These environments are backed by cached Docker images which reduces the run preparation cost. You can select one of these popular curated environments to start with:
-* The _AzureML-Minimal_ environment contains a minimal set of packages to enable run tracking and asset uploading. You can use it as a starting point for your own environment.
+* The _AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu_ environment contains Scikit-learn, LightGBM, XGBoost, and Dask, as well as the AzureML Python SDK and additional packages.
-* The _AzureML-Tutorial_ environment contains common data science packages. These packages include Scikit-Learn, Pandas, Matplotlib, and a larger set of azureml-sdk packages.
+* The _AzureML-sklearn-0.24-ubuntu18.04-py37-cpu_ environment contains common data science packages. These packages include Scikit-Learn, Pandas, Matplotlib, and a larger set of azureml-sdk packages.
For a list of curated environments, see the [curated environments article](resource-curated-environments.md).
Using the Azure Machine Learning extension, you can create and manage environmen
* To use a managed compute target to train a model, see [Tutorial: Train a model](tutorial-train-models-with-aml.md). * After you have a trained model, learn [how and where to deploy models](how-to-deploy-and-where.md).
-* View the [`Environment` class SDK reference](/python/api/azureml-core/azureml.core.environment%28class%29).
+* View the [`Environment` class SDK reference](/python/api/azureml-core/azureml.core.environment%28class%29).
machine-learning How To Use Managed Online Endpoint Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-managed-online-endpoint-studio.md
-+
machine-learning How To View Online Endpoints Costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-view-online-endpoints-costs.md
Last updated 05/03/2021 -+ # View costs for an Azure Machine Learning managed online endpoint (preview)
machine-learning Reference Managed Online Endpoints Vm Sku List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-managed-online-endpoints-vm-sku-list.md
+ Last updated 05/10/2021
machine-learning Tutorial Deploy Managed Endpoints Using System Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-deploy-managed-endpoints-using-system-managed-identity.md
Last updated 05/25/2021 -+ # Customer intent: As a data scientist, I want to securely access Azure resources for my machine learning model deployment with a managed online endpoint and system assigned managed identity.
machine-learning Tutorial Designer Automobile Price Train Score https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-designer-automobile-price-train-score.md
Splitting data is a common task in machine learning. You will split your data in
1. Connect the left port of the **Clean Missing Data** module to the **Split Data** module. > [!IMPORTANT]
- > Be sure that the left output ports of **Clean Missing Data** connects to **Split Data**. The left port contains the the cleaned data. The right port contains the discarted data.
+ > Be sure that the left output port of **Clean Missing Data** connects to **Split Data**. The left port contains the cleaned data. The right port contains the discarded data.
1. Select the **Split Data** module.
marketplace Business Applications Isv Program https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/business-applications-isv-program.md
The final step for enrollment in the Business Applications ISV Connect program i
Ask your Account Manager or contact [Microsoft Partner Support](https://aka.ms/marketplacepublishersupport) for assistance with your account. For general information on the Business Applications ISV Connect Program, see: -- [Business Applications for ISVs](https://partner.microsoft.com/solutions/business-applications/isv-overview) (online article)-- [Overview of the New Program for Business Applications ISVs](https://aka.ms/BizAppsISVProgram) (PDF)-- [Business Applications ISV Connect Program FAQ](https://assetsprod.microsoft.com/faq-using-partner-center-isv-connect.pdf) (PDF)-- [Upcoming program for Business Applications ISVs](https://cloudblogs.microsoft.com/dynamics365/bdm/2019/04/17/upcoming-program-for-business-applications-isvs/) (blog post)-- [ISV Connect Program Policies](https://aka.ms/bizappsisvpolicies) (PDF)
+- [Business Applications partner information](https://aka.ms/bizappsisvWeb) (website)
+- [ISV Connect program guide](https://aka.ms/bizappsisvProgram) (PDF)
+- [ISV Connect program partner FAQ](https://powerplatformpartners.transform.microsoft.com/download?assetname=assets/ISV%20Connect%20Partner%20FAQ.pdf&download=1) (PDF)
+- [ISV Connect program changes](https://cloudblogs.microsoft.com/dynamics365/bdm/2021/07/14/innovate-and-grow-with-the-simplified-business-applications-isv-connect-program/) (blog post)
migrate Tutorial Migrate Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-migrate-hyper-v.md
After you've verified that the test migration works as expected, you can migrate
## Complete the migration
-1. After the migration is done, right-click the VM > **Stop migration**. This does the following:
+1. After the migration is done, right-click the VM > **Stop Replication**. This does the following:
- Stops replication for the on-premises machine. - Removes the machine from the **Replicating servers** count in Azure Migrate: Server Migration. - Cleans up replication state information for the VM.
network-function-manager Create Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-function-manager/create-device.md
To create a **device** resource, use the following steps.
1. Sign in to the Azure [Preview portal](https://aka.ms/AzureNetworkFunctionManager) using your Microsoft Azure credentials. 1. On the **Basics** tab, configure **Project details** and **Instance details** with the device settings.
- :::image type="content" source="./media/create-device/device-settings.png" alt-text="Screenshot of device settings." lightbox="./media/create-device/device-settings.png":::
+ :::image type="content" source="./media/create-device/device-settings.png" alt-text="Screenshot of device settings.":::
When you fill in the fields, a green check mark will appear when characters you add are validated. Some details are auto filled, while others are customizable fields:
postgresql Concepts Hyperscale Connection Pool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-hyperscale-connection-pool.md
+
+ Title: Connection pooling – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Scaling client database connections
+++++ Last updated : 04/07/2021++
+# Azure Database for PostgreSQL – Hyperscale (Citus) connection pooling
+
+Establishing new connections takes time. That overhead works against most applications,
+which request many short-lived connections. We recommend using a connection
+pooler, both to reduce idle transactions and reuse existing connections. To
+learn more, visit our [blog
+post](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717).
+
+You can run your own connection pooler, or use PgBouncer managed by Azure.
+
+## Managed PgBouncer (preview)
+
+> [!IMPORTANT]
+> The managed PgBouncer connection pooler in Hyperscale (Citus) is currently in
+> preview. This preview version is provided without a service level agreement,
+> and it's not recommended for production workloads. Certain features might not
+> be supported or might have constrained capabilities.
+>
+> You can see a complete list of other new features in [preview features for
+> Hyperscale (Citus)](hyperscale-preview-features.md).
+
+Connection poolers such as PgBouncer allow more clients to connect to the
+coordinator node at once. Applications connect to the pooler, and the pooler
+relays commands to the destination database.
+
+When clients connect through PgBouncer, the number of connections that can
+actively run in the database doesn't change. Instead, PgBouncer queues excess
+connections and runs them when the database is ready.
+
+Hyperscale (Citus) is now offering a managed instance of PgBouncer for server
+groups (in preview). It supports up to 2,000 simultaneous client connections.
+To connect through PgBouncer, follow these steps:
+
+1. Go to the **Connection strings** page for your server group in the Azure
+ portal.
+2. Enable the checkbox **PgBouncer connection strings**. (The listed connection
+ strings will change.)
+3. Update client applications to connect with the new string.
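+
+For example, the pooled connection string typically differs from the direct one only in the port number (a sketch with placeholder values; copy the exact strings from the portal instead of constructing them by hand):
+
+```bash
+# Direct connection to the coordinator node (default PostgreSQL port 5432)
+psql "host=mygroup-c.postgres.database.azure.com port=5432 dbname=citus user=citus sslmode=require"
+
+# Pooled connection through the managed PgBouncer (port 6432 is assumed here)
+psql "host=mygroup-c.postgres.database.azure.com port=6432 dbname=citus user=citus sslmode=require"
+```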
+
+## Next steps
+
+Discover more about the [limits and limitations](concepts-hyperscale-limits.md)
+of Hyperscale (Citus).
postgresql Concepts Hyperscale Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-hyperscale-limits.md
Previously updated : 04/07/2021 Last updated : 07/20/2021 # Azure Database for PostgreSQL – Hyperscale (Citus) limits and limitations
fewer connections available for user queries than connections total.
### Connection pooling
-Establishing new connections takes time. That works against most applications,
-which request many short-lived connections. We recommend using a connection
-pooler, both to reduce idle transactions and reuse existing connections. To
-learn more, visit our [blog
-post](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717).
-
-You can run your own connection pooler, or use PgBouncer managed by Azure.
-
-#### Managed PgBouncer (preview)
-
-> [!IMPORTANT]
-> The managed PgBouncer connection pooler in Hyperscale (Citus) is currently in
-> preview. This preview version is provided without a service level agreement,
-> and it's not recommended for production workloads. Certain features might not
-> be supported or might have constrained capabilities.
->
-> You can see a complete list of other new features in [preview features for
-> Hyperscale (Citus)](hyperscale-preview-features.md).
-
-Connection poolers such as PgBouncer allow more clients to connect to the
-coordinator node at once. Applications connect to the pooler, and the pooler
-relays commands to the destination database.
-
-When clients connect through PgBouncer, the number of connections that can
-actively run in the database doesn't change. Instead, PgBouncer queues excess
-connections and runs them when the database is ready.
-
-Hyperscale (Citus) is now offering a managed instance of PgBouncer for server
-groups (in preview). It supports up to 2,000 simultaneous client connections.
-To connect through PgBouncer, follow these steps:
-
-1. Go to the **Connection strings** page for your server group in the Azure
- portal.
-2. Enable the checkbox **PgBouncer connection strings**. (The listed connection
- strings will change.)
-3. Update client applications to connect with the new string.
+You can scale connections further using [connection
+pooling](concepts-hyperscale-connection-pool.md). Hyperscale (Citus) offers a
+managed PgBouncer connection pooler configured for up to 2,000 simultaneous
+client connections.
## Storage scaling
tables](concepts-hyperscale-columnar.md):
## Next steps
-Learn how to [create a Hyperscale (Citus) server group in the
-portal](quickstart-create-hyperscale-portal.md).
+* Learn how to [create a Hyperscale (Citus) server group in the
+ portal](quickstart-create-hyperscale-portal.md).
+* Learn to enable [connection pooling](concepts-hyperscale-connection-pool.md).
postgresql Tutorial Webapp Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/tutorial-webapp-server-vnet.md
ms.devlang: azurecli Previously updated : 03/18/2021 Last updated : 06/30/2021
This tutorial shows you how to create an Azure App Service Web app with Azure Databa
In this tutorial you will learn how to: >[!div class="checklist"] > * Create a PostgreSQL flexible server in a virtual network
-> * Create a subnet to delegate to App Service
> * Create a web app > * Add the web app to the virtual network > * Connect to Postgres from the web app ## Prerequisites
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+- [Install Azure CLI](/cli/azure/install-azure-cli) version 2.0 or later locally. To see the version installed, run the `az --version` command.
+- Sign in to your account using the [az login](/cli/azure/authenticate-azure-cli) command. Note the **id** property from the command output for the corresponding subscription name.
-This article requires that you're running the Azure CLI version 2.0 or later locally. To see the version installed, run the `az --version` command. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+ ```azurecli
+ az login
+ ```
+- If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. Select the specific subscription ID under your account using [az account set](/cli/azure/account) command.
-You'll need to login to your account using the [az login](/cli/azure/authenticate-azure-cli) command. Note the **id** property from the command output for the corresponding subscription name.
-
-```azurecli
-az login
-```
-
-If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. Select the specific subscription ID under your account using [az account set](/cli/azure/account) command. Substitute the **subscription ID** property from the **az login** output for your subscription into the subscription ID placeholder.
-
-```azurecli
-az account set --subscription <subscription ID>
-```
+ ```azurecli
+ az account set --subscription <subscription ID>
+ ```
## Create a PostgreSQL Flexible Server in a new virtual network Create a private flexible server inside a virtual network (VNET) using the following command:+ ```azurecli
-az postgres flexible-server create --resource-group myresourcegroup --location westus2
+az postgres flexible-server create --resource-group demoresourcegroup --name demoserverpostgres --vnet demoappvnet --location westus2
``` This command performs the following actions, which may take a few minutes: - Create the resource group if it doesn't already exist. - Generates a server name if it is not provided.-- Create a new virtual network for your new postgreSQL server. Make a note of virtual network name and subnet name created for your server since you need to add the web app to the same virtual network.
+- Creates a new virtual network for your new PostgreSQL server and a subnet within this virtual network for the database server.
- Creates an admin username and password for your server if not provided. - Creates an empty database called **postgres**
-> [!NOTE]
-> - Make a note of your password that will be generate for you if not provided. If you forget the password you would have to reset the password using ``` az postgres flexible-server update``` command
-> - If you are not using App Service Environment , you would need to enable Allow access from any Azure IPs using this command.
-> ```azurecli
-> az postgres flexible-server firewall-rule list --resource-group myresourcegroup --server-name mydemoserver --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
-> ```
-
-## Create Subnet for App Service Endpoint
-We now need to have subnet that is delegated to App Service Web App endpoint. Run the following command to create a new subnet in the same virtual network as the database server was created.
-
-```azurecli
-az network vnet subnet create -g myresourcegroup --vnet-name VNETName --name webappsubnetName --address-prefixes 10.0.1.0/24 --delegations Microsoft.Web/serverFarms --service-endpoints Microsoft.Web
+Here is the sample output.
+
+```json
+Local context is turned on. Its information is saved in working directory /home/jane. You can run `az local-context off` to turn it off.
+Command argument values from local context: --resource-group demoresourcegroup, --location: eastus
+Command group 'postgres flexible-server' is in preview. It may be changed/removed in a future release.
+Checking the existence of the resource group ''...
+Creating Resource group 'demoresourcegroup ' ...
+Creating new vnet "demoappvnet" in resource group "demoresourcegroup" ...
+Creating new subnet "Subnet095447391" in resource group "demoresourcegroup " and delegating it to "Microsoft.DBforPostgreSQL/flexibleServers"...
+Creating PostgreSQL Server 'demoserverpostgres' in group 'demoresourcegroup'...
+Your server 'demoserverpostgres' is using sku 'Standard_D2s_v3' (Paid Tier). Please refer to https://aka.ms/postgres-pricing for pricing details
+Make a note of your password. If you forget, you would have to resetyour password with 'az postgres flexible-server update -n demoserverpostgres --resource-group demoresourcegroup -p <new-password>'.
+{
+ "connectionString": "postgresql://generated-username:generated-password@demoserverpostgres.postgres.database.azure.com/postgres?sslmode=require",
+ "host": "demoserverpostgres.postgres.database.azure.com",
+ "id": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/demoresourcegroup/providers/Microsoft.DBforPostgreSQL/flexibleServers/demoserverpostgres",
+ "location": "East US",
+ "password": "generated-password",
+ "resourceGroup": "demoresourcegroup",
+ "skuname": "Standard_D2s_v3",
+ "subnetId": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/demoresourcegroup/providers/Microsoft.Network/virtualNetworks/VNET095447391/subnets/Subnet095447391",
+ "username": "generated-username",
+ "version": "12"
+}
```
-Make a note of the virtual network name and subnet name after this command as would need it to add VNET integration rule for the web app after it is created.
## Create a Web App In this section, you create an app host in App Service, connect this app to the Postgres database, then deploy your code to that host. Make sure you're in the repository root of your application code in the terminal. Note that the Basic plan does not support VNET integration. Please use Standard or Premium.
In this section, you create app host in App Service app, connect this app to the
Create an App Service app (the host process) with the az webapp up command ```azurecli
-az webapp up --resource-group myresourcegroup --location westus2 --plan testappserviceplan --sku P2V2 --name mywebapp
+az webapp up --resource-group demoresourcegroup --location westus2 --plan testappserviceplan --sku P2V2 --name mywebapp
``` > [!NOTE] > - For the --location argument, use the same location as you did for the database in the previous section.
-> - Replace <app-name> with a unique name across all Azure (the server endpoint is https://\<app-name>.azurewebsites.net). Allowed characters for <app-name> are A-Z, 0-9, and -. A good pattern is to use a combination of your company name and an app identifier.
+> - Replace <app-name> with a unique name across all Azure. Allowed characters for <app-name> are A-Z, 0-9, and -. A good pattern is to use a combination of your company name and an app identifier.
This command performs the following actions, which may take a few minutes:
This command performs the following actions, which may take a few minutes:
- Enable default logging for the app, if not already enabled. - Upload the repository using ZIP deployment with build automation enabled.
-## Add the Web App to the virtual network
-Use **az webapp vnet-integration** command to add a regional virtual network integration to a webapp. Replace <vnet-name> and <subnet-name> with the virtual network and subnet name that the flexible server is using.
+### Create Subnet for Web App
+Before enabling VNET integration, you need to have a subnet that is delegated to the App Service web app. Before creating the subnet, view the database subnet address to avoid using the same address-prefix for the web app subnet.
```azurecli
-az webapp vnet-integration add -g myresourcegroup -n mywebapp --vnet VNETName --subnet webappsubnetName
+az network vnet show --resource-group demoresourcegroup -n demoappvnet
```
-## Configure environment variables to connect the database
-With the code now deployed to App Service, the next step is to connect the app to the flexible server in Azure. The app code expects to find database information in a number of environment variables. To set environment variables in App Service, you create "app settings" with the ```az webapp config appsettings``` set command.
+Run the following command to create a new subnet in the same virtual network where the database server was created. **Update the address-prefix to avoid conflict with the database subnet.**
```azurecli
-az webapp config appsettings set --settings DBHOST="<postgres-server-name>.postgres.database.azure.com" DBNAME="postgres" DBUSER="<username>" DBPASS="<password>"
+az network vnet subnet create --resource-group demoresourcegroup --vnet-name demoappvnet --name webappsubnet --address-prefixes 10.0.1.0/24 --delegations Microsoft.Web/serverFarms
```
+## Add the Web App to the virtual network
+Use [az webapp vnet-integration](/cli/azure/webapp/vnet-integration) command to add a regional virtual network integration to a webapp.
+
+```azurecli
+az webapp vnet-integration add --resource-group demoresourcegroup -n mywebapp --vnet demoappvnet --subnet webappsubnet
+```
+
+## Configure environment variables to connect the database
+With the code now deployed to App Service, the next step is to connect the app to the flexible server in Azure. The app code expects to find database information in a number of environment variables. To set environment variables in App Service, use the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set) command.
-- Replace ```postgres-server-name```,```username```, ```password``` for the newly created flexible server command.-- Replace <username> and <password> with the credentials that the command also generated for you.
+
+```azurecli
+
+az webapp config appsettings set --name mywebapp --settings DBHOST="<postgres-server-name>.postgres.database.azure.com" DBNAME="postgres" DBUSER="<username>" DBPASS="<password>"
+```
+- Replace **postgres-server-name**, **username**, and **password** with the values from the newly created flexible server.
+- Replace **<username>** and **<password>** with the credentials that the command also generated for you.
- The resource group and app name are drawn from the cached values in the .azure/config file.-- The command creates settings named ```DBHOST```,```DBNAME```,```DBUSER```, and ```DBPASS```. If your application code is using different name for the database information then use those names for the app settings as mentioned in the code.
+- The command creates settings named **DBHOST**, **DBNAME**, **DBUSER**, and **DBPASS**. If your application code uses different names for the database information, use those names for the app settings as mentioned in the code.
+
+Configure the web app to allow all outbound connections from within the virtual network.
+```azurecli
+az webapp config set --name mywebapp --resource-group demoresourcegroup --generic-configurations '{"vnetRouteAllEnabled": true}'
+```
## Clean up resources Clean up all resources you created in the tutorial using the following command. This command deletes all the resources in this resource group. ```azurecli
-az group delete -n myresourcegroup
+az group delete -n demoresourcegroup
``` - ## Next steps > [!div class="nextstepaction"] > [Map an existing custom DNS name to Azure App Service](../../app-service/app-service-web-tutorial-custom-domain.md)
postgresql Howto Hyperscale Scale Grow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-hyperscale-scale-grow.md
Previously updated : 04/07/2021 Last updated : 07/20/2021 # Scale a Hyperscale (Citus) server group
Click the **Save** button to make the changed value take effect.
> To take advantage of newly added nodes you must [rebalance distributed table > shards](howto-hyperscale-scale-rebalance.md), which means moving some > [shards](concepts-hyperscale-distributed-data.md#shards) from existing nodes
-> to the new ones.
+> to the new ones. Rebalancing can work in the background, and requires no
+> downtime.
## Increase or decrease vCores on nodes
postgresql Howto Hyperscale Scale Rebalance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-hyperscale-scale-rebalance.md
Previously updated : 04/09/2021 Last updated : 07/20/2021 # Rebalance shards in Hyperscale (Citus) server group To take advantage of newly added nodes you must rebalance distributed table [shards](concepts-hyperscale-distributed-data.md#shards), which means moving
-some shards from existing nodes to the new ones.
+some shards from existing nodes to the new ones. Hyperscale (Citus) offers
+zero-downtime rebalancing, meaning queries can run without interruption during
+shard rebalancing.
## Determine if the server group needs a rebalance
postgresql Hyperscale Preview Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale-preview-features.md
Here are the features currently available for preview:
Replicas are a useful tool to improve performance for read-only workloads. * **[Managed
- PgBouncer](concepts-hyperscale-limits.md#managed-pgbouncer-preview)**.
+ PgBouncer](concepts-hyperscale-connection-pool.md)**.
A connection pooler that allows many clients to connect to the server group at once, while limiting the number of active connections. It satisfies connection requests while keeping
purview How To Bulk Edit Assets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-bulk-edit-assets.md
Title: How to bulk edit assets to tag classifications, glossary terms and modify
description: Learn bulk edit assets in Azure Purview. --++ Last updated 11/24/2020
purview Register Scan Adls Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-adls-gen1.md
Title: 'Register and scan Azure Data Lake Storage (ADLS) Gen1'
description: This tutorial describes how to scan data from Azure Data Lake Storage Gen1 into Azure Purview. --++ Last updated 05/08/2021 # Customer intent: As a data steward or catalog administrator, I need to understand how to scan data from Azure Data Lake Storage Gen1 into the catalog.
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-sql-database.md
It is required to get the service principal's application ID and secret:
### Firewall settings
-Your database server must allow Azure connections to be enabled. This will allow Azure Purview to reach and connect the server. You can follow the How-to guide for [Connections from inside Azure](../azure-sql/database/firewall-configure.md#connections-from-inside-azure).
+If your database server has a firewall enabled, you will need to update the firewall to allow access in one of two ways:
+
+1. Allow Azure connections through the firewall.
+1. Install a Self-Hosted Integration Runtime and give it access through the firewall.
+
+#### Allow Azure Connections
+
+Enabling Azure connections will allow Azure Purview to reach and connect to the server without updating the firewall itself. You can follow the How-to guide for [Connections from inside Azure](../azure-sql/database/firewall-configure.md#connections-from-inside-azure).
1. Navigate to your database account 1. Select the server name in the **Overview** page
Your database server must allow Azure connections to be enabled. This will allow
1. Select **Yes** for **Allow Azure services and resources to access this server** :::image type="content" source="media/register-scan-azure-sql-database/sql-firewall.png" alt-text="Allow Azure services and resources to access this server." border="true":::
-
-> [!Note]
-> Currently Azure Purview does not support VNET configuration. Therefore you cannot do IP-based firewall settings.
+
+#### Self-Hosted Integration Runtime
+
+A self-hosted integration runtime (SHIR) can be installed on a machine to connect with a resource in a private network.
+
+1. [Create and install a self-hosted integration runtime](/azure/purview/manage-integration-runtimes) on a personal machine, or a machine inside the same VNet as your database server.
+1. Check your database server firewall to confirm that the SHIR machine has access through the firewall. Add the IP of the machine if it does not already have access.
+1. If your Azure SQL Server is behind a private endpoint or in a VNet, you can use an [ingestion private endpoint](catalog-private-link.md#ingestion-private-endpoints-and-scanning-sources) to ensure end-to-end network isolation.
## Register an Azure SQL Database data source
purview Supported Browsers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/supported-browsers.md
Title: Supported browsers
description: This article provides the list of supported browsers for Azure Purview. --++ Last updated 11/18/2020
purview Tutorial Scan Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-scan-data.md
After the catalog configuration is complete, run the following scripts in the Po
When you run the command, a pop-up window may appear for you to sign in using your Azure Active Directory credentials.
+ > [!TIP]
+ > If MFA is enabled across your tenant, you may encounter an MFA error at this step. If you do, briefly disable MFA for the account running this script. Then run again.
+ 1. Use the following command to run the starter kit. Replace the `CatalogName`, `TenantID`, `SubscriptionID`, `NewResourceGroupName`, and `CatalogResourceGroupName` placeholders. For `NewResourceGroupName`, use a unique name (with lowercase alphanumeric characters only) for the resource group that will contain the data estate.
search Search Api Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-api-preview.md
Previously updated : 07/15/2021 Last updated : 07/21/2021 # Preview features in Azure Cognitive Search
Preview features that transition to general availability are removed from this l
|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Category | Description | Availability | |||-||
-| [**RBAC support**](search-security-rbac.md) | Securit