Updates from: 01/11/2022 02:06:56
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Tutorial Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/tutorial-create-tenant.md
You learn how to register an application in the next tutorial.
![Directories + subscriptions with Switch button](media/tutorial-create-tenant/switch-directory.png)
-1. Add **Microsoft.AzureActiveDirectory** as a resource provider for the Azure subscription your're using ([learn more](../azure-resource-manager/management/resource-providers-and-types.md?WT.mc_id=Portal-Microsoft_Azure_Support#register-resource-provider-1)):
+1. Add **Microsoft.AzureActiveDirectory** as a resource provider for the Azure subscription you're using ([learn more](../azure-resource-manager/management/resource-providers-and-types.md?WT.mc_id=Portal-Microsoft_Azure_Support#register-resource-provider-1)):
1. In the Azure portal, search for and select **Subscriptions**.
2. Select your subscription, and then in the left menu, select **Resource providers**. If you don't see the left menu, select the **Show the menu for < name of your subscription >** icon at the top left part of the page to expand it.
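If you prefer scripting over the portal steps above, the following is a minimal PowerShell sketch that registers the same resource provider, assuming the Az module and an existing `Connect-AzAccount` sign-in; the subscription ID is a placeholder.

```powershell
# Target the subscription you use for Azure AD B2C; the ID below is a placeholder.
Set-AzContext -Subscription "00000000-0000-0000-0000-000000000000"

# Register the resource provider, then confirm its registration state.
Register-AzResourceProvider -ProviderNamespace "Microsoft.AzureActiveDirectory"
Get-AzResourceProvider -ProviderNamespace "Microsoft.AzureActiveDirectory" |
    Select-Object ProviderNamespace, RegistrationState
```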
active-directory-b2c User Profile Attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-profile-attributes.md
In user migration scenarios, if the accounts you want to migrate have weaker pas
## MFA phone number attribute
-When using a phone for multi-factor authentication (MFA), the mobile phone is used to verify the user identity. To [add](/graph/api/authentication-post-phonemethods) a new phone number programatically, [update](/graph/api/b2cauthenticationmethodspolicy-update), [get](/graph/api/b2cauthenticationmethodspolicy-get), or [delete](/graph/api/phoneauthenticationmethod-delete) the phone number, use MS Graph API [phone authentication method](/graph/api/resources/phoneauthenticationmethod).
+When using a phone for multi-factor authentication (MFA), the mobile phone is used to verify the user identity. To [add](/graph/api/authentication-post-phonemethods) a new phone number programmatically, [update](/graph/api/b2cauthenticationmethodspolicy-update), [get](/graph/api/b2cauthenticationmethodspolicy-get), or [delete](/graph/api/phoneauthenticationmethod-delete) the phone number, use MS Graph API [phone authentication method](/graph/api/resources/phoneauthenticationmethod).
In Azure AD B2C [custom policies](custom-policy-overview.md), the phone number is available through the `strongAuthenticationPhoneNumber` claim type.
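As an illustration of the Microsoft Graph phone authentication method API mentioned above, here is a hedged PowerShell sketch that lists a user's registered phone methods; the token variable and user object ID are placeholders.

```powershell
# Assumes $accessToken holds a Microsoft Graph token with UserAuthenticationMethod.Read.All.
$userId = "00000000-0000-0000-0000-000000000000"   # placeholder user object ID
$uri    = "https://graph.microsoft.com/v1.0/users/$userId/authentication/phoneMethods"

# List the phone authentication methods registered for the user.
Invoke-RestMethod -Method Get -Uri $uri -Headers @{ Authorization = "Bearer $accessToken" }
```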
active-directory Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/reference-powershell.md
Title: 'AADCloudSyncTools PowerShell Module for Azure AD Connect cloud sync'
+ Title: 'AADCloudSyncTools PowerShell module for Azure AD Connect cloud sync'
description: This article describes how to install the Azure AD Connect cloud provisioning agent.
-# AADCloudSyncTools PowerShell Module for Azure AD Connect cloud sync
+# AADCloudSyncTools PowerShell module for Azure AD Connect cloud sync
-The AADCloudSyncTools module provides a set of useful tools that you can use to help manage your Azure AD Connect Cloud Sync deployments.
+The AADCloudSyncTools module provides a set of useful tools that can help you manage your deployments of Azure Active Directory Connect (Azure AD Connect) cloud sync.
## Prerequisites
-The following prerequisites are required:
-- All the prerequisites for this module can be automatically installed using `Install-AADCloudSyncToolsPrerequisites`-- This module uses MSAL authentication, so it requires MSAL.PS module installed. To verify, in a PowerShell window, execute `Get-module MSAL.PS -ListAvailable`. If the module is installed correctly you will get a response. You can use `Install-AADCloudSyncToolsPrerequisites` to install the latest version of MSAL.PS-- Although the AzureAD PowerShell module is not a prerequisite for any functionality of this module it is useful to be present, so it is also automatically installed when using `Install-AADCloudSyncToolsPrerequisites`. -- Installing modules from PowerShell Gallery requires TLS 1.2 enforcement. The cmdlet `Install-AADCloudSyncToolsPrerequisites` sets TLS 1.2 enforcement before installing all the prerequisites. To ensure that you can manually install modules, set the following in the PowerShell session before using `Install-Module`:
+You can automatically install all the prerequisites for the AADCloudSyncTools module by using `Install-AADCloudSyncToolsPrerequisites`. You'll do that in the next section of this article.
+
+Here are some details about what you need:
+
+- The AADCloudSyncTools module uses Microsoft Authentication Library (MSAL) authentication, so it requires installation of the MSAL.PS module. To verify the installation, in a PowerShell window, run `Get-module MSAL.PS -ListAvailable`. If the module is installed correctly, you'll get a response. If necessary, you can use `Install-AADCloudSyncToolsPrerequisites` to install the latest version of MSAL.PS.
+- Although the Azure AD PowerShell module is not required for any functionality of the AADCloudSyncTools module, it is useful. So it's automatically installed when you use `Install-AADCloudSyncToolsPrerequisites`.
+- Installing modules from the PowerShell Gallery requires Transport Layer Security (TLS) 1.2 enforcement. The cmdlet `Install-AADCloudSyncToolsPrerequisites` sets TLS 1.2 enforcement before installing all the prerequisites. To ensure that you can manually install modules, set the following in the PowerShell session before using the cmdlet:
+
+   ```
+   [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
+   ```
+
+## Install the AADCloudSyncTools PowerShell module
-To install and use AADCloudSyncTools module use the following steps:
-
-1. Open Windows PowerShell with administrative privileges
-2. Type or copy and paste the following: `Import-module -Name "C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Utility\AADCloudSyncTools"`
-3. Hit enter.
-4. To verify the module was imported, enter or copy and paste the following: `Get-module AADCloudSyncTools`
-5. You should now see information about the module.
-6. Next, to install the AADCloudSyncTools module pre-requisites run: `Install-AADCloudSyncToolsPrerequisites`
-7. On the first run, the PoweShellGet module will be installed if not present. To load the new PowershellGet module close the PowerShell Window and open a new PowerShell session with administrative privileges.
-8. Import the module again using step 2.
-9. Run `Install-AADCloudSyncToolsPrerequisites` to install the MSAL and AzureAD modules
-11. All prerequisites should be successfully installed
- ![Install module](media/reference-powershell/install-1.png)
-12. Every time you want to use AADCloudSyncTools module in new PowerShell session, enter or copy and paste the following:
-```
-Import-module "C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Utility\AADCloudSyncTools"
-```
--
-## AADCloudSyncTools Cmdlets
+
+1. Open Windows PowerShell with administrative privileges.
+2. Run `Import-module -Name "C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Utility\AADCloudSyncTools"`.
+3. To verify that the module was imported, run `Get-module AADCloudSyncTools`.
+
+ You should now see information about the module.
+4. To install the AADCloudSyncTools module prerequisites, run `Install-AADCloudSyncToolsPrerequisites`.
+5. On the first run, the PowerShellGet module will be installed if it's not present. To load the new PowerShellGet module, close the PowerShell window and open a new PowerShell session with administrative privileges.
+6. Import the module again by running `Import-module -Name "C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Utility\AADCloudSyncTools"`.
+7. Run `Install-AADCloudSyncToolsPrerequisites` again to install the MSAL and Azure AD modules.
+
+ All prerequisites should now be installed.
+
+ ![Screenshot of the notification in the PowerShell window that says the prerequisites were installed successfully.](media/reference-powershell/install-1.png)
+8. Every time you want to use the AADCloudSyncTools module in a new PowerShell session, run the following command:
+
+ ```
+ Import-module "C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Utility\AADCloudSyncTools"
+ ```
++
+## AADCloudSyncTools cmdlets
+ ### Connect-AADCloudSyncTools
-Uses the MSAL.PS module to request a token for the Azure AD administrator to access Microsoft Graph
+This cmdlet uses the MSAL.PS module to request a token for the Azure AD administrator to access Microsoft Graph.
### Export-AADCloudSyncToolsLogs
-Exports and packages all the troubleshooting data in a compressed file, as follows:
- 1. Sets verbose tracing and starts collecting data from the provisioning agent (same as `Start-AADCloudSyncToolsVerboseLogs`)
- <br>You can find these trace logs in the folder `C:\ProgramData\Microsoft\Azure AD Connect Provisioning Agent\Trace` </br>
- 2. Stops data collection after 3 minutes and disables verbose tracing (same as `Stop-AADCloudSyncToolsVerboseLogs`)
- <br>You can specify a different duration with `-TracingDurationMins` or completely skip verbose tracing with `-SkipVerboseTrace` </br>
- 3. Collects Event Viewer Logs for the last 24 hours
- 4. Compresses all the agent logs, verbose logs and event viewer logs into a compressed zip file under the User's Documents folder
- <br>You can specify a different output folder with `-OutputPath <folder path>` </br>
+
+This cmdlet exports and packages all the troubleshooting data in a compressed file, as follows:
+
+1. Sets verbose tracing and starts collecting data from the provisioning agent (same as `Start-AADCloudSyncToolsVerboseLogs`). You can find these trace logs in the folder *C:\ProgramData\Microsoft\Azure AD Connect Provisioning Agent\Trace*.
+2. Stops data collection after three minutes and disables verbose tracing (same as `Stop-AADCloudSyncToolsVerboseLogs`). You can specify a different duration by using `-TracingDurationMins` or completely skip verbose tracing by using `-SkipVerboseTrace`.
+3. Collects Event Viewer logs for the last 24 hours.
+4. Compresses all the agent logs, verbose logs, and Event Viewer logs into a .zip file in the user's *Documents* folder. You can specify a different output folder by using `-OutputPath <folder path>`.
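For example, a short usage sketch based on the parameters described above; the duration and output path are placeholder values.

```powershell
# Collect agent logs with five minutes of verbose tracing and write the .zip to a custom folder.
Export-AADCloudSyncToolsLogs -TracingDurationMins 5 -OutputPath "C:\Temp\CloudSyncLogs"

# Or skip verbose tracing and package only the existing logs.
Export-AADCloudSyncToolsLogs -SkipVerboseTrace
```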
### Get-AADCloudSyncToolsInfo
-Shows Azure AD Tenant details and internal variables state
+
+This cmdlet shows Azure AD tenant details and the state of internal variables.
### Get-AADCloudSyncToolsJob
-Uses Graph to get AD2AAD Service Principals and returns the Synchronization Job information.
-Can be also called using the specific Sync Job ID as a parameter.
+
+This cmdlet uses Microsoft Graph to get Azure AD service principals and returns the sync job's information. You can also call it by using the specific sync job ID as a parameter.
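A hedged workflow sketch that ties this cmdlet to `Connect-AADCloudSyncTools`; the `-Id` parameter name and the `id` property below are assumptions, so verify them with `Get-Help Get-AADCloudSyncToolsJob` in your environment.

```powershell
# Sign in first so the module can call Microsoft Graph.
Connect-AADCloudSyncTools

# List all sync jobs, then query one job by its ID.
$jobs = Get-AADCloudSyncToolsJob
# The -Id parameter name and the .id property are assumptions; check Get-Help before relying on them.
Get-AADCloudSyncToolsJob -Id $jobs[0].id
```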
### Get-AADCloudSyncToolsJobSchedule
-Uses Graph to get AD2AAD Service Principals and returns the Synchronization Job's Schedule.
-Can be also called using the specific Sync Job ID as a parameter.
+
+This cmdlet uses Microsoft Graph to get Azure AD service principals and returns the sync job's schedule. You can also call it by using the specific sync job ID as a parameter.
### Get-AADCloudSyncToolsJobSchema
-Uses Graph to get AD2AAD Service Principals and returns the Synchronization Job's Schema.
+
+This cmdlet uses Microsoft Graph to get Azure AD service principals and returns the sync job's schema.
### Get-AADCloudSyncToolsJobScope
-Uses Graph to get the Synchronization Job's Schema for the provided Sync Job ID and outputs all filter group's scopes.
+
+This cmdlet uses Microsoft Graph to get the sync job's schema for the provided sync job ID and outputs all filter groups' scopes.
### Get-AADCloudSyncToolsJobSettings
-Uses Graph to get AD2AAD Service Principals and returns the Synchronization Job's Settings.
-Can be also called using the specific Sync Job ID as a parameter.
+
+This cmdlet uses Microsoft Graph to get Azure AD service principals and returns the sync job's settings. You can also call it by using the specific sync job ID as a parameter.
### Get-AADCloudSyncToolsJobStatus
-Uses Graph to get AD2AAD Service Principals and returns the Synchronization Job's Status.
-Can be also called using the specific Sync Job ID as a parameter.
+
+This cmdlet uses Microsoft Graph to get Azure AD service principals and returns the sync job's status. You can also call it by using the specific sync job ID as a parameter.
### Get-AADCloudSyncToolsServicePrincipal
-Uses Graph to get the Service Principal(s) for AD2AAD and/or SyncFabric.
-Without parameters, will only return AD2AAD Service Principal(s).
+
+This cmdlet uses Microsoft Graph to get the service principals for Azure AD and/or Azure Service Fabric. Without parameters, it will return only Azure AD service principals.
### Install-AADCloudSyncToolsPrerequisites
-Checks for the presence of PowerShellGet v2.2.4.1 or later and Azure AD and MSAL.PS modules and installs these if missing.
+
+This cmdlet checks for the presence of PowerShellGet v2.2.4.1 or later, the Azure AD module, and the MSAL.PS module. It installs these items if they're missing.
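A quick way to see what this cmdlet will find is to list the installed versions yourself; this is a minimal sketch, not part of the module.

```powershell
# Verify the modules that Install-AADCloudSyncToolsPrerequisites looks for.
Get-Module PowerShellGet, MSAL.PS, AzureAD -ListAvailable |
    Select-Object Name, Version
```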
### Invoke-AADCloudSyncToolsGraphQuery
-Invokes a Web request for the URI, Method and Body specified as parameters
+
+This cmdlet invokes a web request for the URI, method, and body specified as parameters.
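A hedged example of what a call might look like; the exact parameter names (`-Uri`, `-Method`, `-Body`) are assumptions based on the description above, so confirm them with `Get-Help Invoke-AADCloudSyncToolsGraphQuery`.

```powershell
# Query Microsoft Graph for service principals (parameter names are assumptions).
Invoke-AADCloudSyncToolsGraphQuery -Uri "https://graph.microsoft.com/beta/servicePrincipals" `
                                   -Method "GET" -Body $null
```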
### Repair-AADCloudSyncToolsAccount
-Uses Azure AD PowerShell to delete the current account (if present) and resets the Sync Account authentication with a new synchronization account in Azure AD.
+
+This cmdlet uses Azure AD PowerShell to delete the current account (if present). It then resets the sync account authentication with a new sync account in Azure AD.
### Restart-AADCloudSyncToolsJob
-Restarts a full synchronization.
+
+This cmdlet restarts a full synchronization.
### Resume-AADCloudSyncToolsJob
-Continues synchronization from the previous watermark.
+
+This cmdlet continues synchronization from the previous watermark.
### Start-AADCloudSyncToolsVerboseLogs
-Modifies the 'AADConnectProvisioningAgent.exe.config' to enable verbose tracing and restarts the AADConnectProvisioningAgent service
-You can use -SkipServiceRestart to prevent service restart but any config changes will not take effect. You can find these trace logs in the folder C:\ProgramData\Microsoft\Azure AD Connect Provisioning Agent\Trace.
+
+This cmdlet modifies *AADConnectProvisioningAgent.exe.config* to enable verbose tracing and restarts the AADConnectProvisioningAgent service. You can use `-SkipServiceRestart` to prevent service restart, but any configuration changes will not take effect. You can find these trace logs in the folder *C:\ProgramData\Microsoft\Azure AD Connect Provisioning Agent\Trace*.
### Stop-AADCloudSyncToolsVerboseLogs
-Modifies the 'AADConnectProvisioningAgent.exe.config' to disable verbose tracing and restarts the AADConnectProvisioningAgent service.
-You can use -SkipServiceRestart to prevent service restart but any config changes will not take effect.
+
+This cmdlet modifies *AADConnectProvisioningAgent.exe.config* to disable verbose tracing and restarts the AADConnectProvisioningAgent service. You can use `-SkipServiceRestart` to prevent service restart, but any configuration changes will not take effect.
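For example, a minimal capture sequence using the two cmdlets and the `-SkipServiceRestart` switch described above:

```powershell
# Turn on verbose tracing (restarts the AADConnectProvisioningAgent service).
Start-AADCloudSyncToolsVerboseLogs

# ...reproduce the issue you're troubleshooting...

# Turn verbose tracing off again; add -SkipServiceRestart only if you'll restart the service yourself.
Stop-AADCloudSyncToolsVerboseLogs
```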
### Suspend-AADCloudSyncToolsJob
-Pauses synchronization.
+
+This cmdlet pauses synchronization.
## Next steps
active-directory Concept Conditional Access Session https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-session.md
Previously updated : 10/25/2021 Last updated : 01/10/2022 -+
Within a Conditional Access policy, an administrator can make use of session con
## Application enforced restrictions
-Organizations can use this control to require Azure AD to pass device information to the selected cloud apps. The device information enables the cloud apps to know whether a connection is started from a compliant or domain-joined device and alter the session experience. This control only supports SharePoint Online and Exchange Online as selected cloud apps. When selected, the cloud app uses the device information to provide users, depending on the device state, with a limited (when the device isn't managed) or full experience (when the device is managed and compliant).
+Organizations can use this control to require Azure AD to pass device information to the selected cloud apps. The device information allows cloud apps to know if a connection is from a compliant or domain-joined device and update the session experience. This control only supports SharePoint Online and Exchange Online as selected cloud apps. When selected, the cloud app uses the device information to provide users with a limited or full experience: limited when the device isn't managed or compliant, and full when the device is managed and compliant.
-For more information on the use and configuration of app enforced restrictions, see the following articles:
+For more information on the use and configuration of app-enforced restrictions, see the following articles:
- [Enabling limited access with SharePoint Online](/sharepoint/control-access-from-unmanaged-devices)
- [Enabling limited access with Exchange Online](https://aka.ms/owalimitedaccess)

## Conditional Access application control
-Conditional Access App Control uses a reverse proxy architecture and is uniquely integrated with Azure AD Conditional Access. Azure AD Conditional Access allows you to enforce access controls on your organization's apps based on certain conditions. The conditions define who (user or group of users) and what (which cloud apps) and where (which locations and networks) a Conditional Access policy is applied to. After you've determined the conditions, you can route users to [Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security) where you can protect data with Conditional Access App Control by applying access and session controls.
+Conditional Access App Control uses a reverse proxy architecture and is uniquely integrated with Azure AD Conditional Access. Azure AD Conditional Access allows you to enforce access controls on your organization's apps based on certain conditions. The conditions define what user or group of users, cloud apps, and locations and networks a Conditional Access policy applies to. After you've determined the conditions, you can route users to [Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security) where you can protect data with Conditional Access App Control by applying access and session controls.
-Conditional Access App Control enables user app access and sessions to be monitored and controlled in real time based on access and session policies. Access and session policies are used within the Defender for Cloud Apps portal to further refine filters and set actions to be taken on a user. With the access and session policies, you can:
+Conditional Access App Control enables user app access and sessions to be monitored and controlled in real time based on access and session policies. Access and session policies are used within the Defender for Cloud Apps portal to refine filters and set actions to take. With the access and session policies, you can:
- Prevent data exfiltration: You can block the download, cut, copy, and print of sensitive documents on, for example, unmanaged devices.
- Protect on download: Instead of blocking the download of sensitive documents, you can require documents to be labeled and protected with Azure Information Protection. This action ensures the document is protected and user access is restricted in a potentially risky session.
For more information, see the article [Configure authentication session manageme
## Customize continuous access evaluation
-[Continuous access evaluation](concept-continuous-access-evaluation.md) is auto enabled as part of an organization's Conditional Access policies. For organizations who wish to disable or strictly enforce continuous access evaluation, this configuration is now an option within the session control within Conditional Access. Continuous access evaluation policies can be scoped to all users or specific users and groups. Admins can make the following selections while creating a new policy or while editing an existing Conditional Access policy.
+[Continuous access evaluation](concept-continuous-access-evaluation.md) is auto enabled as part of an organization's Conditional Access policies. For organizations who wish to disable continuous access evaluation, this configuration is now an option within the session control within Conditional Access. Continuous access evaluation policies can be scoped to all users or specific users and groups. Admins can make the following selection while creating a new policy or while editing an existing Conditional Access policy.
- **Disable** only works when **All cloud apps** are selected, no conditions are selected, and **Disable** is selected under **Session** > **Customize continuous access evaluation** in a Conditional Access policy. You can choose to disable all users or specific users and groups.
-- **Strict enforcement** can be used to further strengthen the security benefits from CAE. It will make sure that any critical event and policy will be enforced in real time. There are two additional scenarios where CAE will enforce when strict enforcement mode is turned on:
- - Non-CAE capable clients will not be allowed to access CAE-capable services.
- - Access will be rejected when client's IP address seen by resource provider isn't in the Conditional Access's allowed range.
-> [!NOTE]
-> You should only enable strict enforcement after you ensure that all the client applications support CAE and you have included all your IP addresses seen by Azure AD and the resource providers, like Exchange online and Azure Resource Mananger, in your location policy under Conditional Access. Otherwise, users in your tenants could be blocked.
:::image type="content" source="media/concept-conditional-access-session/continuous-access-evaluation-session-controls.png" alt-text="CAE Settings in a new Conditional Access policy in the Azure portal." lightbox="media/concept-conditional-access-session/continuous-access-evaluation-session-controls.png":::
For more information, see the article [Configure authentication session manageme
During an outage, Azure AD will extend access to existing sessions while enforcing Conditional Access policies. If a policy cannot be evaluated, access is determined by resilience settings.
-If resilience defaults are disabled, access is denied once existing sessions expire.ΓÇï For more information, see the article [Conditional Access: Resilience defaults](resilience-defaults.md).
+If resilience defaults are disabled, access is denied once existing sessions expire. For more information, see the article [Conditional Access: Resilience defaults](resilience-defaults.md).
## Next steps
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Previously updated : 10/21/2021 Last updated : 01/10/2022 -+
This process enables the scenario where users lose access to organizational Shar
> [!NOTE]
> Teams and SharePoint Online do not support user risk events.
-### Conditional Access policy evaluation (preview)
+### Conditional Access policy evaluation
Exchange Online, SharePoint Online, Teams, and MS Graph can synchronize key Conditional Access policies for evaluation within the service itself.
If you aren't using CAE-capable clients, your default access token lifetime will
1. In this case, the resource provider denies access, and sends a 401+ claim challenge back to the client.
1. The CAE-capable client understands the 401+ claim challenge. It bypasses the caches and goes back to step 1, sending its refresh token along with the claim challenge back to Azure AD. Azure AD will then reevaluate all the conditions and prompt the user to reauthenticate in this case.
-### User condition change flow (Preview)
+### User condition change flow
In the following example, a Conditional Access administrator has configured a location based Conditional Access policy to only allow access from specific IP ranges:
In the following example, a Conditional Access administrator has configured a lo
1. In this case, the resource provider denies access, and sends a 401+ claim challenge back to the client. The client is challenged because it isn't coming from an allowed IP range.
1. The CAE-capable client understands the 401+ claim challenge. It bypasses the caches and goes back to step 1, sending its refresh token along with the claim challenge back to Azure AD. Azure AD reevaluates all the conditions and will deny access in this case.
-## Enable or disable CAE (Preview)
+## Enable or disable CAE
-CAE setting has been moved to under the Conditional Access blade. New CAE customers will be able to access and toggle CAE directly when creating Conditional Access policies. However, some existing customers will need to go through migration before they can begin to access CAE through Conditional Access.
+The CAE setting has been moved to under the Conditional Access blade. New CAE customers can access and toggle CAE directly when creating Conditional Access policies. However, some existing customers must go through migration before they can access CAE through Conditional Access.
#### Migration
-Customers who have configured CAE settings under Security before have to migrate these setting to a new Conditional Access policy. Use the steps that follow to migrate your CAE settings to a Conditional Access policy.
+Customers who previously configured CAE settings under Security must migrate those settings to a new Conditional Access policy. Use the steps that follow to migrate your CAE settings to a Conditional Access policy.
:::image type="content" source="media/concept-continuous-access-evaluation/migrate-continuous-access-evaluation.png" alt-text="Portal view showing the option to migrate continuous access evaluation to a Conditional Access policy." lightbox="media/concept-continuous-access-evaluation/migrate-continuous-access-evaluation.png":::

1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Continuous access evaluation (preview)**.
+1. Browse to **Azure Active Directory** > **Security** > **Continuous access evaluation**.
1. You'll then see the option to **Migrate** your policy. This action is the only one that you'll have access to at this point.
-1. Browse to **Conditional Access** and you will find a new policy named **CA policy created from CAE settings** with your settings configured. Administrators can choose to customize this policy or create their own to replace it.
+1. Browse to **Conditional Access** and you'll find a new policy named **CA policy created from CAE settings** with your settings configured. Administrators can choose to customize this policy or create their own to replace it.
The following table describes the migration experience of each customer group based on previously configured CAE settings.
The following table describes the migration experience of each customer group ba
More information about continuous access evaluation as a session control can be found in the section, [Customize continuous access evaluation](concept-conditional-access-session.md#customize-continuous-access-evaluation).
-### Strict enforcement
-
-With the latest CAE setting under Conditional Access, strict enforcement is a new feature that allows for enhanced security based on two factors: IP address variation and client capability. This functionality can be enabled while customizing CAE options for a given policy. By turning on strict enforcement, CAE will revoke access upon detecting any instances of either [IP address variation](#ip-address-variation) or a lack of CAE [client capability](#client-capabilities).
-
-> [!NOTE]
-> You should only enable strict enforcement after you ensure that all the client applications support CAE and you have included all your IP addresses seen by Azure AD and the resource providers, like Exchange online and Azure Resource Mananger, in your location policy under Conditional Access. Otherwise, you could be blocked.
-
## Limitations

### Group membership and Policy update effective time
To reduce this time a SharePoint Administrator can reduce the maximum lifetime o
### Enable after a user is disabled
-If you enable a user right after disabling, there's some latency before the account is recognized as enabled in downstream Microsoft services.
+If you enable a user right after disabling, there's some latency before the account is recognized as enabled in downstream Microsoft services.
-- SharePoint Online and Teams typically have a 15-minute delay. 
+- SharePoint Online and Teams typically have a 15-minute delay.
- Exchange Online typically has a 35-40 minute delay.

### Push notifications
active-directory Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/workload-identity.md
Previously updated : 10/25/2021 Last updated : 01/10/2022
A workload identity is an identity that allows an application or service princip
These differences make workload identities difficult to manage, put them at higher risk for leaks, and reduce the potential for securing access.
+> [!IMPORTANT]
+> In public preview, you can scope Conditional Access policies to service principals in Azure AD with an Azure Active Directory Premium P2 edition active in your tenant. After general availability, additional licenses might be required.
+ > [!NOTE]
-> Policy can be applied to single tenant service principals that have been registered in your tenant.
-> Third party SaaS and multi-tenanted apps are out of scope.
-> Managed identities are not covered by policy.
+> Policy can be applied to single tenant service principals that have been registered in your tenant. Third party SaaS and multi-tenanted apps are out of scope. Managed identities are not covered by policy.
This preview enables blocking service principals from outside of trusted IP ranges, such as your corporate network's public IP ranges.
active-directory App Resilience Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/app-resilience-continuous-access-evaluation.md
# How to use Continuous Access Evaluation enabled APIs in your applications
-[Continuous Access Evaluation](../conditional-access/concept-continuous-access-evaluation.md) (CAE) is an Azure AD feature that allows access tokens to be revoked based on [critical events](../conditional-access/concept-continuous-access-evaluation.md#critical-event-evaluation) and [policy evaluation](../conditional-access/concept-continuous-access-evaluation.md#conditional-access-policy-evaluation-preview) rather than relying on token expiry based on lifetime. For some resource APIs, because risk and policy are evaluated in real time, this can increase token lifetime up to 28 hours. These long-lived tokens will be proactively refreshed by the Microsoft Authentication Library (MSAL), increasing the resiliency of your applications.
+[Continuous Access Evaluation](../conditional-access/concept-continuous-access-evaluation.md) (CAE) is an Azure AD feature that allows access tokens to be revoked based on [critical events](../conditional-access/concept-continuous-access-evaluation.md#critical-event-evaluation) and [policy evaluation](../conditional-access/concept-continuous-access-evaluation.md#conditional-access-policy-evaluation) rather than relying on token expiry based on lifetime. For some resource APIs, because risk and policy are evaluated in real time, this can increase token lifetime up to 28 hours. These long-lived tokens will be proactively refreshed by the Microsoft Authentication Library (MSAL), increasing the resiliency of your applications.
This article shows you how to use CAE-enabled APIs in your applications. Applications not using MSAL can add support for [claims challenges, claims requests, and client capabilities](claims-challenge.md) to use CAE.
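As a rough, non-authoritative sketch of declaring client capabilities without MSAL, a token request can carry a `claims` parameter that advertises CAE support (`cp1`). The client ID, secret, and tenant values below are placeholders, and the PowerShell call stands in for whatever HTTP stack your application actually uses.

```powershell
# Advertise the CAE client capability (cp1) through the claims request parameter.
$claims = '{"access_token":{"xms_cc":{"values":["cp1"]}}}'

$body = @{
    client_id     = "00000000-0000-0000-0000-000000000000"   # placeholder
    client_secret = "<client-secret>"                        # placeholder
    scope         = "https://graph.microsoft.com/.default"
    grant_type    = "client_credentials"
    claims        = $claims
}

# Placeholder tenant ID; the response contains an access token that CAE-enabled resources can revoke.
Invoke-RestMethod -Method Post -Body $body `
    -Uri "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token"
```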
active-directory V2 Oauth2 Client Creds Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-client-creds-grant-flow.md
A successful response from any method looks like this:
### Error response
-An error response looks like this:
+An error response (400 Bad Request) looks like this:
```json
{
active-directory Active Directory Deployment Checklist P2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-deployment-checklist-p2.md
In this phase, administrators enable baseline security features to create a more
| Task | Detail | Required license |
| - | - | - |
-| [Create more than one global administrator](../roles/security-emergency-access.md) | Assign at least two cloud-only permanent global administrator accounts for use in an emergency. These accounts aren't be used daily and should have long and complex passwords. | Azure AD Free |
+| [Create more than one global administrator](../roles/security-emergency-access.md) | Assign at least two cloud-only permanent global administrator accounts for use in an emergency. These accounts aren't to be used daily and should have long and complex passwords. | Azure AD Free |
| [Use non-global administrative roles where possible](../roles/permissions-reference.md) | Give your administrators only the access they need to only the areas they need access to. Not all administrators need to be global administrators. | Azure AD Free |
| [Enable Privileged Identity Management for tracking admin role use](../privileged-identity-management/pim-getting-started.md) | Enable Privileged Identity Management to start tracking administrative role usage. | Azure AD Premium P2 |
| [Roll out self-service password reset](../authentication/howto-sspr-deployment.md) | Reduce helpdesk calls for password resets by allowing staff to reset their own passwords using policies you as an administrator control. | Azure AD Premium P1 |
Phase 4 sees administrators enforcing least privilege principles for administrat
[Identity and device access configurations](/microsoft-365/enterprise/microsoft-365-policies-configurations)
-[Common recommended identity and device access policies](/microsoft-365/enterprise/identity-access-policies)
+[Common recommended identity and device access policies](/microsoft-365/enterprise/identity-access-policies)
active-directory Resilience Client App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/resilience-client-app.md
Broker authentication is automatically supported by MSAL. You can find more info
## Adopt Continuous Access Evaluation
-[Continuous Access Evaluation (CAE)](../conditional-access/concept-continuous-access-evaluation.md) is a recent development that can increase application security and resilience with long-lived tokens. CAE is an emerging industry standard being developed in the Shared Signals and Events Working Group of the OpenID Foundation. With CAE, an access token can be revoked based on [critical events](../conditional-access/concept-continuous-access-evaluation.md#critical-event-evaluation) and [policy evaluation](../conditional-access/concept-continuous-access-evaluation.md#conditional-access-policy-evaluation-preview), rather than relying on a short token lifetime. For some resource APIs, because risk and policy are evaluated in real time, CAE can substantially increase token lifetime up to 28 hours. As resource APIs and applications adopt CAE, Microsoft Identity will be able to issue access tokens that are revocable and are valid for extended periods of time. These long-lived tokens will be proactively refreshed by MSAL.
+[Continuous Access Evaluation (CAE)](../conditional-access/concept-continuous-access-evaluation.md) is a recent development that can increase application security and resilience with long-lived tokens. CAE is an emerging industry standard being developed in the Shared Signals and Events Working Group of the OpenID Foundation. With CAE, an access token can be revoked based on [critical events](../conditional-access/concept-continuous-access-evaluation.md#critical-event-evaluation) and [policy evaluation](../conditional-access/concept-continuous-access-evaluation.md#conditional-access-policy-evaluation), rather than relying on a short token lifetime. For some resource APIs, because risk and policy are evaluated in real time, CAE can substantially increase token lifetime up to 28 hours. As resource APIs and applications adopt CAE, Microsoft Identity will be able to issue access tokens that are revocable and are valid for extended periods of time. These long-lived tokens will be proactively refreshed by MSAL.
While CAE is in early phases, it is possible to [develop client applications today that will benefit from CAE](../develop/app-resilience-continuous-access-evaluation.md) when the resources (APIs) the application uses adopt CAE. As more resources adopt CAE, your application will be able to acquire CAE enabled tokens for those resources as well. The Microsoft Graph API, and [Microsoft Graph SDKs](/graph/sdks/sdks-overview), will preview CAE capability early 2021. If you would like to participate in the public preview of Microsoft Graph with CAE, you can let us know you are interested here: [https://aka.ms/GraphCAEPreview](https://aka.ms/GraphCAEPreview).
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-version-history.md
To read more about auto-upgrade, see [Azure AD Connect: Automatic upgrade](how-t
### Bug fixes
-We fixed a bug in version 2.0.88.0 where, under certain conditions, linked mailboxes of disabled users and mailboxes of certain resource objects, were getting deleted.
+- We fixed a bug in version 2.0.88.0 where, under certain conditions, linked mailboxes of disabled users and mailboxes of certain resource objects, were getting deleted.
+- We fixed an issue which causes upgrade to Azure AD Connect version 2.x to fail, when using SQL localdb along with a VSA service account for ADSync.
## 2.0.88.0
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-aad-integration.md
Integrating F5 BIG-IP with Azure AD for SHA has the following prerequisites:
No previous experience or F5 BIG-IP knowledge is necessary to implement SHA, but we do recommend familiarizing yourself with F5 BIG-IP terminology. F5's rich [knowledge base](https://www.f5.com/services/resources/glossary) is also a good place to start building BIG-IP knowledge.
-## Deployment scenarios
+## Configuration scenarios
Configuring a BIG-IP for SHA is achieved using any of the many available methods, including several template based options, or a manual configuration. The following tutorials provide detailed guidance on implementing some of the more common patterns for BIG-IP and Azure AD SHA, using these methods.
active-directory F5 Big Ip Forms Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-forms-advanced.md
Prior BIG-IP experience is not necessary, but you'll need:
- An existing form-based authentication application, or [set up an IIS FBA app](/troubleshoot/aspnet/forms-based-authentication) for testing.
-## BIG-IP deployment methods
+## BIG-IP configuration methods
There are many methods to configure BIG-IP for this scenario, including a template-driven guided configuration. This article covers the advanced approach, which provides a more flexible way of implementing SHA by manually creating all BIG-IP configuration objects. You would also use this approach for more complex scenarios that the guided configuration templates don't cover.
active-directory F5 Big Ip Header Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-header-advanced.md
Configuring BIG-IP published applications with Azure AD provides many benefits,
- Full Single sign-on (SSO) between Azure AD and BIG-IP published services.
-- Manage identities and access from a single control plane - The [Azure portal](https://azure.microsoft.com/features/azure-portal)
+- Manage identities and access from a single control plane, the [Azure portal](https://azure.microsoft.com/features/azure-portal)
To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
for publishing services over HTTPS or use default certificates while testing
- An existing header-based application or [setup a simple IIS header app](/previous-versions/iis/6.0-sdk/ms525396(v=vs.90)) for testing
-## Deployment modes
+## BIG-IP configuration methods
-Several methods exist for configuring a BIG-IP for this scenario,
-including two wizard-based options or an advanced configuration.
-
-This tutorial covers the advanced approach, which provides a more flexible way of implementing secure hybrid access by manually creating all BIG-IP configuration objects. You would also use this approach for scenarios not covered by the Guided configuration.
+There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This article covers the advanced approach, which provides a more flexible way of implementing SHA by manually creating all BIG-IP configuration objects. You would also use this approach for scenarios that the guided configuration templates don't cover.
>[!NOTE]
->All example strings or values referenced throughout this article
-should be replaced with those for your actual environment.
+> All example strings or values in this article should be replaced with those for your actual environment.
## Adding F5 BIG-IP from the Azure AD gallery
-Setting up a SAML federation trust between BIG-IP APM and Azure AD is one of the first step in implementing secure hybrid access. It establishes the integration required for BIG-IP to hand off pre-authentication and [conditional
+Setting up a SAML federation trust between BIG-IP APM and Azure AD is one of the first steps in implementing SHA. It establishes the integration required for BIG-IP to hand off pre-authentication and [conditional
access](../conditional-access/overview.md) to Azure AD, before granting access to the published service.

1. Sign in to the Azure AD portal using an account with application administrative rights.
If making a change to the app is a no go then consider having the BIG-IP listen
This last step provides a breakdown of all applied settings before they are committed. Select **Deploy** to commit all settings and verify that the application has appeared in your tenant.
-Your application is now published and accessible via Secure Hybrid Access, either directly via its URL or through Microsoft's application portals.
+Your application is now published and accessible via SHA, either directly via its URL or through Microsoft's application portals.
## Next steps
For increased security, organizations using this pattern could also consider blo
## Troubleshooting
-Failure to access the secure hybrid access protected application could be down to any number of potential factors, including a
+Failure to access the SHA protected application could be down to any number of potential factors, including a
misconfiguration.

- BIG-IP logs are a great source of information for isolating all sorts of authentication & SSO issues. When troubleshooting you should increase the log verbosity level by heading to **Access Policy** > **Overview** > **Event Logs** > **Settings**. Select the row for your published application then **Edit** > **Access System Logs**. Select **Debug**
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md
Prior BIG-IP experience isn't necessary, but you'll need:
* An existing header-based application or [setup a simple IIS header app](/previous-versions/iis/6.0-sdk/ms525396(v=vs.90)) for testing
-## Big-IP deployment methods
+## Big-IP configuration methods
There are many methods to deploy BIG-IP for this scenario including a template-driven Guided Configuration, or an advanced configuration. This tutorial covers the Easy Button templates offered by the Guided Configuration 16.1 and upwards.
active-directory F5 Big Ip Kerberos Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-kerberos-advanced.md
Prior BIG-IP experience isn't necessary, but you will need:
* An existing Kerberos application, or [set up an Internet Information Services (IIS) app](https://active-directory-wp.com/docs/Networking/Single_Sign_On/SSO_with_IIS_on_Windows.html) for KCD SSO.
-## Configuration methods
+## BIG-IP configuration methods
There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This article covers the advanced approach, which provides a more flexible way of implementing SHA by manually creating all BIG-IP configuration objects. You would also use this approach for scenarios that the guided configuration templates don't cover.
active-directory F5 Big Ip Kerberos Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md
Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefi
* Improved zero-trust governance through Azure AD pre-authentication and authorization
-* End-to-end SSO between Azure AD and BIG-IP published services
+* Full SSO between Azure AD and BIG-IP published services
-* Manage identities and access from a single control plane - [The Azure portal](https://portal.azure.com/)
+* Manage identities and access from a single control plane, [The Azure portal](https://portal.azure.com/)
To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
Prior BIG-IP experience isn't necessary, but you will need:
* An existing Kerberos application or [setup an IIS (Internet Information Services) app](https://active-directory-wp.com/docs/Networking/Single_Sign_On/SSO_with_IIS_on_Windows.html) for KCD SSO
-## BIG-IP deployment methods
+## BIG-IP configuration methods
There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the latest Guided Configuration 16.1, which offers an Easy Button template.
There may be cases where the Guided Configuration templates lack the flexibility
For those scenarios, go ahead and deploy using the Guided Configuration. Then navigate to **Access > Guided Configuration** and select the small padlock icon on the far right of the row for your applications' configs. At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+For more information, see [Advanced Configuration for Kerberos-based SSO](./f5-big-ip-kerberos-advanced.md).
+
>[!NOTE]
>Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI; therefore, we recommend the manual approach for production services.
active-directory F5 Big Ip Ldap Header Easybutton https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md
Prior BIG-IP experience isn't necessary, but you'll need:
- A user directory that supports LDAP, including Windows Active Directory Lightweight Directory Services (AD LDS), OpenLDAP etc.
-## BIG-IP deployment methods
+## BIG-IP configuration methods
There are many methods to deploy BIG-IP for this scenario including a template-driven Guided Configuration wizard, or the manual advanced configuration. This tutorial covers the latest Guided Configuration 16.1 offering an Easy Button template.
aks Azure Disk Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-disk-customer-managed-keys.md
Title: Use a customer-managed key to encrypt Azure disks in Azure Kubernetes Ser
description: Bring your own keys (BYOK) to encrypt AKS OS and Data disks. Previously updated : 09/01/2020 Last updated : 1/9/2022
Create a file called **byok-azure-disk.yaml** that contains the following inform
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
- name: hdd
-provisioner: kubernetes.io/azure-disk
+ name: byok
+provisioner: disk.csi.azure.com # replace with "kubernetes.io/azure-disk" if aks version is less than 1.21
parameters:
- skuname: Standard_LRS
+ skuname: StandardSSD_LRS
  kind: managed
  diskEncryptionSetID: "/subscriptions/{myAzureSubscriptionId}/resourceGroups/{myResourceGroup}/providers/Microsoft.Compute/diskEncryptionSets/{myDiskEncryptionSetName}"
```
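A brief usage note, shown as a sketch that assumes `kubectl` is already pointed at your AKS cluster:

```powershell
# Create the storage class from the manifest above, then confirm it exists.
kubectl apply -f byok-azure-disk.yaml
kubectl get storageclass byok
```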
aks Azure Files Dynamic Pv https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-files-dynamic-pv.md
A storage class is used to define how an Azure file share is created. A storage
* *Premium_ZRS* - premium zone redundant storage (ZRS)

> [!NOTE]
-> Azure Files support premium storage in AKS clusters that run Kubernetes 1.13 or higher, minimum premium file share is 100GB
+> The minimum premium file share size is 100 GB.
For more information on Kubernetes storage classes for Azure Files, see [Kubernetes Storage Classes][kubernetes-storage-classes].
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-azurefile
-provisioner: kubernetes.io/azure-file
+provisioner: file.csi.azure.com # replace with "kubernetes.io/azure-file" if aks version is less than 1.21
mountOptions:
  - dir_mode=0777
  - file_mode=0777
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-azurefile
-provisioner: kubernetes.io/azure-file
+provisioner: file.csi.azure.com # replace with "kubernetes.io/azure-file" if aks version is less than 1.21
mountOptions:
  - dir_mode=0777
  - file_mode=0777
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/certificate-rotation.md
Title: Rotate certificates in Azure Kubernetes Service (AKS)
description: Learn how to rotate your certificates in an Azure Kubernetes Service (AKS) cluster. Previously updated : 11/03/2021 Last updated : 1/9/2022

# Rotate certificates in Azure Kubernetes Service (AKS)
AKS generates and uses the following certificates, Certificate Authorities, and
* The `kubectl` client has a certificate for communicating with the AKS cluster.

> [!NOTE]
-> AKS clusters created prior to May 2019 have certificates that expire after two years. Any cluster created after May 2019 or any cluster that has its certificates rotated have Cluster CA certificates that expire after 30 years. All other AKS certificates, which use the Cluster CA to for signing, will expire after two years and are automatically rotated during AKS version upgrade. To verify when your cluster was created, use `kubectl get nodes` to see the *Age* of your node pools.
+> AKS clusters created prior to May 2019 have certificates that expire after two years. Any cluster created after May 2019 or any cluster that has its certificates rotated has Cluster CA certificates that expire after 30 years. All other AKS certificates, which use the Cluster CA for signing, will expire after two years and are automatically rotated during any AKS version upgrade performed after 8/1/2021. To verify when your cluster was created, use `kubectl get nodes` to see the *Age* of your node pools.
>
> Additionally, you can check the expiration date of your cluster's certificate. For example, the following bash command displays the client certificate details for the *myAKSCluster* cluster in resource group *rg*
> ```console
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/cluster-configuration.md
az group create --name myResourceGroup --location eastus
az aks create -n aks -g myResourceGroup --enable-oidc-issuer
```
-### Upgrade an AKS cluster with OIDC Issuer
+### Update an AKS cluster with OIDC Issuer
-To upgrade a cluster to use OIDC Issuer.
+To update a cluster to use OIDC Issuer, run the following command:
```azurecli-interactive
az aks update -n aks -g myResourceGroup --enable-oidc-issuer
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/intro-kubernetes.md
You can create an AKS cluster using:
* [The Azure CLI](kubernetes-walkthrough.md)
* [The Azure portal](kubernetes-walkthrough-portal.md)
* [Azure PowerShell](kubernetes-walkthrough-powershell.md)
-* Using template-driven deployment options, like [Azure Resource Manager templates](kubernetes-walkthrough-rm-template.md) and Terraform
+* Using template-driven deployment options, like [Azure Resource Manager templates](kubernetes-walkthrough-rm-template.md), [Bicep](../azure-resource-manager/bicep/overview.md) and Terraform
When you deploy an AKS cluster, the Kubernetes master and all nodes are deployed and configured for you. Advanced networking, Azure Active Directory (Azure AD) integration, monitoring, and other features can be configured during the deployment process.
Learn more about deploying and managing AKS with the Azure CLI Quickstart.
[kubernetes-rbac]: concepts-identity.md#kubernetes-rbac
[concepts-identity]: concepts-identity.md
[concepts-storage]: concepts-storage.md
-[conf-com-node]: ../confidential-computing/confidential-nodes-aks-overview.md
+[conf-com-node]: ../confidential-computing/confidential-nodes-aks-overview.md
aks Use Ultra Disks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-ultra-disks.md
Title: Enable Ultra Disk support on Azure Kubernetes Service (AKS)
description: Learn how to enable and configure Ultra Disks in an Azure Kubernetes Service (AKS) cluster Previously updated : 10/12/2021 Last updated : 1/9/2022
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ultra-disk-sc
-provisioner: kubernetes.io/azure-disk
+provisioner: disk.csi.azure.com # replace with "kubernetes.io/azure-disk" if aks version is less than 1.21
volumeBindingMode: WaitForFirstConsumer # optional, but recommended if you want to wait until the pod that will use this disk is created
parameters:
  skuname: UltraSSD_LRS
  kind: managed
- cachingmode: None
+ cachingMode: None
  diskIopsReadWrite: "2000" # minimum value: 2 IOPS/GiB
  diskMbpsReadWrite: "320" # minimum value: 0.032/GiB
```
api-management Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/virtual-network-reference.md
When an API Management service instance is hosted in a VNet, the ports in the fo
| Source / Destination Port(s) | Direction | Transport protocol | Service tags <br> Source / Destination | Purpose | VNet type |
|--|--|--|--|--|--|
-| * / [80], 443 | Inbound | TCP | INTERNET / VIRTUAL_NETWORK | **Client communication to API Management** | External only |
-| * / 3443 | Inbound | TCP | ApiManagement / VIRTUAL_NETWORK | **Management endpoint for Azure portal and PowerShell** | External & Internal |
-| * / 443 | Outbound | TCP | VIRTUAL_NETWORK / Storage | **Dependency on Azure Storage** | External & Internal |
-| * / 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureActiveDirectory | [Azure Active Directory](api-management-howto-aad.md) and Azure Key Vault dependency (optional) | External & Internal |
-| * / 1433 | Outbound | TCP | VIRTUAL_NETWORK / SQL | **Access to Azure SQL endpoints** | External & Internal |
-| * / 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureKeyVault | **Access to Azure Key Vault** | External & Internal |
-| * / 5671, 5672, 443 | Outbound | TCP | VIRTUAL_NETWORK / Event Hub | Dependency for [Log to Event Hub policy](api-management-howto-log-event-hubs.md) and monitoring agent (optional) | External & Internal |
-| * / 445 | Outbound | TCP | VIRTUAL_NETWORK / Storage | Dependency on Azure File Share for [GIT](api-management-configuration-repository-git.md) (optional) | External & Internal |
-| * / 443, 12000 | Outbound | TCP | VIRTUAL_NETWORK / AzureCloud | Health and Monitoring Extension (optional) | External & Internal |
-| * / 1886, 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureMonitor | Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md), and [Application Insights](api-management-howto-app-insights.md) (optional) | External & Internal |
-| * / 25, 587, 25028 | Outbound | TCP | VIRTUAL_NETWORK / INTERNET | Connect to SMTP Relay for sending e-mail (optional) | External & Internal |
-| * / 6381 - 6383 | Inbound & Outbound | TCP | VIRTUAL_NETWORK / VIRTUAL_NETWORK | Access Redis Service for [Cache](api-management-caching-policies.md) policies between machines (optional) | External & Internal |
-| * / 4290 | Inbound & Outbound | UDP | VIRTUAL_NETWORK / VIRTUAL_NETWORK | Sync Counters for [Rate Limit](api-management-access-restriction-policies.md#LimitCallRateByKey) policies between machines (optional) | External & Internal |
-| * / 6390 | Inbound | TCP | AZURE_LOAD_BALANCER / VIRTUAL_NETWORK | **Azure Infrastructure Load Balancer** | External & Internal |
+| * / [80], 443 | Inbound | TCP | Internet / VirtualNetwork | **Client communication to API Management** | External only |
+| * / 3443 | Inbound | TCP | ApiManagement / VirtualNetwork | **Management endpoint for Azure portal and PowerShell** | External & Internal |
+| * / 443 | Outbound | TCP | VirtualNetwork / Storage | **Dependency on Azure Storage** | External & Internal |
+| * / 443 | Outbound | TCP | VirtualNetwork / AzureActiveDirectory | [Azure Active Directory](api-management-howto-aad.md) and Azure Key Vault dependency (optional) | External & Internal |
+| * / 1433 | Outbound | TCP | VirtualNetwork / SQL | **Access to Azure SQL endpoints** | External & Internal |
+| * / 443 | Outbound | TCP | VirtualNetwork / AzureKeyVault | **Access to Azure Key Vault** | External & Internal |
+| * / 5671, 5672, 443 | Outbound | TCP | VirtualNetwork / Azure Event Hubs | Dependency for [Log to Azure Event Hubs policy](api-management-howto-log-event-hubs.md) and monitoring agent (optional) | External & Internal |
+| * / 445 | Outbound | TCP | VirtualNetwork / Storage | Dependency on Azure File Share for [GIT](api-management-configuration-repository-git.md) (optional) | External & Internal |
+| * / 443, 12000 | Outbound | TCP | VirtualNetwork / AzureCloud | Health and Monitoring Extension (optional) | External & Internal |
+| * / 1886, 443 | Outbound | TCP | VirtualNetwork / AzureMonitor | Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md), and [Application Insights](api-management-howto-app-insights.md) (optional) | External & Internal |
+| * / 25, 587, 25028 | Outbound | TCP | VirtualNetwork / Internet | Connect to SMTP Relay for sending e-mail (optional) | External & Internal |
+| * / 6381 - 6383 | Inbound & Outbound | TCP | VirtualNetwork / VirtualNetwork | Access Redis Service for [Cache](api-management-caching-policies.md) policies between machines (optional) | External & Internal |
+| * / 4290 | Inbound & Outbound | UDP | VirtualNetwork / VirtualNetwork | Sync Counters for [Rate Limit](api-management-access-restriction-policies.md#LimitCallRateByKey) policies between machines (optional) | External & Internal |
+| * / 6390 | Inbound | TCP | AzureLoadBalancer / VirtualNetwork | **Azure Infrastructure Load Balancer** | External & Internal |
### [stv1](#tab/stv1)

| Source / Destination Port(s) | Direction | Transport protocol | [Service Tags](../virtual-network/network-security-groups-overview.md#service-tags) <br> Source / Destination | Purpose | VNet type |
|--|--|--|--|--|--|
-| * / [80], 443 | Inbound | TCP | INTERNET / VIRTUAL_NETWORK | **Client communication to API Management** | External only |
-| * / 3443 | Inbound | TCP | ApiManagement / VIRTUAL_NETWORK | **Management endpoint for Azure portal and PowerShell** | External & Internal |
-| * / 443 | Outbound | TCP | VIRTUAL_NETWORK / Storage | **Dependency on Azure Storage** | External & Internal |
-| * / 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureActiveDirectory | [Azure Active Directory](api-management-howto-aad.md) dependency (optional) | External & Internal |
-| * / 1433 | Outbound | TCP | VIRTUAL_NETWORK / SQL | **Access to Azure SQL endpoints** | External & Internal |
-| * / 5671, 5672, 443 | Outbound | TCP | VIRTUAL_NETWORK / Event Hub | Dependency for [Log to Event Hub policy](api-management-howto-log-event-hubs.md) and monitoring agent (optional)| External & Internal |
-| * / 445 | Outbound | TCP | VIRTUAL_NETWORK / Storage | Dependency on Azure File Share for [GIT](api-management-configuration-repository-git.md) (optional) | External & Internal |
-| * / 443, 12000 | Outbound | TCP | VIRTUAL_NETWORK / AzureCloud | Health and Monitoring Extension & Dependency on Event Grid (if events notification activated) (optional) | External & Internal |
-| * / 1886, 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureMonitor | Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md), and [Application Insights](api-management-howto-app-insights.md) (optional) | External & Internal |
-| * / 25, 587, 25028 | Outbound | TCP | VIRTUAL_NETWORK / INTERNET | Connect to SMTP Relay for sending e-mail (optional) | External & Internal |
-| * / 6381 - 6383 | Inbound & Outbound | TCP | VIRTUAL_NETWORK / VIRTUAL_NETWORK | Access Redis Service for [Cache](api-management-caching-policies.md) policies between machines (optional) | External & Internal |
-| * / 4290 | Inbound & Outbound | UDP | VIRTUAL_NETWORK / VIRTUAL_NETWORK | Sync Counters for [Rate Limit](api-management-access-restriction-policies.md#LimitCallRateByKey) policies between machines (optional) | External & Internal |
-| * / * | Inbound | TCP | AZURE_LOAD_BALANCER / VIRTUAL_NETWORK | **Azure Infrastructure Load Balancer** (required for Premium SKU, optional for other SKUs) | External & Internal |
+| * / [80], 443 | Inbound | TCP | Internet / VirtualNetwork | **Client communication to API Management** | External only |
+| * / 3443 | Inbound | TCP | ApiManagement / VirtualNetwork | **Management endpoint for Azure portal and PowerShell** | External & Internal |
+| * / 443 | Outbound | TCP | VirtualNetwork / Storage | **Dependency on Azure Storage** | External & Internal |
+| * / 443 | Outbound | TCP | VirtualNetwork / AzureActiveDirectory | [Azure Active Directory](api-management-howto-aad.md) dependency (optional) | External & Internal |
+| * / 1433 | Outbound | TCP | VirtualNetwork / SQL | **Access to Azure SQL endpoints** | External & Internal |
+| * / 5671, 5672, 443 | Outbound | TCP | VirtualNetwork / Azure Event Hubs | Dependency for [Log to Azure Event Hubs policy](api-management-howto-log-event-hubs.md) and monitoring agent (optional)| External & Internal |
+| * / 445 | Outbound | TCP | VirtualNetwork / Storage | Dependency on Azure File Share for [GIT](api-management-configuration-repository-git.md) (optional) | External & Internal |
+| * / 443, 12000 | Outbound | TCP | VirtualNetwork / AzureCloud | Health and Monitoring Extension & Dependency on Event Grid (if events notification activated) (optional) | External & Internal |
+| * / 1886, 443 | Outbound | TCP | VirtualNetwork / AzureMonitor | Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md), and [Application Insights](api-management-howto-app-insights.md) (optional) | External & Internal |
+| * / 25, 587, 25028 | Outbound | TCP | VirtualNetwork / Internet | Connect to SMTP Relay for sending e-mail (optional) | External & Internal |
+| * / 6381 - 6383 | Inbound & Outbound | TCP | VirtualNetwork / VirtualNetwork | Access Redis Service for [Cache](api-management-caching-policies.md) policies between machines (optional) | External & Internal |
+| * / 4290 | Inbound & Outbound | UDP | VirtualNetwork / VirtualNetwork | Sync Counters for [Rate Limit](api-management-access-restriction-policies.md#LimitCallRateByKey) policies between machines (optional) | External & Internal |
+| * / * | Inbound | TCP | AzureLoadBalancer / VirtualNetwork | **Azure Infrastructure Load Balancer** (required for Premium SKU, optional for other SKUs) | External & Internal |
## Regional service tags
-NSG rules allowing outbound connectivity to Storage, SQL, and Event Hubs service tags may use the regional versions of those tags corresponding to the region containing the API Management instance (for example, **Storage.WestUS** for an API Management instance in the West US region). In multi-region deployments, the NSG in each region should allow traffic to the service tags for that region and the primary region.
+NSG rules allowing outbound connectivity to Storage, SQL, and Azure Event Hubs service tags may use the regional versions of those tags corresponding to the region containing the API Management instance (for example, **Storage.WestUS** for an API Management instance in the West US region). In multi-region deployments, the NSG in each region should allow traffic to the service tags for that region and the primary region.
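As an illustration of a regional tag in practice, the following Azure CLI sketch creates an outbound rule scoped to **Storage.WestUS**; the NSG name, resource group, and priority are placeholders:

```azurecli-interactive
# Allow outbound traffic from the API Management subnet to Azure Storage in West US only.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name apim-subnet-nsg \
  --name AllowStorageWestUSOutbound \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes Storage.WestUS \
  --destination-port-ranges 443
```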
## TLS functionality

To enable TLS/SSL certificate chain building and validation, the API Management service needs outbound network connectivity to `ocsp.msocsp.com`, `mscrl.microsoft.com`, and `crl.microsoft.com`. This dependency is not required if any certificate you upload to API Management contains the full chain to the CA root.
Enable publishing the [developer portal](api-management-howto-developer-portal.m
When using the API Management diagnostics extension from inside a VNet, outbound access to `dc.services.visualstudio.com` on port `443` is required to enable the flow of diagnostic logs from Azure portal. This access helps in troubleshooting issues you might face when using the extension.

## Azure load balancer
- You're not required to allow inbound requests from service tag `AZURE_LOAD_BALANCER` for the Developer SKU, since only one compute unit is deployed behind it. However, inbound connectivity from `AZURE_LOAD_BALANCER` becomes **critical** when scaling to a higher SKU, such as Premium, because failure of the health probe from load balancer then blocks all inbound access to the control plane and data plane.
+ You're not required to allow inbound requests from service tag `AzureLoadBalancer` for the Developer SKU, since only one compute unit is deployed behind it. However, inbound connectivity from `AzureLoadBalancer` becomes **critical** when scaling to a higher SKU, such as Premium, because failure of the health probe from load balancer then blocks all inbound access to the control plane and data plane.
## Application Insights

If you enabled [Azure Application Insights](api-management-howto-app-insights.md) monitoring on API Management, allow outbound connectivity to the [telemetry endpoint](../azure-monitor/app/ip-addresses.md#outgoing-ports) from the VNet.
When adding virtual machines running Windows to the VNet, allow outbound connect
Commonly, you configure and define your own default route (0.0.0.0/0), forcing all traffic from the API Management subnet to flow through an on-premises firewall or to a network virtual appliance. This traffic flow breaks connectivity with Azure API Management, since outbound traffic is either blocked on-premises, or NAT'd to an unrecognizable set of addresses no longer working with various Azure endpoints. You can solve this issue via one of the following methods: * Enable [service endpoints][ServiceEndpoints] on the subnet in which the API Management service is deployed for:
- * Azure SQL
+ * Azure SQL (required only in the primary region if the API Management service is deployed to [multiple regions](api-management-howto-deploy-multi-region.md))
* Azure Storage
- * Azure Event Hub
+ * Azure Event Hubs
 * Azure Key Vault (required when API Management is deployed on the v2 platform)

By enabling endpoints directly from the API Management subnet to these services, you can use the Microsoft Azure backbone network, providing optimal routing for service traffic. If you use service endpoints with a force-tunneled API Management instance, traffic to the above Azure services isn't force tunneled. The remaining API Management service dependency traffic is force tunneled and must not be lost, or the API Management service would not function properly.
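The following Azure CLI sketch enables those service endpoints on the API Management subnet; the VNet and subnet names are placeholders:

```azurecli-interactive
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name apim-vnet \
  --name apim-subnet \
  --service-endpoints Microsoft.Sql Microsoft.Storage Microsoft.EventHub Microsoft.KeyVault
```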
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/overview.md
There are a few features that are not available in ASEv3 that were available in earlier versions of App Service Environment:
- monitor your traffic with Network Watcher or NSG Flow
- configure an IP-based TLS/SSL binding with your apps
- configure custom domain suffix
+- backup/restore operation on a storage account behind a firewall
## Pricing
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of Azure Key Vault as a secret store with a Kubernetes cluster via a CSI volume.
### Support limitations for Azure Key Vault (AKV) secrets provider extension

- The following Kubernetes distributions are currently supported:
  - Cluster API Azure
+ - Azure Kubernetes Service on Azure Stack HCI (AKS-HCI)
  - Google Kubernetes Engine
  - OpenShift Kubernetes Distribution
  - Canonical Kubernetes Distribution
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-overview.md
URLs:
|`*.his.arc.azure.com`|Metadata and hybrid identity services| |`*.blob.core.windows.net`|Download source for Azure Arc-enabled servers extensions| |`dc.services.visualstudio.com`|Agent telemetry|
+|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`|Notification service|
For a list of IP addresses for each service tag/region, see the JSON file - [Azure IP Ranges and Service Tags – Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). Microsoft publishes weekly updates containing each Azure Service and the IP ranges it uses. This information in the JSON file is the current point-in-time list of the IP ranges that correspond to each service tag. The IP addresses are subject to change. If IP address ranges are required for your firewall configuration, then the **AzureCloud** Service Tag should be used to allow access to all Azure services. Do not disable security monitoring or inspection of these URLs; allow them as you would other Internet traffic.
azure-arc Scenario Onboard Azure Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/scenario-onboard-azure-sentinel.md
Before you start, make sure that you've met the following requirements:
- Microsoft Sentinel [enabled in your subscription](../../sentinel/quickstart-onboard.md). -- You're machine or server is connected to Azure Arc-enabled servers.
+- Your machine or server is connected to Azure Arc-enabled servers.
## Onboard Azure Arc-enabled servers to Microsoft Sentinel
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
Last updated 11/10/2021
Before using the Azure Arc-enabled VMware vSphere features, you'll need to connect your VMware vCenter Server to Azure Arc. This quickstart shows you how to connect your VMware vCenter Server to Azure Arc using a helper script.
-First, the script deploys a lightweight Azure Arc appliance, called Azure Arc resource bridge (preview), as a virtual machine running in your vCenter environment. Then, it installs a VMware cluster extension to provide a continuous connection between your vCenter Server and Azure Arc.
+First, the script deploys a lightweight Azure Arc appliance, called [Azure Arc resource bridge](../resource-bridge/overview.md) (preview), as a virtual machine running in your vCenter environment. Then, it installs a VMware cluster extension to provide a continuous connection between your vCenter Server and Azure Arc.
## Prerequisites
sudo bash arcvmware-setup.sh --force
## Next steps -- [Browse and enable VMware vCenter resources in Azure](browse-and-enable-vcenter-resources-in-azure.md)
+- [Browse and enable VMware vCenter resources in Azure](browse-and-enable-vcenter-resources-in-azure.md)
azure-cache-for-redis Cache Best Practices Development https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices-development.md
This request/response is a difficult one to measure. You could instrument your client code to track large requests and responses.
Resolutions for large response sizes are varied but include: - Optimize your application for a large number of small values, rather than a few large values.
- - The preferred solution is to break up your data into related smaller values.
- - See the post [What is the ideal value size range for redis? Is 100 KB too large?](https://groups.google.com/forum/#!searchin/redis-db/size/redis-db/n7aa2A4DZDs/3OeEPHSQBAAJ) for details on why smaller values are recommended.
+ - The preferred solution is to break up your data into related smaller values.
+ - See the post [What is the ideal value size range for redis? Is 100 KB too large?](https://groups.google.com/forum/#!searchin/redis-db/size/redis-db/n7aa2A4DZDs/3OeEPHSQBAAJ) for details on why smaller values are recommended.
- Increase the size of your VM to get higher bandwidth capabilities
- - More bandwidth on your client or server VM may reduce data transfer times for larger responses.
- - Compare your current network usage on both machines to the limits of your current VM size. More bandwidth on only the server or only on the client may not be enough.
+ - More bandwidth on your client or server VM may reduce data transfer times for larger responses.
+ - Compare your current network usage on both machines to the limits of your current VM size. More bandwidth on only the server or only on the client may not be enough.
- Increase the number of connection objects your application uses.
- - Use a round-robin approach to make requests over different connection objects.
+ - Use a round-robin approach to make requests over different connection objects.
## Key distribution
Try to choose a Redis client that supports [Redis pipelining](https://redis.io/t
## Avoid expensive operations
-Some Redis operations, like the [KEYS](https://redis.io/commands/keys) command, are expensive and should be avoided. For some considerations around long running commands, see [long-running commands](cache-troubleshoot-server.md#long-running-commands)
+Some Redis operations, like the [KEYS](https://redis.io/commands/keys) command, are expensive and should be avoided. For some considerations around long running commands, see [long-running commands](cache-troubleshoot-timeouts.md#long-running-commands).
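For example, instead of enumerating the keyspace with `KEYS`, a client can iterate incrementally with `SCAN` from the Redis console; the `user:*` pattern below is only an illustration:

```console
SCAN 0 MATCH user:* COUNT 100
```

Each reply returns a new cursor and a batch of keys; repeat the call with the returned cursor until it comes back as `0`.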
## Choose an appropriate tier
azure-cache-for-redis Cache Troubleshoot Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-troubleshoot-client.md
Title: Troubleshoot Azure Cache for Redis client-side issues
-description: Learn how to resolve common client-side issues with Azure Cache for Redis such as Redis client memory pressure, traffic burst, high CPU, limited bandwidth, large requests or large response size.
+ Title: Troubleshoot Azure Cache for Redis client issues
+description: Learn how to resolve common client issues, such as client memory pressure, traffic burst, high CPU, limited bandwidth, large requests, or large response size, when using Azure Cache for Redis.
Previously updated : 10/18/2019 Last updated : 12/31/2021 # Troubleshoot Azure Cache for Redis client-side issues
This section discusses troubleshooting issues that occur because of a condition
- [Traffic burst](#traffic-burst) - [High client CPU usage](#high-client-cpu-usage) - [Client-side bandwidth limitation](#client-side-bandwidth-limitation)
-<!-- [Large request or response size](#large-request-or-response-size) -->
## Memory pressure on Redis client
-Memory pressure on the client machine leads to all kinds of performance problems that can delay processing of responses from the cache. When memory pressure hits, the system may page data to disk. This _page faulting_ causes the system to slow down significantly.
+Memory pressure on the client can lead to performance problems that can delay processing of responses from the cache. When memory pressure hits, the system might page data to disk. This _page faulting_ causes the system to slow down significantly.
To detect memory pressure on the client:
High memory pressure on the client can be mitigated several ways:
## Traffic burst
-Bursts of traffic combined with poor `ThreadPool` settings can result in delays in processing data already sent by the Redis Server but not yet consumed on the client side.
-
-Monitor how your `ThreadPool` statistics change over time using [an example `ThreadPoolLogger`](https://github.com/JonCole/SampleCode/blob/master/ThreadPoolMonitor/ThreadPoolLogger.cs). You can use `TimeoutException` messages from StackExchange.Redis like below to further investigate:
-
-```output
- System.TimeoutException: Timeout performing EVAL, inst: 8, mgr: Inactive, queue: 0, qu: 0, qs: 0, qc: 0, wr: 0, wq: 0, in: 64221, ar: 0,
- IOCP: (Busy=6,Free=999,Min=2,Max=1000), WORKER: (Busy=7,Free=8184,Min=2,Max=8191)
-```
-
-In the preceding exception, there are several issues that are interesting:
--- Notice that in the `IOCP` section and the `WORKER` section you have a `Busy` value that is greater than the `Min` value. This difference means your `ThreadPool` settings need adjusting.-- You can also see `in: 64221`. This value indicates that 64,211 bytes have been received at the client's kernel socket layer but haven't been read by the application. This difference typically means that your application (for example, StackExchange.Redis) isn't reading data from the network as quickly as the server is sending it to you.-
-You can [configure your `ThreadPool` Settings](cache-management-faq.yml#important-details-about-threadpool-growth) to make sure that your thread pool scales up quickly under burst scenarios.
+This section was moved. For more information, see [Traffic burst and thread pool configuration](cache-troubleshoot-timeouts.md#traffic-burst-and-thread-pool-configuration).
## High client CPU usage
-High client CPU usage indicates the system can't keep up with the work it's been asked to do. Even though the cache sent the response quickly, the client may fail to process the response in a timely fashion.
-
-Monitor the client's system-wide CPU usage using metrics available in the Azure portal or through performance counters on the machine. Be careful not to monitor *process* CPU because a single process can have low CPU usage but the system-wide CPU can be high. Watch for spikes in CPU usage that correspond with timeouts. High CPU may also cause high `in: XXX` values in `TimeoutException` error messages as described in the [Traffic burst](#traffic-burst) section.
-
-> [!NOTE]
-> StackExchange.Redis 1.1.603 and later includes the `local-cpu` metric in `TimeoutException` error messages. Ensure you using the latest version of the [StackExchange.Redis NuGet package](https://www.nuget.org/packages/StackExchange.Redis/). There are bugs constantly being fixed in the code to make it more robust to timeouts so having the latest version is important.
->
-
-To mitigate a client's high CPU usage:
--- Investigate what is causing CPU spikes.-- Upgrade your client to a larger VM size with more CPU capacity.
+This section was moved. For more information, see [High CPU on client hosts](cache-troubleshoot-timeouts.md#high-cpu-on-client-hosts).
## Client-side bandwidth limitation
-Depending on the architecture of client machines, they may have limitations on how much network bandwidth they have available. If the client exceeds the available bandwidth by overloading network capacity, then data isn't processed on the client side as quickly as the server is sending it. This situation can lead to timeouts.
-
-Monitor how your Bandwidth usage change over time using [an example `BandwidthLogger`](https://github.com/JonCole/SampleCode/blob/master/BandWidthMonitor/BandwidthLogger.cs). This code may not run successfully in some environments with restricted permissions (like Azure web sites).
+This section was moved. For more information, see [Network bandwidth limitation on client hosts](cache-troubleshoot-timeouts.md#network-bandwidth-limitation-on-client-hosts).
-To mitigate, reduce network bandwidth consumption or increase the client VM size to one with more network capacity.
-
-<!--
-## Large request or response Size
-
-A large request/response can cause timeouts. As an example, suppose your timeout value configured on your client is 1 second. Your application requests two keys (for example, 'A' and 'B') at the same time (using the same physical network connection). Most clients support request "pipelining", where both requests 'A' and 'B' are sent one after the other without waiting for their responses. The server sends the responses back in the same order. If response 'A' is large, it can eat up most of the timeout for later requests.
-
-In the following example, request 'A' and 'B' are sent quickly to the server. The server starts sending responses 'A' and 'B' quickly. Because of data transfer times, response 'B' must wait behind response 'A' times out even though the server responded quickly.
-
-```console
-|-- 1 Second Timeout (A)-|
-|-Request A-|
- |-- 1 Second Timeout (B) -|
- |-Request B-|
- |- Read Response A --|
- |- Read Response B-| (**TIMEOUT**)
-```
-
-This request/response is a difficult one to measure. You could instrument your client code to track large requests and responses.
-
-Resolutions for large response sizes are varied but include:
-
-1. Optimize your application for a large number of small values, rather than a few large values.
- - The preferred solution is to break up your data into related smaller values.
- - See the post [What is the ideal value size range for redis? Is 100 KB too large?](https://groups.google.com/forum/#!searchin/redis-db/size/redis-db/n7aa2A4DZDs/3OeEPHSQBAAJ) for details on why smaller values are recommended.
-1. Increase the size of your VM to get higher bandwidth capabilities
- - More bandwidth on your client or server VM may reduce data transfer times for larger responses.
- - Compare your current network usage on both machines to the limits of your current VM size. More bandwidth on only the server or only on the client may not be enough.
-1. Increase the number of connection objects your application uses.
- - Use a round-robin approach to make requests over different connection objects.
-
- -->
-
## High client connections
-Client connections reaching the maximum for the cache can cause failures in client requests for connections beyond the maximum, and can also cause high server CPU usage on the cache due to processing repeated reconnection attempts.
+When client connections reach the maximum for the cache, you can have failures in client requests for connections beyond the maximum. High client connections can also cause high server load when processing repeated reconnection attempts.
-High client connections may indicate a connection leak in client code. Connections may not be getting re-used or closed properly. Review client code for connection use.
+High client connections might indicate a connection leak in client code. Connections might not be getting reused or closed properly. Review client code for connection use.
-If the high connections are all legitimate and required client connections, upgrading your cache to a size with a higher connection limit may be required.
+If the high connections are all legitimate and required client connections, upgrading your cache to a size with a higher connection limit might be required. Check if the `Max aggregate for Connected Clients` metric is close or higher than the maximum number of allowed connections for a particular cache size. For more information on sizing per client connections, see [Azure Cache for Redis performance](cache-planning-faq.yml#azure-cache-for-redis-performance).
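Before scaling, a quick way to see how many connections the cache is holding and where they come from is the Redis console; this is only a sketch, and output varies by cache:

```console
INFO clients
CLIENT LIST
```

`connected_clients` in the `INFO clients` output shows the total, and `CLIENT LIST` helps spot hosts that hold an unexpectedly large number of connections.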
## Additional information -- [Troubleshoot Azure Cache for Redis server-side issues](cache-troubleshoot-server.md)
+These articles provide more information on troubleshooting and performance testing:
+
+- [Troubleshoot Azure Cache for Redis server issues](cache-troubleshoot-server.md)
+- [Troubleshoot Azure Cache for Redis latency and timeouts](cache-troubleshoot-timeouts.md)
- [How can I benchmark and test the performance of my cache?](cache-management-faq.yml#how-can-i-benchmark-and-test-the-performance-of-my-cache-)
azure-cache-for-redis Cache Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-troubleshoot-connectivity.md
+
+ Title: Troubleshoot connectivity in Azure Cache for Redis
+
+description: Learn how to resolve connectivity problems when creating clients with Azure Cache for Redis.
++++ Last updated : 12/31/2021+++
+# Connectivity troubleshooting
+
+In this article, we provide troubleshooting help for connecting your client application to Azure Cache for Redis. Connectivity issues are divided into two types: intermittent connectivity issues and continuous connectivity issues.
+
+- [Intermittent connectivity issues](#intermittent-connectivity-issues)
+ - [Server maintenance](#server-maintenance)
+ - [Number of connected clients](#number-of-connected-clients)
+ - [Kubernetes hosted applications](#kubernetes-hosted-applications)
+ - [Linux-based client application](#linux-based-client-application)
+- [Continuous connectivity issues](#continuous-connectivity)
+ - [Azure Cache for Redis CLI](#azure-cache-for-redis-cli)
+ - [PSPING](#psping)
+ - [Virtual network configuration](#virtual-network-configuration)
+ - [Private endpoint configuration](#private-endpoint-configuration)
+ - [Firewall rules](#third-party-firewall-or-external-proxy)
+
+## Intermittent connectivity issues
+
+Your client application might have intermittent connectivity issues caused by events such as patching, or spikes in the number of connections.
+
+### Server maintenance
+
+Sometimes, your cache undergoes planned or unplanned server maintenance. Your application can be negatively affected during the maintenance. You can validate this by checking the `Errors (Type: Failover)` metric in the Azure portal. To minimize the effects of failovers, see [Connection resilience](cache-best-practices-connection.md#connection-resilience).
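If you prefer the command line to the portal, the following Azure CLI sketch queries that metric; it assumes the metric is exposed as `errors` with an `ErrorType` dimension, and the resource ID is a placeholder:

```azurecli-interactive
az monitor metrics list \
  --resource /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Cache/Redis/mycache \
  --metric errors \
  --filter "ErrorType eq 'Failover'" \
  --interval PT1H
```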
+
+### Number of connected clients
+
+Check whether the Max aggregate for the `Connected Clients` metric is close to or higher than the maximum number of allowed connections for a particular cache size. For more information on sizing per client connections, see [Azure Cache for Redis performance](https://azure.microsoft.com/pricing/details/cache/).
+
+### Kubernetes hosted applications
+
+- If your client application is hosted on Kubernetes, check that the pod running the client application or the cluster nodes aren't under memory/CPU/Network pressure. A pod running the client application can be affected by other pods running on the same node and throttle Redis connections or IO operations.
+- If you're using *Istio* or any other service mesh, check whether your service mesh proxy reserves ports 13000-13019 or 15000-15019. These ports are used by clients to communicate with clustered Azure Cache for Redis nodes, so reserving them in the mesh could cause connectivity issues on those ports.
+
+### Linux-based client application
+
+Using optimistic TCP settings in Linux might cause client applications to experience connectivity issues. See [Connection stalls lasting for 15 minutes](https://github.com/StackExchange/StackExchange.Redis/issues/1848#issuecomment-913064646).
+
+## Continuous connectivity
+
+If your application can't maintain a continuous connection to your Azure Cache for Redis, it's possible some configuration on the cache isn't set up correctly. The following sections offer suggestions on how to make sure your cache is configured correctly.
+
+### Azure Cache for Redis CLI
+
+Test connectivity using the Azure Cache for Redis CLI. For more information on the CLI, see [Use the Redis command-line tool with Azure Cache for Redis](cache-how-to-redis-cli-tool.md).
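For example, a quick TLS connection test with a recent redis-cli (6.x or later, which supports `--tls`) might look like the following sketch; the host name and access key are placeholders:

```console
redis-cli -h mycache.redis.cache.windows.net -p 6380 -a <access-key> --tls PING
```

A reply of `PONG` confirms that the client can reach the cache over TLS.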
+
+### PSPING
+
+If Azure Cache for Redis CLI is unable to connect, you can test connectivity using `PSPING` in PowerShell.
+
+```azurepowershell-interactive
+psping -q <cache DNS endpoint>:<Port Number>
+```
+
+Confirm that the number of packets sent is equal to the number of packets received. Doing so ensures that there's no drop in connectivity.
+
+### Virtual network configuration
+
+Steps to check your virtual network configuration:
+
+1. Check whether a virtual network is assigned to your cache in the **Virtual Network** section, under **Settings** on the Resource menu of the Azure portal.
+1. Ensure that the client host machine is in the same virtual network as your Azure Cache for Redis instance.
+1. When the client application is in a different VNet than your Azure Cache for Redis instance, both VNets must have VNet peering enabled within the same Azure region.
+1. Validate that the [Inbound](cache-how-to-premium-vnet.md#inbound-port-requirements) and [Outbound](cache-how-to-premium-vnet.md#outbound-port-requirements) rules meet the requirement.
+1. For more information, see [Configure a virtual network - Premium-tier Azure Cache for Redis instance](cache-how-to-premium-vnet.md#how-can-i-verify-that-my-cache-is-working-in-a-virtual-network).
+
+### Private endpoint configuration
+
+Steps to check your private endpoint configuration:
+
+1. The `Public Network Access` flag is disabled by default when you create a private endpoint. Ensure that you've set `Public Network Access` correctly. With your cache open in the Azure portal, look under **Private Endpoint** on the Resource menu for this setting.
+1. If you're trying to connect to your cache's private endpoint from outside the cache's virtual network, `Public Network Access` needs to be enabled.
+1. If you've deleted your private endpoint, ensure that the public network access is enabled.
+1. Verify if your private endpoint is configured correctly. For more information, see [Create a private endpoint with a new Azure Cache for Redis instance](cache-private-link.md#create-a-private-endpoint-with-a-new-azure-cache-for-redis-instance).
+
+### Firewall rules
+
+If you have a firewall configured for your Azure Cache for Redis instance, ensure that your client IP address is added to the firewall rules. You can check **Firewall** on the Resource menu under **Settings** in the Azure portal.
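If you manage the firewall from the command line, the following Azure CLI sketch adds a rule for a single client IP; the cache name, resource group, rule name, and IP address are placeholders:

```azurecli-interactive
az redis firewall-rules create \
  --name mycache \
  --resource-group myResourceGroup \
  --rule-name AllowClientHost \
  --start-ip 203.0.113.10 \
  --end-ip 203.0.113.10
```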
+
+#### Third-party firewall or external proxy
+
+When you use a third-party firewall or proxy in your network, check that the endpoint for Azure Cache for Redis, `*.redis.cache.windows.net`, is allowed along with the ports `6379` and `6380`. You might need to allow more ports when using a clustered cache or geo-replication.
+
+## Next steps
+
+These articles provide more information on connectivity and resilience:
+
+- [Best practices for connection resilience](cache-best-practices-connection.md)
+- [High availability for Azure Cache for Redis](cache-high-availability.md)
azure-cache-for-redis Cache Troubleshoot Data Loss https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-troubleshoot-data-loss.md
Previously updated : 10/17/2019 Last updated : 12/01/2021 # Troubleshoot data loss in Azure Cache for Redis This article discusses how to diagnose actual or perceived data losses that might occur in Azure Cache for Redis.
+- [Partial loss of keys](#partial-loss-of-keys)
+ - [Key expiration](#key-expiration)
+ - [Key eviction](#key-eviction)
+ - [Key deletion](#key-deletion)
+ - [Async replication](#async-replication)
+- [Major or complete loss of keys](#major-or-complete-loss-of-keys)
+ - [Key flushing](#key-flushing)
+ - [Incorrect database selection](#incorrect-database-selection)
+ - [Redis instance failure](#redis-instance-failure)
+ > [!NOTE] > Several of the troubleshooting steps in this guide include instructions to run Redis commands and monitor various performance metrics. For more information and instructions, see the articles in the [Additional information](#additional-information) section. > ## Partial loss of keys
-Azure Cache for Redis doesn't randomly delete keys after they've been stored in memory. However, it does remove keys in response to expiration or eviction policies and to explicit key-deletion commands. Keys that have been written to the primary node in a Premium or Standard Azure Cache for Redis instance also might not be available on a replica right away. Data is replicated from the primary to the replica in an asynchronous and non-blocking manner.
+Azure Cache for Redis doesn't randomly delete keys after they've been stored in memory. However, it does remove keys in response to expiration policies, eviction policies, and explicit key-deletion commands. You can run these commands on the [console](cache-configure.md#redis-console) or through the [CLI](cache-how-to-redis-cli-tool.md).
+
+Keys that have been written to the primary node in a Premium or Standard Azure Cache for Redis instance also might not be available on a replica right away. Data is replicated from the primary to the replica in an asynchronous and non-blocking manner.
If you find that keys have disappeared from your cache, check the following possible causes:
Azure Cache for Redis removes a key automatically if the key is assigned a time-
To get stats on how many keys have expired, use the [INFO](https://redis.io/commands/info) command. The `Stats` section shows the total number of expired keys. The `Keyspace` section provides more information about the number of keys with time-outs and the average time-out value.
-```
+```azurecli-interactive
+ # Stats expired_keys:46583
expired_keys:46583
db0:keys=3450,expires=2,avg_ttl=91861015336
```
-You can also look at diagnostic metrics for your cache, to see if there's a correlation between when the key went missing and a spike in expired keys. See the Appendix of [Debugging Redis Keyspace Misses](https://gist.github.com/JonCole/4a249477142be839b904f7426ccccf82#appendix) for information about using keyspace notifications or **MONITOR** to debug these types of issues.
+You can also look at diagnostic metrics for your cache, to see if there's a correlation between when the key went missing and a spike in expired keys. See the Appendix of [Debugging Redis Keyspace Misses](https://gist.github.com/JonCole/4a249477142be839b904f7426ccccf82#appendix) for information about using `keyspace` notifications or `MONITOR` to debug these types of issues.
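To check whether a specific key disappeared because of an expiration, you can inspect and adjust its time-out from the Redis console; `user:1001` below is only a placeholder key name:

```console
TTL user:1001
EXPIRE user:1001 86400
PERSIST user:1001
```

`TTL` returns the remaining seconds, `-1` if the key has no expiration, or `-2` if the key is already gone; `PERSIST` removes an expiration that was set by mistake.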
### Key eviction
Azure Cache for Redis requires memory space to store data. It purges keys to free up available memory when necessary.
You can monitor the number of evicted keys by using the [INFO](https://redis.io/commands/info) command:
-```
+```azurecli-interactive
# Stats evicted_keys:13224
You can also look at diagnostic metrics for your cache, to see if there's a correlation between when the key went missing and a spike in evicted keys.
Redis clients can issue the [DEL](https://redis.io/commands/del) or [HDEL](https://redis.io/commands/hdel) command to explicitly remove keys from Azure Cache for Redis. You can track the number of delete operations by using the [INFO](https://redis.io/commands/info) command. If **DEL** or **HDEL** commands have been called, they'll be listed in the `Commandstats` section.
-```
+```azurecli-interactive
# Commandstats cmdstat_del:calls=2,usec=90,usec_per_call=45.00
cmdstat_hdel:calls=1,usec=47,usec_per_call=47.00
### Async replication
-Any Azure Cache for Redis instance in the Standard or Premium tier is configured with a primary node and at least one replica. Data is copied from the primary to a replica asynchronously by using a background process. The [redis.io](https://redis.io/topics/replication) website describes how Redis data replication works in general. For scenarios where clients write to Redis frequently, partial data loss can occur because this replication is not guaranteed to be instantaneous. For example, if the primary goes down *after* a client writes a key to it, but *before* the background process has a chance to send that key to the replica, the key is lost when the replica takes over as the new primary.
+Any Azure Cache for Redis instance in the Standard or Premium tier is configured with a primary node and at least one replica. Data is copied from the primary to a replica asynchronously by using a background process. The [redis.io](https://redis.io/topics/replication) website describes how Redis data replication works in general. For scenarios where clients write to Redis frequently, partial data loss can occur because replication is not guaranteed to be instantaneous. For example, if the primary goes down *after* a client writes a key to it, but *before* the background process has a chance to send that key to the replica, the key is lost when the replica takes over as the new primary.
## Major or complete loss of keys
If most or all keys have disappeared from your cache, check the following possible causes:
### Key flushing
-Clients can call the [FLUSHDB](https://redis.io/commands/flushdb) command to remove all keys in a *single* database or [FLUSHALL](https://redis.io/commands/flushall) to remove all keys from *all* databases in a Redis cache. To find out whether keys have been flushed, use the [INFO](https://redis.io/commands/info) command. The `Commandstats` section shows whether either **FLUSH** command has been called:
+Clients can call the [FLUSHDB](https://redis.io/commands/flushdb) command to remove all keys in a *single* database or [FLUSHALL](https://redis.io/commands/flushall) to remove all keys from *all* databases in a Redis cache. To find out whether keys have been flushed, use the [INFO](https://redis.io/commands/info) command. The `Commandstats` section shows whether either `FLUSH` command has been called:
-```
+```azurecli-interactive
# Commandstats cmdstat_flushall:calls=2,usec=112,usec_per_call=56.00
cmdstat_flushdb:calls=1,usec=110,usec_per_call=52.00
### Incorrect database selection
-Azure Cache for Redis uses the **db0** database by default. If you switch to another database (for example, **db1**) and try to read keys from it, Azure Cache for Redis won't find them there. Every database is a logically separate unit and holds a different dataset. Use the [SELECT](https://redis.io/commands/select) command to use other available databases and look for keys in each of them.
+Azure Cache for Redis uses the `db0` database by default. If you switch to another database (for example, `db1`) and try to read keys from it, Azure Cache for Redis won't find them there. Every database is a logically separate unit and holds a different dataset. Use the [SELECT](https://redis.io/commands/select) command to use other available databases and look for keys in each of them.
### Redis instance failure
-Redis is an in-memory data store. Data is kept on the physical or virtual machines that host the Redis cache. An Azure Cache for Redis instance in the Basic tier runs on only a single virtual machine (VM). If that VM is down, all data that you've stored in the cache is lost.
+Redis is an in-memory data store. Data is kept on the physical or virtual machines that host the Redis cache. An Azure Cache for Redis instance in the Basic tier runs on only a single virtual machine (VM). If that VM is down, all data that you've stored in the cache is lost.
Caches in the Standard and Premium tiers offer much higher resiliency against data loss by using two VMs in a replicated configuration. When the primary node in such a cache fails, the replica node takes over to serve data automatically. These VMs are located on separate domains for faults and updates, to minimize the chance of both becoming unavailable simultaneously. If a major datacenter outage happens, however, the VMs might still go down together. Your data will be lost in these rare cases.
Consider using [Redis data persistence](https://redis.io/topics/persistence) and
## Additional information
+These articles provide more information on avoiding data loss:
+ - [Troubleshoot Azure Cache for Redis server-side issues](cache-troubleshoot-server.md) - [Choosing the right tier](cache-overview.md#choosing-the-right-tier) - [How to monitor Azure Cache for Redis](cache-how-to-monitor.md)-- [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-)
+- [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-)
azure-cache-for-redis Cache Troubleshoot Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-troubleshoot-server.md
Title: Troubleshoot Azure Cache for Redis server-side issues
-description: Learn how to resolve common server-side issues with Azure Cache for Redis, such as memory pressure, high CPU, long running commands, or bandwidth limitations.
+ Title: Troubleshoot Azure Cache for Redis server issues
+description: Learn how to resolve common server issues, such as memory pressure, high CPU, long running commands, or bandwidth limitations, when using Azure Cache for Redis.
Previously updated : 10/18/2019 Last updated : 12/30/2021+
-# Troubleshoot Azure Cache for Redis server-side issues
+# Troubleshoot Azure Cache for Redis server issues
-This section discusses troubleshooting issues that occur because of a condition on an Azure Cache for Redis or the virtual machine(s) hosting it.
+This section discusses troubleshooting issues caused by conditions on an Azure Cache for Redis server or any of the virtual machines hosting it.
-- [Memory pressure on Redis server](#memory-pressure-on-redis-server)-- [High CPU usage or server load](#high-cpu-usage-or-server-load)
+- [High server load](#high-server-load)
+ - [Scale up or scale out](#scale-up-or-scale-out)
+ - [Rapid changes in number of client connections](#rapid-changes-in-number-of-client-connections)
+ - [Long running or expensive commands](#long-running-or-expensive-commands)
+ - [Scaling](#scaling)
+ - [Server maintenance](#server-maintenance)
+- [High memory usage](#high-memory-usage)
- [Long-running commands](#long-running-commands) - [Server-side bandwidth limitation](#server-side-bandwidth-limitation)
This section discusses troubleshooting issues that occur because of a condition
> Several of the troubleshooting steps in this guide include instructions to run Redis commands and monitor various performance metrics. For more information and instructions, see the articles in the [Additional information](#additional-information) section. >
-## Memory pressure on Redis server
+## High server load
+
+High server load means the Redis server is busy and unable to keep up with requests, leading to timeouts. Check the **Redis Server Load** metric on your cache by selecting **Monitor** from the Resource menu on the left. You can see the Redis Server Load graph in the working pane.
+
+Following are some options to consider for high server load.
+
+### Scale up or scale out
+
+Scale out to add more shards, so that load is distributed across multiple Redis processes. Also, consider scaling up to a larger cache size with more CPU cores. For more information, see [Azure Cache for Redis planning FAQs](cache-planning-faq.yml).
+
+### Rapid changes in number of client connections
+
+For more information, see [Avoid client connection spikes](cache-best-practices-connection.md#avoid-client-connection-spikes).
+
+### Long running or expensive commands
+
+This section was moved. For more information, see [Long running commands](cache-troubleshoot-timeouts.md#long-running-commands).
+
+### Scaling
+
+Scaling operations are CPU and memory intensive, because they can involve moving data around nodes and changing cluster topology. For more information, see [Scaling](cache-best-practices-scale.md).
+
+### Server maintenance
-Memory pressure on the server side leads to all kinds of performance problems that can delay processing of requests. When memory pressure hits, the system may page data to disk. This _page faulting_ causes the system to slow down significantly. There are several possible causes of this memory pressure:
+If your Azure Cache for Redis instance underwent a failover, all client connections from the node that went down are transferred to the node that's still running. The server load could spike because of the increased connections. You can try rebooting your client applications so that all the client connections get re-created and redistributed between the two nodes.
+
+## High memory usage
+
+Memory pressure on the server can lead to various performance problems that delay processing of requests. When memory pressure hits, the system pages data to disk, which causes the system to slow down significantly.
+
+Several possible causes can lead to this memory pressure:
- The cache is filled with data near its maximum capacity.-- Redis is seeing high memory fragmentation. This fragmentation is most often caused by storing large objects since Redis is optimized for small objects.
+- Redis server is seeing high memory fragmentation. Fragmentation is most often caused by storing large objects. Redis is optimized for small objects. If the `used_memory_rss` value is higher than the `used_memory` metric, it means part of Redis memory has been swapped off by the operating system, and you can expect some significant latencies. Because Redis server does not have control over how its allocations are mapped to memory pages, high `used_memory_rss` is often the result of a spike in memory usage.
Redis exposes two stats through the [INFO](https://redis.io/commands/info) command that can help you identify this issue: "used_memory" and "used_memory_rss". You can [view these metrics](cache-how-to-monitor.md#view-metrics-with-azure-monitor-metrics-explorer) using the portal.
+Validate that the `maxmemory-reserved` and `maxfragmentationmemory-reserved` values are set appropriately.
+ There are several possible changes you can make to help keep memory usage healthy: - [Configure a memory policy](cache-configure.md#maxmemory-policy-and-maxmemory-reserved) and set expiration times on your keys. This policy may not be sufficient if you have fragmentation. - [Configure a maxmemory-reserved value](cache-configure.md#maxmemory-policy-and-maxmemory-reserved) that is large enough to compensate for memory fragmentation.-- Break up your large cached objects into smaller related objects. - [Create alerts](cache-how-to-monitor.md#alerts) on metrics like used memory to be notified early about potential impacts.-- [Scale](cache-how-to-scale.md) to a larger cache size with more memory capacity. - [Scale](cache-how-to-scale.md) to a larger cache size with more memory capacity. For more information, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
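As a hedged sketch, those reserved-memory settings can also be set from the Azure CLI by using the generic `--set` argument; the cache name, resource group, and 125-MB values below are placeholders, and the configuration key names are assumptions to verify against your cache:

```azurecli-interactive
az redis update \
  --name mycache \
  --resource-group myResourceGroup \
  --set "redisConfiguration.maxmemory-reserved"="125" "redisConfiguration.maxfragmentationmemory-reserved"="125"
```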
-## High CPU usage or server load
-
-A high server load or CPU usage means the server can't process requests in a timely fashion. The server might be slow to respond and unable to keep up with request rates.
-
-[Monitor metrics](cache-how-to-monitor.md#view-metrics-with-azure-monitor-metrics-explorer) such as CPU or server load. Watch for spikes in CPU usage that correspond with timeouts.
-
-There are several changes you can make to mitigate high server load:
--- Investigate what is causing CPU spikes such as [long-running commands](#long-running-commands) noted below or page faulting because of high memory pressure.-- [Create alerts](cache-how-to-monitor.md#alerts) on metrics like CPU or server load to be notified early about potential impacts.-- [Scale](cache-how-to-scale.md) out to more shards to distribute load across multiple Redis processes or scale up to a larger cache size with more CPU cores. For more information, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
+For recommendations on memory management, see [Best practices for memory management](cache-best-practices-memory-management.md).
## Long-running commands
-Some Redis commands are more expensive to execute than others. The [Redis commands documentation](https://redis.io/commands) shows the time complexity of each command. Because Redis command processing is single-threaded, a command that takes time to run blocks all others that come after it. Review the commands that you're issuing to your Redis server to understand their performance impacts. For instance, the [KEYS](https://redis.io/commands/keys) command is often used without knowing that it's an O(N) operation. You can avoid KEYS by using [SCAN](https://redis.io/commands/scan) to reduce CPU spikes.
-
-Using the [SLOWLOG GET](https://redis.io/commands/slowlog-get) command, you can measure expensive commands being executed against the server.
-
+This section was moved. For more information, see [Long running commands](cache-troubleshoot-timeouts.md#long-running-commands).
## Server-side bandwidth limitation
-Different cache sizes have different network bandwidth capacities. If the server exceeds the available bandwidth, then data won't be sent to the client as quickly. Clients requests could time out because the server can't push data to the client fast enough.
-
-The "Cache Read" and "Cache Write" metrics can be used to see how much server-side bandwidth is being used. You can [view these metrics](cache-how-to-monitor.md#view-metrics-with-azure-monitor-metrics-explorer) in the portal.
-
-To mitigate situations where network bandwidth usage is close to maximum capacity:
--- Change client call behavior to reduce network demand.-- [Create alerts](cache-how-to-monitor.md#alerts) on metrics like cache read or cache write to be notified early about potential impacts.-- [Scale](cache-how-to-scale.md) to a larger cache size with more network bandwidth capacity. For more information, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
+This section was moved. For more information, see [Network bandwidth limitation](cache-troubleshoot-timeouts.md#network-bandwidth-limitation).
## Additional information
azure-cache-for-redis Cache Troubleshoot Timeouts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-troubleshoot-timeouts.md
Title: Troubleshoot Azure Cache for Redis timeouts
-description: Learn how to resolve common timeout issues with Azure Cache for Redis, such as redis server patching and StackExchange.Redis timeout exceptions.
+ Title: Troubleshoot Azure Cache for Redis latency and timeouts
+description: Learn how to resolve common latency and timeout issues with Azure Cache for Redis, such as Redis server patching and timeout exceptions.
Previously updated : 11/3/2021 Last updated : 12/30/2021
-# Troubleshoot Azure Cache for Redis timeouts
-This section discusses troubleshooting timeout issues that occur when connecting to Azure Cache for Redis.
+# Troubleshoot Azure Cache for Redis latency and timeouts
-- [Redis server patching](#redis-server-patching)-- [StackExchange.Redis timeout exceptions](#stackexchangeredis-timeout-exceptions)
+A client operation that doesn't receive a timely response can result in high latency or a timeout exception. An operation could time out at various stages. Knowing where the timeout comes from helps to determine the cause and the mitigation.
+
+This section discusses troubleshooting for latency and timeout issues that occur when connecting to Azure Cache for Redis.
+
+- [Client-side troubleshooting](#client-side-troubleshooting)
+ - [Traffic burst and thread pool configuration](#traffic-burst-and-thread-pool-configuration)
+ - [Large key value](#large-key-value)
+ - [High CPU on client hosts](#high-cpu-on-client-hosts)
+ - [Network bandwidth limitation on client hosts](#network-bandwidth-limitation-on-client-hosts)
+ - [TCP settings for Linux based client applications](#tcp-settings-for-linux-based-client-applications)
+ - [RedisSessionStateProvider retry timeout](#redissessionstateprovider-retry-timeout)
+- [Server-side troubleshooting](#server-side-troubleshooting)
+ - [Server maintenance](#server-maintenance)
+ - [High server load](#high-server-load)
+ - [High memory usage](#high-memory-usage)
+ - [Long running commands](#long-running-commands)
+ - [Network bandwidth limitation](#network-bandwidth-limitation)
+ - [StackExchange.Redis timeout exceptions](#stackexchangeredis-timeout-exceptions)
> [!NOTE] > Several of the troubleshooting steps in this guide include instructions to run Redis commands and monitor various performance metrics. For more information and instructions, see the articles in the [Additional information](#additional-information) section. >
-## Redis server patching
+## Client-side troubleshooting
-Azure Cache for Redis regularly updates its server software as part of the managed service functionality that it provides. This [patching](cache-failover.md) activity takes place largely behind the scene. During the failovers when Redis server nodes are being patched, Redis clients connected to these nodes can experience temporary timeouts as connections are switched between these nodes. For more information on the side-effects patching can have on your application and how to improve its handling of patching events, see [How does a failover affect my client application](cache-failover.md#how-does-a-failover-affect-my-client-application).
+### Traffic burst and thread pool configuration
-## StackExchange.Redis timeout exceptions
+Bursts of traffic combined with poor `ThreadPool` settings can result in delays in processing data already sent by the Redis server but not yet consumed on the client side. Check the metric "Errors" (Type: UnresponsiveClients) to validate if your client hosts can keep up with a sudden spike in traffic.
-StackExchange.Redis uses a configuration setting named `synctimeout` for synchronous operations with a default value of 5000 ms. If a synchronous call doesn't complete in this time, the StackExchange.Redis client throws a timeout error similar to the following example:
+Monitor how your `ThreadPool` statistics change over time using [an example `ThreadPoolLogger`](https://github.com/JonCole/SampleCode/blob/master/ThreadPoolMonitor/ThreadPoolLogger.cs). You can use `TimeoutException` messages from StackExchange.Redis, like the following example, to investigate further:
```output
- System.TimeoutException: Timeout performing MGET 2728cc84-58ae-406b-8ec8-3f962419f641, inst: 1,mgr: Inactive, queue: 73, qu=6, qs=67, qc=0, wr=1/1, in=0/0 IOCP: (Busy=6, Free=999, Min=2,Max=1000), WORKER (Busy=7,Free=8184,Min=2,Max=8191)
+ System.TimeoutException: Timeout performing EVAL, inst: 8, mgr: Inactive, queue: 0, qu: 0, qs: 0, qc: 0, wr: 0, wq: 0, in: 64221, ar: 0,
+ IOCP: (Busy=6,Free=999,Min=2,Max=1000), WORKER: (Busy=7,Free=8184,Min=2,Max=8191)
```
-This error message contains metrics that can help point you to the cause and possible resolution of the issue. The following table contains details about the error message metrics.
+In the preceding exception, several details are worth noting:
+
+- Notice that in the `IOCP` section and the `WORKER` section you have a `Busy` value that is greater than the `Min` value. This difference means your `ThreadPool` settings need adjusting.
+- You can also see `in: 64221`. This value indicates that 64,221 bytes have been received at the client's kernel socket layer but haven't been read by the application. This difference typically means that your application (for example, StackExchange.Redis) isn't reading data from the network as quickly as the server is sending it to you.
+
+You can [configure your `ThreadPool` Settings](cache-management-faq.yml#important-details-about-threadpool-growth) to make sure that your thread pool scales up quickly under burst scenarios.
+
+### Large key value
-| Error message metric | Details |
-| | |
-| `inst` |In the last time slice: 0 commands have been issued |
-| `mgr` |The socket manager is doing `socket.select`, which means it's asking the OS to indicate a socket that has something to do. The reader isn't actively reading from the network because it doesn't think there's anything to do |
-| `queue` |There are 73 total in-progress operations |
-| `qu` |6 of the in-progress operations are in the unsent queue and haven't yet been written to the outbound network |
-| `qs`|67 of the in-progress operations have been sent to the server but a response isn't yet available. The response could be `Not yet sent by the server` or `sent by the server but not yet processed by the client.` |
-| `qc` |Zero of the in-progress operations have seen replies but haven't yet been marked as complete because they're waiting on the completion loop |
-| `wr` |There's an active writer (meaning the six unsent requests aren't being ignored) bytes/activewriters |
-| `in` |There are no active readers and zero bytes are available to be read on the NIC bytes/activereaders |
+For information about using multiple keys and smaller values, see [Consider more keys and smaller values](cache-best-practices-development.md#consider-more-keys-and-smaller-values).
-In the preceding exception example, the `IOCP` and `WORKER` sections each include a `Busy` value that is greater than the `Min` value. The difference means that you should adjust your `ThreadPool` settings. You can [configure your ThreadPool settings](cache-management-faq.yml#important-details-about-threadpool-growth) to ensure that your thread pool scales up quickly under burst scenarios.
+You can use the `redis-cli --bigkeys` command to check for large keys in your cache. For more information, see [redis-cli, the Redis command line interface](https://redis.io/topics/rediscli).
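
The same kind of check can also be scripted. The following is a minimal sketch, assuming the `redis-py` package and placeholder host and access-key values; it walks the keyspace with SCAN and flags values whose approximate footprint (via `MEMORY USAGE`) exceeds an example threshold.

```python
# Minimal sketch: flag keys whose values look large, as an alternative to redis-cli --bigkeys.
# Assumes the redis-py package and placeholder host/access-key values.
import redis

r = redis.Redis(
    host="yourcachename.redis.cache.windows.net",  # placeholder host name
    port=6380,
    password="<access-key>",                       # placeholder access key
    ssl=True,
)

THRESHOLD_BYTES = 100 * 1024  # example threshold (~100 KB)

# SCAN walks the keyspace incrementally; MEMORY USAGE approximates each value's footprint.
for key in r.scan_iter(count=1000):
    size = r.memory_usage(key) or 0
    if size > THRESHOLD_BYTES:
        print(f"{key!r}: {size} bytes")
```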
-You can use the following steps to eliminate possible root causes.
+- Increase the size of your VM to get higher bandwidth capabilities
+ - More bandwidth on your client or server VM may reduce data transfer times for larger responses.
+ - Compare your current network usage on both machines to the limits of your current VM size. More bandwidth on only the server or only on the client may not be enough.
+- Increase the number of connection objects your application uses.
+ - Use a round-robin approach to make requests over different connection objects, as shown in the sketch after this list.
+
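A minimal sketch of the round-robin idea follows, assuming the `redis-py` package and placeholder connection settings. In StackExchange.Redis the equivalent would be a pool of `ConnectionMultiplexer` objects; this Python version only illustrates the pattern of rotating requests across several independent connections.

```python
# Minimal sketch: rotate requests across several independent connections (round-robin)
# so one large, slow response doesn't queue up behind every other request on one connection.
# Assumes the redis-py package and placeholder connection settings.
import itertools
from typing import Optional

import redis

POOL_SIZE = 4  # example pool size
clients = [
    redis.Redis(host="yourcachename.redis.cache.windows.net", port=6380,
                password="<access-key>", ssl=True)
    for _ in range(POOL_SIZE)
]
_next_client = itertools.cycle(clients)

def get_value(key: str) -> Optional[bytes]:
    """Issue the GET on whichever connection is next in the rotation."""
    return next(_next_client).get(key)

# Usage: value = get_value("some-key")
```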
+### High CPU on client hosts
+
+High client CPU usage indicates the system can't keep up with the work it's been asked to do. Even though the cache sent the response quickly, the client may fail to process the response in a timely fashion. Our recommendation is to keep client CPU below 80%. Check the metric "Errors" (Type: `UnresponsiveClients`) to determine if your client hosts can process responses from Redis server in time.
+
+Monitor the client's system-wide CPU usage using metrics available in the Azure portal or through performance counters on the machine. Be careful not to monitor *process* CPU because a single process can have low CPU usage but the system-wide CPU can be high. Watch for spikes in CPU usage that correspond with timeouts. High CPU may also cause high `in: XXX` values in `TimeoutException` error messages as described in the [Traffic burst and thread pool configuration](#traffic-burst-and-thread-pool-configuration) section.
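
If you don't already have counters wired up, the following is a minimal sketch that samples system-wide CPU on the client host. It assumes the third-party `psutil` package and an arbitrary 80% threshold, and is only meant to show the kind of signal to correlate with your timeouts.

```python
# Minimal sketch: sample system-wide (not per-process) CPU on the client host.
# Assumes the third-party psutil package (pip install psutil); the 80% threshold is an example.
import psutil

for _ in range(60):  # roughly five minutes of 5-second samples
    cpu = psutil.cpu_percent(interval=5)  # blocks for the 5-second sampling window
    if cpu >= 80:
        print(f"warning: system CPU at {cpu}% - the client may fall behind Redis responses")
```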
+
+> [!NOTE]
+> StackExchange.Redis 1.1.603 and later includes the `local-cpu` metric in `TimeoutException` error messages. Ensure you are using the latest version of the [StackExchange.Redis NuGet package](https://www.nuget.org/packages/StackExchange.Redis/). Bugs are regularly fixed in the code to make it more robust to timeouts. Having the latest version is important.
+>
-1. As a best practice, make sure you're using the ForceReconnect pattern to detect and replace stalled connections as described in the article [Connection resilience](cache-best-practices-connection.md#using-forcereconnect-with-stackexchangeredis).
+To mitigate a client's high CPU usage:
+
+- Investigate what is causing CPU spikes.
+- Upgrade your client to a larger VM size with more CPU capacity.
+
+### Network bandwidth limitation on client hosts
+
+Depending on the architecture of client machines, they may have limitations on how much network bandwidth they have available. If the client exceeds the available bandwidth by overloading network capacity, then data isn't processed on the client side as quickly as the server is sending it. This situation can lead to timeouts.
+
+Monitor how your bandwidth usage changes over time using [an example `BandwidthLogger`](https://github.com/JonCole/SampleCode/blob/master/BandWidthMonitor/BandwidthLogger.cs). This code may not run successfully in some environments with restricted permissions (like Azure websites).
+
+To mitigate, reduce network bandwidth consumption or increase the client VM size to one with more network capacity. For more information, see [Large request or response size](cache-best-practices-development.md#large-request-or-response-size).
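
As a lightweight alternative to the linked C# logger, the following sketch samples NIC byte counters on the client host and prints approximate throughput. It assumes the third-party `psutil` package; the sampling interval and duration are arbitrary examples. Compare the numbers against the documented bandwidth limit for your client VM size.

```python
# Minimal sketch: log client-side network throughput from NIC counters.
# Assumes the third-party psutil package; sampling interval and duration are arbitrary examples.
import time
import psutil

previous = psutil.net_io_counters()
for _ in range(60):  # roughly five minutes of 5-second samples
    time.sleep(5)
    current = psutil.net_io_counters()
    sent_mbps = (current.bytes_sent - previous.bytes_sent) * 8 / 5 / 1_000_000
    recv_mbps = (current.bytes_recv - previous.bytes_recv) * 8 / 5 / 1_000_000
    print(f"sent: {sent_mbps:.1f} Mbps, received: {recv_mbps:.1f} Mbps")
    previous = current
```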
+
+### TCP settings for Linux based client applications
+
+Because of optimistic TCP settings in Linux, client applications hosted on Linux could experience connectivity issues. For more information, see [TCP settings for Linux-hosted client applications](cache-best-practices-connection.md#tcp-settings-for-linux-hosted-client-applications).
+
+### RedisSessionStateProvider retry timeout
+
+If you're using `RedisSessionStateProvider`, ensure you have set the retry timeout correctly. The `retryTimeoutInMilliseconds` value should be higher than the `operationTimeoutInMilliseconds` value. Otherwise, no retries occur. In the following example, `retryTimeoutInMilliseconds` is set to 3000. For more information, see [ASP.NET Session State Provider for Azure Cache for Redis](cache-aspnet-session-state-provider.md) and [How to use the configuration parameters of Session State Provider and Output Cache Provider](https://github.com/Azure/aspnet-redis-providers/wiki/Configuration).
+
+ ```xml
+
+ <add
+ name="AFRedisCacheSessionStateProvider"
+ type="Microsoft.Web.Redis.RedisSessionStateProvider"
+ host="enbwcache.redis.cache.windows.net"
+ port="6380"
+ accessKey="..."
+ ssl="true"
+ databaseId="0"
+ applicationName="AFRedisCacheSessionState"
+ connectionTimeoutInMilliseconds = "5000"
+ operationTimeoutInMilliseconds = "1000"
+ retryTimeoutInMilliseconds="3000"
+ />
+```
- For more information on using StackExchange.Redis, see [Connect to the cache using StackExchange.Redis](cache-dotnet-how-to-use-azure-redis-cache.md#connect-to-the-cache).
+## Server-side troubleshooting
-1. Ensure that your server and the client application are in the same region in Azure. For example, you might be getting timeouts when your cache is in East US but the client is in West US and the request doesn't complete within the `synctimeout` interval or you might be getting timeouts when you're debugging from your local development machine.
+### Server maintenance
- It's highly recommended to have the cache and in the client in the same Azure region. If you have a scenario that includes cross region calls, you should set the `synctimeout` interval to a value higher than the default 5000-ms interval by including a `synctimeout` property in the connection string. The following example shows a snippet of a connection string for StackExchange.Redis provided by Azure Cache for Redis with a `synctimeout` of 8000 ms.
+Planned or unplanned maintenance can cause disruptions with client connections. The number and type of exceptions depends on the location of the request in the code path, and when the cache closes its connections. For instance, an operation that sends a request but hasn't received a response when the failover occurs might get a time-out exception. New requests on the closed connection object receive connection exceptions until the reconnection happens successfully.
- ```output
- synctimeout=8000,cachename.redis.cache.windows.net,abortConnect=false,ssl=true,password=...
- ```
+For information, check these other sections:
-1. Ensure you using the latest version of the [StackExchange.Redis NuGet package](https://www.nuget.org/packages/StackExchange.Redis/). There are bugs constantly being fixed in the code to make it more robust to timeouts so having the latest version is important.
-1. If your requests are bound by bandwidth limitations on the server or client, it takes longer for them to complete and can cause timeouts. To see if your timeout is because of network bandwidth on the server, see [Server-side bandwidth limitation](cache-troubleshoot-server.md#server-side-bandwidth-limitation). To see if your timeout is because of client network bandwidth, see [Client-side bandwidth limitation](cache-troubleshoot-client.md#client-side-bandwidth-limitation).
-1. Are you getting CPU bound on the server or on the client?
+- [Scheduling updates](cache-administration.md#schedule-updates)
+- [Connection resilience](cache-best-practices-connection.md#connection-resilience)
+- `AzureRedisEvents` [notifications](cache-failover.md#can-i-be-notified-in-advance-of-planned-maintenance)
- - Check if you're getting bound by CPU on your client. High CPU could cause the request to not be processed within the `synctimeout` interval and cause a request to time out. Moving to a larger client size or distributing the load can help to control this problem.
- - Check if you're getting CPU bound on the server by monitoring the CPU [cache performance metric](cache-how-to-monitor.md#available-metrics-and-reporting-intervals). Requests coming in while Redis is CPU bound can cause those requests to time out. To address this condition, you can distribute the load across multiple shards in a premium cache, or upgrade to a larger size or pricing tier. For more information, see [Server-side bandwidth limitation](cache-troubleshoot-server.md#server-side-bandwidth-limitation).
-1. Are there commands taking long time to process on the server? Long-running commands that are taking long time to process on the redis-server can cause timeouts. For more information about long-running commands, see [Long-running commands](cache-troubleshoot-server.md#long-running-commands). You can connect to your Azure Cache for Redis instance using the redis-cli client or the [Redis Console](cache-configure.md#redis-console). Then, run the [SLOWLOG](https://redis.io/commands/slowlog) command to see if there are requests slower than expected. Redis Server and StackExchange.Redis are optimized for many small requests rather than fewer large requests. Splitting your data into smaller chunks may improve things here.
+To check whether your Azure Cache for Redis had a failover when timeouts occurred, check the metric **Errors**. On the Resource menu of the Azure portal, select **Metrics**. Then create a new chart measuring the `Errors` metric, split by `ErrorType`. Once you have created this chart, you see a count for **Failover**.
- For information on connecting to your cache's TLS/SSL endpoint using redis-cli and stunnel, see the blog post [Announcing ASP.NET Session State Provider for Redis Preview Release](https://devblogs.microsoft.com/aspnet/announcing-asp-net-session-state-provider-for-redis-preview-release/).
-1. High Redis server load can cause timeouts. You can monitor the server load by monitoring the `Redis Server Load` [cache performance metric](cache-how-to-monitor.md#available-metrics-and-reporting-intervals). A server load of 100 (maximum value) signifies that the redis server has been busy, with no idle time, processing requests. To see if certain requests are taking up all of the server capability, run the SlowLog command, as described in the previous paragraph. For more information, see High CPU usage / Server Load.
-1. Was there any other event on the client side that could have caused a network blip? Common events include: scaling the number of client instances up or down, deploying a new version of the client, or autoscale enabled. In our testing, we have found that autoscale or scaling up/down can cause outbound network connectivity to be lost for several seconds. StackExchange.Redis code is resilient to such events and reconnects. While reconnecting, any requests in the queue can time out.
-1. Was there a large request preceding several small requests to the cache that timed out? The parameter `qs` in the error message tells you how many requests were sent from the client to the server, but haven't processed a response. This value can keep growing because StackExchange.Redis uses a single TCP connection and can only read one response at a time. Even though the first operation timed out, it doesn't stop more data from being sent to or from the server. Other requests will be blocked until the large request is finished and can cause time outs. One solution is to minimize the chance of timeouts by ensuring that your cache is large enough for your workload and splitting large values into smaller chunks. Another possible solution is to use a pool of `ConnectionMultiplexer` objects in your client, and choose the least loaded `ConnectionMultiplexer` when sending a new request. Loading across multiple connection objects should prevent a single timeout from causing other requests to also time out.
-1. If you're using `RedisSessionStateProvider`, ensure you have set the retry timeout correctly. `retryTimeoutInMilliseconds` should be higher than `operationTimeoutInMilliseconds`, otherwise no retries occur. In the following example `retryTimeoutInMilliseconds` is set to 3000. For more information, see [ASP.NET Session State Provider for Azure Cache for Redis](cache-aspnet-session-state-provider.md) and [How to use the configuration parameters of Session State Provider and Output Cache Provider](https://github.com/Azure/aspnet-redis-providers/wiki/Configuration).
+For more information on failovers, see [Failover and patching for Azure Cache for Redis](cache-failover.md).
- ```xml
- <add
- name="AFRedisCacheSessionStateProvider"
- type="Microsoft.Web.Redis.RedisSessionStateProvider"
- host="enbwcache.redis.cache.windows.net"
- port="6380"
- accessKey="…"
- ssl="true"
- databaseId="0"
- applicationName="AFRedisCacheSessionState"
- connectionTimeoutInMilliseconds = "5000"
- operationTimeoutInMilliseconds = "1000"
- retryTimeoutInMilliseconds="3000" />
- ```
+### High server load
-1. Check memory usage on the Azure Cache for Redis server by [monitoring](cache-how-to-monitor.md#available-metrics-and-reporting-intervals) `Used Memory RSS` and `Used Memory`. If an eviction policy is in place, Redis starts evicting keys when `Used_Memory` reaches the cache size. Ideally, `Used Memory RSS` should be only slightly higher than `Used memory`. A large difference means there's memory fragmentation (internal or external). When `Used Memory RSS` is less than `Used Memory`, it means part of the cache memory has been swapped by the operating system. If this swapping occurs, you can expect some significant latencies. Because Redis doesn't have control over how its allocations are mapped to memory pages, high `Used Memory RSS` is often the result of a spike in memory usage. When Redis server frees memory, the allocator takes the memory but it may or may not give the memory back to the system. There may be a discrepancy between the `Used Memory` value and memory consumption as reported by the operating system. Memory may have been used and released by Redis but not given back to the system. To help mitigate memory issues, you can do the following steps:
+High server load means the Redis server can't keep up with the incoming requests, leading to timeouts. The server might respond slowly and fall behind the request rate.
- - Upgrade the cache to a larger size so that you aren't running against memory limitations on the system.
- - Set expiration times on the keys so that older values are evicted proactively.
- - Monitor the `used_memory_rss` cache metric. When this value approaches the size of their cache, you're likely to start seeing performance issues. Distribute the data across multiple shards if you're using a premium cache, or upgrade to a larger cache size.
+[Monitor metrics](cache-how-to-monitor.md#view-metrics-with-azure-monitor-metrics-explorer) such as server load. Watch for spikes in `Server Load` usage that correspond with timeouts. [Create alerts](cache-how-to-monitor.md#alerts) on metrics on server load to be notified early about potential impacts.
- For more information, see [Memory pressure on Redis server](cache-troubleshoot-server.md#memory-pressure-on-redis-server).
+There are several changes you can make to mitigate high server load:
+
+- Investigate what is causing high server load, such as [long-running commands](#long-running-commands) (noted below) or high memory pressure.
+- [Scale](cache-how-to-scale.md) out to more shards to distribute load across multiple Redis processes or scale up to a larger cache size with more CPU cores. For more information, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
+
+### High memory usage
+
+This section was moved. For more information, see [High memory usage](cache-troubleshoot-server.md#high-memory-usage).
+
+### Long running commands
+
+Some Redis commands are more expensive to execute than others. The [Redis commands documentation](https://redis.io/commands) shows the time complexity of each command. Redis command processing is single-threaded. Any command that takes a long time to run can block all others that come after it.
+
+Review the commands that you're issuing to your Redis server to understand their performance impacts. For instance, the [KEYS](https://redis.io/commands/keys) command is often used without knowing that it's an O(N) operation. You can avoid KEYS by using [SCAN](https://redis.io/commands/scan) to reduce CPU spikes.
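
A minimal sketch of that substitution follows, assuming the `redis-py` package and placeholder connection settings; the key pattern `user:*` is only an example.

```python
# Minimal sketch: enumerate keys matching a pattern without the O(N), blocking KEYS command.
# Assumes the redis-py package and placeholder connection settings; "user:*" is an example pattern.
import redis

r = redis.Redis(host="yourcachename.redis.cache.windows.net", port=6380,
                password="<access-key>", ssl=True)

# KEYS "user:*" would block the single-threaded server while it walks the whole keyspace.
# SCAN visits the keyspace in small, interruptible steps instead.
matching = [key for key in r.scan_iter(match="user:*", count=500)]
print(f"found {len(matching)} matching keys")
```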
+
+Using the [SLOWLOG GET](https://redis.io/commands/slowlog-get) command, you can measure expensive commands being executed against the server.
+
+You can use a console to run the following Redis commands to investigate long-running and expensive commands.
+
+- [SLOWLOG](https://redis.io/commands/slowlog) is used to read and reset the Redis slow queries log. It can be used to investigate long-running commands on the server side.
+The Redis Slow Log is a system to log queries that exceeded a specified execution time. The execution time does not include I/O operations like talking with the client or sending the reply, but just the time needed to actually execute the command. Using the SLOWLOG command, you can measure and log expensive commands being executed against your Redis server.
+- [MONITOR](https://redis.io/commands/monitor) is a debugging command that streams back every command processed by the Redis server. It can help in understanding what is happening to the database, but it's demanding and can degrade performance.
+- [INFO](https://redis.io/commands/info) returns information and statistics about the server in a format that is simple for computers to parse and easy for humans to read. In this case, the CPU section is useful for investigating CPU usage. A **server_load** of 100 (the maximum value) signifies that the Redis server was busy the whole time (never idle) processing requests.
+
+Output sample:
+
+```output
+# CPU
+used_cpu_sys:530.70
+used_cpu_user:445.09
+used_cpu_avg_ms_per_sec:0
+server_load:0.01
+event_wait:1
+event_no_wait:1
+event_wait_count:10
+event_no_wait_count:1
+```
+
+- [CLIENT LIST](https://redis.io/commands/client-list) returns information and statistics about client connections to the server in a mostly human-readable format.
+
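The same diagnostics can also be pulled programmatically. The following is a minimal sketch using the `redis-py` package with placeholder connection settings; the `server_load` field shown in the CPU section above is assumed to be present in the INFO output of Azure Cache for Redis.

```python
# Minimal sketch: pull the same diagnostics programmatically with redis-py.
# Assumes the redis-py package and placeholder connection settings; the server_load field
# is assumed to be present in the INFO output of Azure Cache for Redis.
import redis

r = redis.Redis(host="yourcachename.redis.cache.windows.net", port=6380,
                password="<access-key>", ssl=True)

# The ten slowest commands recorded by the server (equivalent to SLOWLOG GET 10).
for entry in r.slowlog_get(10):
    print(entry["id"], entry["duration"], entry["command"])

# CPU section of INFO, including the server load reported by Azure Cache for Redis.
cpu_section = r.info("cpu")
print(cpu_section.get("server_load"))
```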
+### Network bandwidth limitation
+
+Different cache sizes have different network bandwidth capacities. If the server exceeds the available bandwidth, then data won't be sent to the client as quickly. Client requests could time out because the server can't push data to the client fast enough.
+
+The "Cache Read" and "Cache Write" metrics can be used to see how much server-side bandwidth is being used. You can [view these metrics](cache-how-to-monitor.md#view-metrics-with-azure-monitor-metrics-explorer) in the portal. [Create alerts](cache-how-to-monitor.md#alerts) on metrics like cache read or cache write to be notified early about potential impacts.
+
+To mitigate situations where network bandwidth usage is close to maximum capacity:
+
+- Change client call behavior to reduce network demand.
+- [Scale](cache-how-to-scale.md) to a larger cache size with more network bandwidth capacity. For more information, see [Azure Cache for Redis planning FAQs](cache-planning-faq.yml#azure-cache-for-redis-performance).
+
+## StackExchange.Redis timeout exceptions
+
+For more specific information to address timeouts when using StackExchange.Redis, see [Investigating timeout exceptions in StackExchange.Redis](https://azure.microsoft.com/blog/investigating-timeout-exceptions-in-stackexchange-redis-for-azure-redis-cache/).
## Additional information
+See these articles for additional information about latency issues and timeouts.
+ - [Troubleshoot Azure Cache for Redis client-side issues](cache-troubleshoot-client.md) - [Troubleshoot Azure Cache for Redis server-side issues](cache-troubleshoot-server.md) - [How can I benchmark and test the performance of my cache?](cache-management-faq.yml#how-can-i-benchmark-and-test-the-performance-of-my-cache-)
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
Starting from version 3.2.0, if you want to set a custom dimension programmatica
} ```
+## Instrumentation keys overrides (preview)
+
+This feature is in preview, starting from 3.2.3.
+
+Instrumentation key overrides allow you to override the [default instrumentation key](#connection-string), for example:
+* Set one instrumentation key for one HTTP path prefix `/myapp1`.
+* Set another instrumentation key for another HTTP path prefix `/myapp2`.
+
+```json
+{
+ "preview": {
+ "instrumentationKeyOverrides": [
+ {
+ "httpPathPrefix": "/myapp1",
+ "instrumentationKey": "12345678-0000-0000-0000-0FEEDDADBEEF"
+ },
+ {
+ "httpPathPrefix": "/myapp2",
+ "instrumentationKey": "87654321-0000-0000-0000-0FEEDDADBEEF"
+ }
+ ]
+ }
+}
+```
+ ## Telemetry processors (preview) It allows you to configure rules that will be applied to request, dependency and trace telemetry, for example:
Starting from version 3.2.0, the following preview instrumentations can be enabl
> [!NOTE] > Akka instrumentation is available starting from version 3.2.2
-## Heartbeat
-
-By default, Application Insights Java 3.x sends a heartbeat metric once every 15 minutes.
-If you are using the heartbeat metric to trigger alerts, you can increase the frequency of this heartbeat:
-
-```json
-{
- "heartbeat": {
- "intervalSeconds": 60
- }
-}
-```
-
-> [!NOTE]
-> You cannot increase the interval to longer than 15 minutes,
-> because the heartbeat data is also used to track Application Insights usage.
-
-## HTTP Proxy
-
-If your application is behind a firewall and cannot connect directly to Application Insights
-(see [IP addresses used by Application Insights](./ip-addresses.md)),
-you can configure Application Insights Java 3.x to use an HTTP proxy:
-
-```json
-{
- "proxy": {
- "host": "myproxy",
- "port": 8080
- }
-}
-```
-
-Application Insights Java 3.x also respects the global `https.proxyHost` and `https.proxyPort` system properties
-if those are set (and `http.nonProxyHosts` if needed).
- ## Metric interval This feature is in preview.
The setting applies to all of these metrics:
* Configured JMX metrics ([see above](#jmx-metrics)) * Micrometer metrics ([see above](#auto-collected-micrometer-metrics-including-spring-boot-actuator-metrics))
+## Heartbeat
-[//]: # "NOTE OpenTelemetry support is in private preview until OpenTelemetry API reaches 1.0"
-
-[//]: # "## Support for OpenTelemetry API pre-1.0 releases"
+By default, Application Insights Java 3.x sends a heartbeat metric once every 15 minutes.
+If you are using the heartbeat metric to trigger alerts, you can increase the frequency of this heartbeat:
-[//]: # "Support for pre-1.0 versions of OpenTelemetry API is opt-in, because the OpenTelemetry API is not stable yet"
-[//]: # "and so each version of the agent only supports a specific pre-1.0 versions of OpenTelemetry API"
-[//]: # "(this limitation will not apply once OpenTelemetry API 1.0 is released)."
+```json
+{
+ "heartbeat": {
+ "intervalSeconds": 60
+ }
+}
+```
-[//]: # "```json"
-[//]: # "{"
-[//]: # " \"preview\": {"
-[//]: # " \"openTelemetryApiSupport\": true"
-[//]: # " }"
-[//]: # "}"
-[//]: # "```"
+> [!NOTE]
+> You cannot increase the interval to longer than 15 minutes,
+> because the heartbeat data is also used to track Application Insights usage.
## Authentication (preview) > [!NOTE]
The setting applies to all of these metrics:
It allows you to configure agent to generate [token credentials](/java/api/overview/azure/identity-readme#credentials) that are required for Azure Active Directory Authentication. For more information, check out the [Authentication](./azure-ad-authentication.md) documentation.
-## Instrumentation keys overrides (preview)
-
-This feature is in preview, starting from 3.2.3.
+## HTTP Proxy
-Instrumentation key overrides allow you to override the [default instrumentation key](#connection-string), for example:
-* Set one instrumentation key for one http path prefix `/myapp1`.
-* Set another instrumentation key for another http path prefix `/myapp2/`.
+If your application is behind a firewall and cannot connect directly to Application Insights
+(see [IP addresses used by Application Insights](./ip-addresses.md)),
+you can configure Application Insights Java 3.x to use an HTTP proxy:
```json {
- "preview": {
- "instrumentationKeyOverrides": [
- {
- "httpPathPrefix": "/myapp1",
- "instrumentationKey": "12345678-0000-0000-0000-0FEEDDADBEEF"
- },
- {
- "httpPathPrefix": "/myapp2",
- "instrumentationKey": "87654321-0000-0000-0000-0FEEDDADBEEF"
- }
- ]
+ "proxy": {
+ "host": "myproxy",
+ "port": 8080
} } ```
+Application Insights Java 3.x also respects the global `https.proxyHost` and `https.proxyPort` system properties
+if those are set (and `http.nonProxyHosts` if needed).
+ ## Self-diagnostics "Self-diagnostics" refers to internal logging from Application Insights Java 3.x.
azure-monitor Javascript Sdk Load Failure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/javascript-sdk-load-failure.md
The stack details include the basic information with the URLs being used by the
| Name | Description | ||--| | &lt;CDN&nbsp;Endpoint&gt; | The URL that was used (and failed) to download the SDK. |
-| &lt;Help&nbsp;Link&gt; | A URL that links to troubleshooting documentation (this page). |
+| &lt;Help&nbsp;Link&gt; | A URL that links to troubleshooting documentation (this page). |
| &lt;Host&nbsp;URL&gt; | The complete URL of the page that the end user was using. | | &lt;Endpoint&nbsp;URL&gt; | The URL that was used to report the exception, this value may be helpful in identifying whether the hosting page was accessed from the public internet or a private cloud.
The most common reasons for this exception to occur:
The following sections will describe how to troubleshoot each potential root cause of this error. > [!NOTE]
-> Several of the troubleshooting steps assume that your application has direct control of the Snippet &lt;script /&gt; tag and it's configuration that are returned as part of the hosting HTML page. If you don't then those identified steps will not apply for your scenario.
+> Several of the troubleshooting steps assume that your application has direct control of the Snippet &lt;script /&gt; tag and its configuration that is returned as part of the hosting HTML page. If you don't then those identified steps will not apply for your scenario.
## Intermittent network connectivity failure
If there are exceptions being reported in the SDK script (for example ai.2.min.j
To check for faulty configuration, change the configuration passed into the snippet (if not already) so that it only includes your instrumentation key as a string value.
-> src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js",<br />
-> cfg:{<br />
-> instrumentationKey: "INSTRUMENTATION_KEY"<br />
-> }});<br />
+```js
+src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js",
+cfg: {
+ instrumentationKey: "INSTRUMENTATION_KEY"
+}});
+```
If you are still seeing a JavaScript exception in the SDK script when using this minimal configuration, [create a new support ticket](https://azure.microsoft.com/support/create-ticket/), because the faulty build will need to be rolled back; it's probably an issue with a newly deployed version.
If your configuration was previously deployed and working but just started repor
Assuming no exceptions are being thrown, the next step is to enable console debugging by adding the `loggingLevelConsole` setting to the configuration. This setting sends all initialization errors and warnings to the browser's console (normally available via the developer tools (F12)). Any reported errors should be self-explanatory, and if you need further assistance, [file an issue on GitHub](https://github.com/Microsoft/ApplicationInsights-JS/issues).
-> cfg:{<br />
-> instrumentationKey: "INSTRUMENTATION_KEY",<br />
-> loggingLevelConsole: 2<br />
-> }});<br />
+```js
+cfg: {
+ instrumentationKey: "INSTRUMENTATION_KEY",
+ loggingLevelConsole: 2
+}});
+```
> [!NOTE] > During initialization the SDK performs some basic checks for known major dependencies. If these are not provided by the current runtime it will report the failures out as warning messages to the console, but only if the `loggingLevelConsole` is greater than zero.
-If it still fails to initialize, try enabling the ```enableDebug``` configuration setting. This will cause all internal errors to be thrown as an exception (which will cause telemetry to be lost). As this is a developer only setting it will probably get noisy with exceptions getting thrown as part of some internal checks, so you will need to review each exception to determine which issue is causing the SDK to fail. Use the non-minified version of the script (note the extension below it's ".js" and not ".min.js") otherwise the exceptions will be unreadable.
+If it still fails to initialize, try enabling the ```enableDebug``` configuration setting. This will cause all internal errors to be thrown as an exception (which will cause telemetry to be lost). As this is a developer only setting it will probably get noisy with exceptions getting thrown as part of some internal checks, so you will need to review each exception to determine which issue is causing the SDK to fail. Use the non-minified version of the script (note the extension below is ".js" and not ".min.js") otherwise the exceptions will be unreadable.
> [!WARNING] > This is a developer only setting and should NEVER be enabled in a full production environment as you will lose telemetry.
-> src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js",<br />
-> cfg:{<br />
-> instrumentationKey: "INSTRUMENTATION_KEY",<br />
-> enableDebug: true<br />
-> }});<br />
+```js
+src: "https://js.monitor.azure.com/scripts/b/ai.2.js",
+cfg:{
+ instrumentationKey: "INSTRUMENTATION_KEY",
+ enableDebug: true
+}});
+```
If this still does not provide any insights, you should [file an issue on GitHub](https://github.com/Microsoft/ApplicationInsights-JS/issues) with the details and an example site if you have one. Include the browser version, operating system, and JS framework details to help identify the issue.
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/activity-log.md
For some events, you can view the Change history, which shows what changes happe
![Change history list for an event](media/activity-log/change-history-event.png)
-If there are any associated changes with the event, you'll see a list of changes that you can select. This opens up the **Change history (Preview)** page. On this page you see the changes to the resource. In the following example, you can see not only that the VM changed sizes, but what the previous VM size was before the change and what it was changed to. To learn more about change history, see [Get resource changes](../../governance/resource-graph/how-to/get-resource-changes.md).
+If there are any associated changes with the event, you'll see a list of changes that you can select. This opens up the **Change history (Preview)** page. On this page, you see the changes to the resource. In the following example, you can see not only that the VM changed sizes, but what the previous VM size was before the change and what it was changed to. To learn more about change history, see [Get resource changes](../../governance/resource-graph/how-to/get-resource-changes.md).
![Change history page showing differences](media/activity-log/change-history-event-details.png)
You can also access Activity log events using the following methods.
- Use log alerts with Activity entries allowing for more complex alerting logic. - Store Activity log entries for longer than the Activity Log retention period. - No data ingestion charges for Activity log data stored in a Log Analytics workspace.-- No data retention charges until after the Activity Log retention period expires for given entires.
+- No data retention charges for the first 90 days for Activity log data stored in a Log Analytics workspace.
+ [Create a diagnostic setting](./diagnostic-settings.md) to send the Activity log to a Log Analytics workspace. You can send the Activity log from any single subscription to up to five workspaces.
AzureActivity
## Send to Azure Event Hubs
-Send the Activity Log to Azure Event Hubs to send entries outside of Azure, for example to a third-party SIEM or other log analytics solutions. Activity log events from event hubs are consumed in JSON format with a `records` element containing the records in each payload. The schema depends on the category and is described in [Schema from storage account and event hubs](activity-log-schema.md).
+Send the Activity Log to Azure Event Hubs to send entries outside of Azure, for example to a third-party SIEM or other log analytics solutions. Activity log events from Event Hubs are consumed in JSON format with a `records` element containing the records in each payload. The schema depends on the category and is described in [Schema from Storage Account and Event Hubs](activity-log-schema.md).
Following is sample output data from Event Hubs for an Activity log:
Following is sample output data from Event Hubs for an Activity log:
} ``` - ## Send to Azure storage
-Send the Activity Log to an Azure Storage account for audit, static analysis, or backup if you want to retain your log data longer than the Activity Log retention period. There is no need to set up Azure storage unless you need to retain the entries for one of these reasons.
+Send the Activity Log to an Azure Storage Account if you want to retain your log data longer than 90 days for audit, static analysis, or backup. If you only need to retain your events for 90 days or less, you do not need to set up archival to a Storage Account, because Activity Log events are retained in the Azure platform for 90 days.
-When you send the Activity log to Azure, a storage container is created in the storage account as soon as an event occurs. The blobs in the container use the following naming convention:
+When you send the Activity log to Azure, a storage container is created in the Storage Account as soon as an event occurs. The blobs in the container use the following naming convention:
``` insights-activity-logs/resourceId=/SUBSCRIPTIONS/{subscription ID}/y={four-digit numeric year}/m={two-digit numeric month}/d={two-digit numeric day}/h={two-digit 24-hour clock hour}/m=00/PT1H.json
insights-logs-networksecuritygrouprulecounter/resourceId=/SUBSCRIPTIONS/00000000
Each PT1H.json blob contains a JSON blob of events that occurred within the hour specified in the blob URL (for example, h=12). During the present hour, events are appended to the PT1H.json file as they occur. The minute value (m=00) is always 00, since resource log events are broken into individual blobs per hour.
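
For a rough idea of how to consume these blobs, the following is a minimal sketch using the `azure-storage-blob` package; the connection string is a placeholder, and it assumes each line of a PT1H.json blob is a standalone JSON record with the fields shown in the sample below.

```python
# Minimal sketch: read archived Activity log events from the hourly PT1H.json blobs.
# Assumes the azure-storage-blob package, a placeholder connection string, and that each
# line of a blob is a standalone JSON record with fields like "time" and "operationName".
import json
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    conn_str="<storage-account-connection-string>",  # placeholder
    container_name="insights-activity-logs",
)

for blob in container.list_blobs(name_starts_with="resourceId=/SUBSCRIPTIONS/"):
    text = container.download_blob(blob.name).readall().decode("utf-8")
    for line in text.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        print(event["time"], event["operationName"], event["category"])
```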
-Each event is stored in the PT1H.json file with the following format that uses a common top level schema but is otherwise unique for each category as described in [Activity log schema](activity-log-schema.md).
+Each event is stored in the PT1H.json file with the following format that uses a common top-level schema but is otherwise unique for each category as described in [Activity log schema](activity-log-schema.md).
``` JSON { "time": "2020-06-12T13:07:46.766Z", "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/MY-RESOURCE-GROUP/PROVIDERS/MICROSOFT.COMPUTE/VIRTUALMACHINES/MV-VM-01", "correlationId": "0f0cb6b4-804b-4129-b893-70aeeb63997e", "operationName": "Microsoft.Resourcehealth/healthevent/Updated/action", "level": "Information", "resultType": "Updated", "category": "ResourceHealth", "properties": {"eventCategory":"ResourceHealth","eventProperties":{"title":"This virtual machine is starting as requested by an authorized user or process. It will be online shortly.","details":"VirtualMachineStartInitiatedByControlPlane","currentHealthStatus":"Unknown","previousHealthStatus":"Unknown","type":"Downtime","cause":"UserInitiated"}}}
Each event is stored in the PT1H.json file with the following format that uses a
## Legacy collection methods
-This section describes legacy methods for collecting the Activity log that were used prior to diagnostic settings. If you're using these methods, you should consider transitioning to diagnostic settings which provide better functionality and consistency with resource logs.
+This section describes legacy methods for collecting the Activity log that were used prior to diagnostic settings. If you're using these methods, you should consider transitioning to diagnostic settings that provide better functionality and consistency with resource logs.
### Log profiles
-Log profiles are the legacy method for sending the Activity log to Azure storage or event hubs. Use the following procedure to continue working with a log profile or to disable it in preparation for migrating to a diagnostic setting.
+Log profiles are the legacy method for sending the Activity log to Azure storage or Event Hubs. Use the following procedure to continue working with a log profile or to disable it in preparation for migrating to a diagnostic setting.
1. From the **Azure Monitor** menu in the Azure portal, select **Activity log**. 3. Click **Diagnostic settings**.
Log profiles are the legacy method for sending the Activity log to Azure storage
![Legacy experience](media/activity-log/legacy-experience.png) - ### Configure log profile using PowerShell If a log profile already exists, you first need to remove the existing log profile and then create a new one.
If a log profile already exists, you first need to remove the existing log profi
| | | | | Name |Yes |Name of your log profile. | | StorageAccountId |No |Resource ID of the Storage Account where the Activity Log should be saved. |
- | serviceBusRuleId |No |Service Bus Rule ID for the Service Bus namespace you would like to have event hubs created in. This is a string with the format: `{service bus resource ID}/authorizationrules/{key name}`. |
+ | serviceBusRuleId |No |Service Bus Rule ID for the Service Bus namespace you would like to have Event Hubs created in. This is a string with the format: `{service bus resource ID}/authorizationrules/{key name}`. |
| Location |Yes |Comma-separated list of regions for which you would like to collect Activity Log events. |
- | RetentionInDays |Yes |Number of days for which events should be retained in the storage account, between 1 and 365. A value of zero stores the logs indefinitely. |
+ | RetentionInDays |Yes |Number of days for which events should be retained in the Storage Account, between 1 and 365. A value of zero stores the logs indefinitely. |
| Category |No |Comma-separated list of event categories that should be collected. Possible values are _Write_, _Delete_, and _Action_. | ### Example script
-Following is a sample PowerShell script to create a log profile that writes the Activity Log to both a storage account and event hub.
+Following is a sample PowerShell script to create a log profile that writes the Activity Log to both a Storage Account and Event Hub.
```powershell # Settings needed for the new log profile
Following is a sample PowerShell script to create a log profile that writes the
$locations = (Get-AzLocation).Location $locations += "global" $subscriptionId = "<your Azure subscription Id>"
- $resourceGroupName = "<resource group name your event hub belongs to>"
- $eventHubNamespace = "<event hub namespace>"
+ $resourceGroupName = "<resource group name your Event Hub belongs to>"
+ $eventHubNamespace = "<Event Hub namespace>"
# Build the service bus rule Id from the settings above $serviceBusRuleId = "/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.EventHub/namespaces/$eventHubNamespace/authorizationrules/RootManageSharedAccessKey"
- # Build the storage account Id from the settings above
+ # Build the Storage Account Id from the settings above
$storageAccountId = "/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Storage/storageAccounts/$storageAccountName" Add-AzLogProfile -Name $logProfileName -Location $locations -StorageAccountId $storageAccountId -ServiceBusRuleId $serviceBusRuleId
If a log profile already exists, you first need to remove the existing log profi
3. Use `az monitor log-profiles create` to create a new log profile: ```azurecli-interactive
- az monitor log-profiles create --name "default" --location null --locations "global" "eastus" "westus" --categories "Delete" "Write" "Action" --enabled false --days 0 --service-bus-rule-id "/subscriptions/<YOUR SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.EventHub/namespaces/<EVENT HUB NAME SPACE>/authorizationrules/RootManageSharedAccessKey"
+ az monitor log-profiles create --name "default" --location null --locations "global" "eastus" "westus" --categories "Delete" "Write" "Action" --enabled false --days 0 --service-bus-rule-id "/subscriptions/<YOUR SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.EventHub/namespaces/<Event Hub NAME SPACE>/authorizationrules/RootManageSharedAccessKey"
``` | Property | Required | Description | | | | |
To disable the setting, perform the same procedure and click **Disconnect** to r
### Data structure changes Diagnostic settings send the same data as the legacy method used to send the Activity log with some changes to the structure of the *AzureActivity* table.
-The columns in the following table have been deprecated in the updated schema. They still exist in *AzureActivity* but they will have no data. The replacement for these columns are not new, but they contain the same data as the deprecated column. They are in a different format, so you may need to modify log queries that use them.
+The columns in the following table have been deprecated in the updated schema. They still exist in *AzureActivity* but they will have no data. The replacements for these columns are not new, but they contain the same data as the deprecated column. They are in a different format, so you may need to modify log queries that use them.
|Activity Log JSON | Log Analytics column name<br/>*(older deprecated)* | New Log Analytics column name | Notes | |:|:|:|:|
The columns in the following table have been deprecated in the updated schema. T
|operationName | OperationName | OperationNameValue |REST API localizes operation name value. Log Analytics UI always shows English. | |resourceProviderName | ResourceProvider | ResourceProviderValue ||
-> [!IMPORTANT]
+> [!Important]
> In some cases, the values in these columns may be in all uppercase. If you have a query that includes these columns, you should use the [=~ operator](/azure/kusto/query/datatypes-string-operators) to do a case insensitive comparison. The following column have been added to *AzureActivity* in the updated schema:
The following column have been added to *AzureActivity* in the updated schema:
- Properties_d ## Activity Log Analytics monitoring solution
-The Azure Log Analytics monitoring solution will be deprecated soon and replaced by a workbook using the updated schema in the Log Analytics workspace. You can still use the solution if you already have it enabled, but it can only be used if you're collecting the Activity log using legacy settings.
+> [!Note]
+> The Azure Log Analytics monitoring solution will be deprecated soon and replaced by a workbook using the updated schema in the Log Analytics workspace. You can still use the solution if you already have it enabled, but it can only be used if you're collecting the Activity log using legacy settings.
Monitoring solutions are accessed from the **Monitor** menu in the Azure portal.
![Azure Activity Logs tile](media/activity-log/azure-activity-logs-tile.png)
-Click the **Azure Activity Logs** tile to open the **Azure Activity Logs** view. The view includes the visualization parts in the following table. Each part lists up to 10 items matching that parts's criteria for the specified time range. You can run a log query that returns all matching records by clicking **See all** at the bottom of the part.
+Click the **Azure Activity Logs** tile to open the **Azure Activity Logs** view. The view includes the visualization parts in the following table. Each part lists up to 10 items matching that part's criteria for the specified time range. You can run a log query that returns all matching records by clicking **See all** at the bottom of the part.
![Azure Activity Logs dashboard](media/activity-log/activity-log-dash.png) ### Enable the solution for new subscriptions
-You will soon no longer be able to add the Activity Logs Analytics solution to your subscription using the Azure portal. You can add it using the following procedure with a Resource Manager template.
+> [!NOTE]
+>You will soon no longer be able to add the Activity Logs Analytics solution to your subscription using the Azure portal. You can add it using the following procedure with a Resource Manager template.
1. Copy the following json into a file called *ActivityLogTemplate*.json.
azure-netapp-files Configure Kerberos Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/configure-kerberos-encryption.md
na Previously updated : 01/07/2022 Last updated : 01/10/2022 # Configure NFSv4.1 Kerberos encryption for Azure NetApp Files
The following requirements apply to NFSv4.1 client encryption:
* DNS A/PTR record creation for both the client and Azure NetApp Files NFS server IP addresses * A Linux client: This article provides guidance for RHEL and Ubuntu clients. Other clients will work with similar configuration steps. * NTP server access: You can use one of the commonly used Active Directory Domain Controller (AD DC) domain controllers.
+* To use domain or LDAP user authentication, ensure that NFSv4.1 volumes are enabled for LDAP. See [Configure ADDS LDAP with extended groups](configure-ldap-extended-groups.md).
* Ensure that User Principal Names for user accounts do *not* end with a `$` symbol (for example, user$@REALM.COM). <!-- Not using 'contoso.com' in this example; per Mark, A customers REALM namespace may be different from their AD domain name space. --> For [Group managed service accounts](/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts) (gMSA), you need to remove the trailing `$` from the User Principal Name before the account can be used with the Azure NetApp Files Kerberos feature.
azure-resource-manager Common Deployment Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/troubleshooting/common-deployment-errors.md
Title: Troubleshoot common Azure deployment errors
description: Describes common errors for Azure resources deployed with Azure Resource Manager templates (ARM templates) or Bicep files. tags: top-support-issue Previously updated : 11/02/2021 Last updated : 01/10/2022
If your error code isn't listed, submit a GitHub issue. On the right side of the
| AccountNameInvalid | Follow naming restrictions for storage accounts. | [Resolve storage account name](error-storage-account-name.md) | | AccountPropertyCannotBeSet | Check available storage account properties. | [storageAccounts](/azure/templates/microsoft.storage/storageaccounts) | | AllocationFailed | The cluster or region doesn't have resources available or can't support the requested VM size. Retry the request at a later time, or request a different VM size. | [Provisioning and allocation issues for Linux](/troubleshoot/azure/virtual-machines/troubleshoot-deployment-new-vm-linux) <br><br> [Provisioning and allocation issues for Windows](/troubleshoot/azure/virtual-machines/troubleshoot-deployment-new-vm-windows) <br><br> [Troubleshoot allocation failures](/troubleshoot/azure/virtual-machines/allocation-failure)|
-| AnotherOperationInProgress | Wait for concurrent operation to complete. | |
-| AuthorizationFailed | Your account or service principal doesn't have sufficient access to complete the deployment. Check the role your account belongs to, and its access for the deployment scope.<br><br>You might receive this error when a required resource provider isn't registered. | [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md)<br><br>[Resolve registration](error-register-resource-provider.md) |
-| BadRequest | You sent deployment values that don't match what is expected by Resource Manager. Check the inner status message for help with troubleshooting. | [Template reference](/azure/templates/) <br><br> [Supported locations](../templates/resource-location.md) |
-| Conflict | You're requesting an operation that isn't allowed in the resource's current state. For example, disk resizing is allowed only when creating a VM or when the VM is deallocated. | |
+| AnotherOperationInProgress | Wait for concurrent operation to complete. | |
+| AuthorizationFailed | Your account or service principal doesn't have sufficient access to complete the deployment. Check the role your account belongs to, and its access for the deployment scope.<br><br>You might receive this error when a required resource provider isn't registered. | [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md)<br><br>[Resolve registration](error-register-resource-provider.md) |
+| BadRequest | You sent deployment values that don't match what is expected by Resource Manager. Check the inner status message for help with troubleshooting. | [Template reference](/azure/templates/) <br><br> [Resource location in ARM template](../templates/resource-location.md) <br><br> [Resource location in Bicep file](../bicep/resource-declaration.md#location) |
+| Conflict | You're requesting an operation that isn't allowed in the resource's current state. For example, disk resizing is allowed only when creating a VM or when the VM is deallocated. | |
| DeploymentActiveAndUneditable | Wait for concurrent deployment to this resource group to complete. | | | DeploymentFailedCleanUp | When you deploy in complete mode, any resources that aren't in the template are deleted. You get this error when you don't have adequate permissions to delete all of the resources not in the template. To avoid the error, change the deployment mode to incremental. | [Azure Resource Manager deployment modes](../templates/deployment-modes.md) | | DeploymentNameInvalidCharacters | The deployment name can only contain letters, digits, hyphen `(-)`, dot `(.)` or underscore `(_)`. | |
If your error code isn't listed, submit a GitHub issue. On the right side of the
| DeploymentFailed | The DeploymentFailed error is a general error that doesn't provide the details you need to solve the error. Look in the error details for an error code that provides more information. | [Find error code](find-error-code.md) | | DeploymentQuotaExceeded | If you reach the limit of 800 deployments per resource group, delete deployments from the history that are no longer needed. | [Resolve error when deployment count exceeds 800](deployment-quota-exceeded.md) | | DeploymentJobSizeExceeded | Simplify your template to reduce size. | [Resolve template size errors](error-job-size-exceeded.md) |
-| DnsRecordInUse | The DNS record name must be unique. Enter a different name. | |
-| ImageNotFound | Check VM image settings. | |
+| DnsRecordInUse | The DNS record name must be unique. Enter a different name. | |
+| ImageNotFound | Check VM image settings. | |
| InternalServerError | Caused by a temporary problem. Retry the deployment. | |
-| InUseSubnetCannotBeDeleted | This error can occur when you try to update a resource, if the request process deletes and creates the resource. Make sure to specify all unchanged values. | [Update resource](/azure/architecture/guide/azure-resource-manager/advanced-templates/update-resource) |
-| InvalidAuthenticationTokenTenant | Get access token for the appropriate tenant. You can only get the token from the tenant that your account belongs to. | |
-| InvalidContentLink | You've most likely attempted to link to a nested template that isn't available. Double check the URI you provided for the nested template. If the template exists in a storage account, make sure the URI is accessible. You might need to pass a SAS token. Currently, you can't link to a template that is in a storage account behind an [Azure Storage firewall](../../storage/common/storage-network-security.md). Consider moving your template to another repository, like GitHub. | [Linked templates](../templates/linked-templates.md) |
-| InvalidDeploymentLocation | When deploying at the subscription level, you've provided a different location for a previously used deployment name. | [Subscription level deployments](../templates/deploy-to-subscription.md) |
-| InvalidParameter | One of the values you provided for a resource doesn't match the expected value. This error can result from many different conditions. For example, a password may be insufficient, or a blob name may be incorrect. The error message should indicate which value needs to be corrected. | |
-| InvalidRequestContent | The deployment values either include values that aren't recognized, or required values are missing. Confirm the values for your resource type. | [Template reference](/azure/templates/) |
-| InvalidRequestFormat | Enable debug logging when running the deployment, and verify the contents of the request. | [Debug logging](enable-debug-logging.md) |
-| InvalidResourceLocation | Provide a unique name for the storage account. | [Resolve storage account name](error-storage-account-name.md) |
-| InvalidResourceNamespace | Check the resource namespace you specified in the **type** property. | [Template reference](/azure/templates/) |
-| InvalidResourceReference | The resource either doesn't yet exist or is incorrectly referenced. Check whether you need to add a dependency. Verify that your use of the **reference** function includes the required parameters for your scenario. | [Resolve dependencies](error-not-found.md) |
-| InvalidResourceType | Check the resource type you specified in the **type** property. | [Template reference](/azure/templates/) |
-| InvalidSubscriptionRegistrationState | Register your subscription with the resource provider. | [Resolve registration](error-register-resource-provider.md) |
-| InvalidTemplateDeployment <br> InvalidTemplate | Check your template syntax for errors. | [Resolve invalid template](error-invalid-template.md) |
+| InUseSubnetCannotBeDeleted | This error can occur when you try to update a resource, if the request process deletes and creates the resource. Make sure to specify all unchanged values. | [Update resource](/azure/architecture/guide/azure-resource-manager/advanced-templates/update-resource) |
+| InvalidAuthenticationTokenTenant | Get access token for the appropriate tenant. You can only get the token from the tenant that your account belongs to. | |
+| InvalidContentLink | You've most likely attempted to link to a nested template that isn't available. Double check the URI you provided for the nested template. If the template exists in a storage account, make sure the URI is accessible. You might need to pass a SAS token. Currently, you can't link to a template that is in a storage account behind an [Azure Storage firewall](../../storage/common/storage-network-security.md). Consider moving your template to another repository, like GitHub. | [Linked and nested ARM templates](../templates/linked-templates.md) <br><br> [Bicep modules](../bicep/modules.md) |
+| InvalidDeploymentLocation | When deploying at the subscription level, you've provided a different location for a previously used deployment name. | [ARM template subscription deployment](../templates/deploy-to-subscription.md) <br><br> [Bicep subscription deployment](../bicep/deploy-to-subscription.md) |
+| InvalidParameter | One of the values you provided for a resource doesn't match the expected value. This error can result from many different conditions. For example, a password may be insufficient, or a blob name may be incorrect. The error message should indicate which value needs to be corrected. | [ARM template parameters](../templates/parameters.md) <br><br> [Bicep parameters](../bicep/parameters.md) |
+| InvalidRequestContent | The deployment values either include values that aren't recognized, or required values are missing. Confirm the values for your resource type. | [Template reference](/azure/templates/) |
+| InvalidRequestFormat | Enable debug logging when running the deployment, and verify the contents of the request. | [Debug logging](enable-debug-logging.md) |
+| InvalidResourceLocation | Provide a unique name for the storage account. | [Resolve storage account name](error-storage-account-name.md) |
+| InvalidResourceNamespace | Check the resource namespace you specified in the **type** property. | [Template reference](/azure/templates/) |
+| InvalidResourceReference | The resource either doesn't yet exist or is incorrectly referenced. Check whether you need to add a dependency. Verify that your use of the **reference** function includes the required parameters for your scenario. | [Resolve dependencies](error-not-found.md) |
+| InvalidResourceType | Check the resource type you specified in the **type** property. | [Template reference](/azure/templates/) |
+| InvalidSubscriptionRegistrationState | Register your subscription with the resource provider. | [Resolve registration](error-register-resource-provider.md) |
+| InvalidTemplateDeployment <br> InvalidTemplate | Check your template syntax for errors. | [Resolve invalid template](error-invalid-template.md) |
| InvalidTemplateCircularDependency | Remove unnecessary dependencies. | [Resolve circular dependencies](error-invalid-template.md#circular-dependency) |
| JobSizeExceeded | Simplify your template to reduce size. | [Resolve template size errors](error-job-size-exceeded.md) |
-| LinkedAuthorizationFailed | Check if your account belongs to the same tenant as the resource group that you're deploying to. | |
-| LinkedInvalidPropertyId | The resource ID for a resource isn't resolved. Check that you provided all required values for the resource ID. For example, subscription ID, resource group name, resource type, parent resource name (if needed), and resource name. | |
-| LocationRequired | Provide a location for the resource. | [Set location](../templates/resource-location.md) |
+| LinkedAuthorizationFailed | Check if your account belongs to the same tenant as the resource group that you're deploying to. | |
+| LinkedInvalidPropertyId | The resource ID for a resource isn't resolved. Check that you provided all required values for the resource ID. For example, subscription ID, resource group name, resource type, parent resource name (if needed), and resource name. | [Resolve errors for resource name and type](../troubleshooting/error-invalid-name-segments.md) |
+| LocationRequired | Provide a location for the resource. | [Resource location in ARM template](../templates/resource-location.md) <br><br> [Resource location in Bicep file](../bicep/resource-declaration.md#location) |
| MismatchingResourceSegments | Make sure a nested resource has the correct number of segments in name and type. | [Resolve resource segments](error-invalid-template.md#incorrect-segment-lengths) |
-| MissingRegistrationForLocation | Check resource provider registration status and supported locations. | [Resolve registration](error-register-resource-provider.md) |
-| MissingSubscriptionRegistration | Register your subscription with the resource provider. | [Resolve registration](error-register-resource-provider.md) |
-| NoRegisteredProviderFound | Check resource provider registration status. | [Resolve registration](error-register-resource-provider.md) |
-| NotFound | You might be attempting to deploy a dependent resource in parallel with a parent resource. Check if you need to add a dependency. | [Resolve dependencies](error-not-found.md) |
-| OperationNotAllowed | The deployment is attempting an operation that exceeds the quota for the subscription, resource group, or region. If possible, revise your deployment to stay within the quotas. Otherwise, consider requesting a change to your quotas. | [Resolve quotas](error-resource-quota.md) |
-| ParentResourceNotFound | Make sure a parent resource exists before creating the child resources. | [Resolve parent resource](error-parent-resource.md) |
+| MissingRegistrationForLocation | Check resource provider registration status and supported locations. | [Resolve registration](error-register-resource-provider.md) |
+| MissingSubscriptionRegistration | Register your subscription with the resource provider. | [Resolve registration](error-register-resource-provider.md) |
+| NoRegisteredProviderFound | Check resource provider registration status. | [Resolve registration](error-register-resource-provider.md) |
+| NotFound | You might be attempting to deploy a dependent resource in parallel with a parent resource. Check if you need to add a dependency. | [Resolve dependencies](error-not-found.md) |
+| OperationNotAllowed | The deployment is attempting an operation that exceeds the quota for the subscription, resource group, or region. If possible, revise your deployment to stay within the quotas. Otherwise, consider requesting a change to your quotas. | [Resolve quotas](error-resource-quota.md) |
+| ParentResourceNotFound | Make sure a parent resource exists before creating the child resources. | [Resolve parent resource](error-parent-resource.md) |
| PasswordTooLong | You might have selected a password with too many characters, or converted your password value to a secure string before passing it as a parameter. If the template includes a **secure string** parameter, you don't need to convert the value to a secure string. Provide the password value as text. | |
-| PrivateIPAddressInReservedRange | The specified IP address includes an address range required by Azure. Change IP address to avoid reserved range. | [IP addresses](../../virtual-network/ip-services/public-ip-addresses.md) |
-| PrivateIPAddressNotInSubnet | The specified IP address is outside of the subnet range. Change IP address to fall within subnet range. | [IP addresses](../../virtual-network/ip-services/public-ip-addresses.md) |
-| PropertyChangeNotAllowed | Some properties can't be changed on a deployed resource. When updating a resource, limit your changes to permitted properties. | [Update resource](/azure/architecture/guide/azure-resource-manager/advanced-templates/update-resource) |
+| PrivateIPAddressInReservedRange | The specified IP address includes an address range required by Azure. Change IP address to avoid reserved range. | [Private IP addresses](../../virtual-network/ip-services/private-ip-addresses.md) |
+| PrivateIPAddressNotInSubnet | The specified IP address is outside of the subnet range. Change IP address to fall within subnet range. | [Private IP addresses](../../virtual-network/ip-services/private-ip-addresses.md) |
+| PropertyChangeNotAllowed | Some properties can't be changed on a deployed resource. When updating a resource, limit your changes to permitted properties. | [Update resource](/azure/architecture/guide/azure-resource-manager/advanced-templates/update-resource) |
| RequestDisallowedByPolicy | Your subscription includes a resource policy that prevents an action you're trying to do during deployment. Find the policy that blocks the action. If possible, change your deployment to meet the limitations from the policy. | [Resolve policies](error-policy-requestdisallowedbypolicy.md) |
| ReservedResourceName | Provide a resource name that doesn't include a reserved name. | [Reserved resource names](error-reserved-resource-name.md) |
-| ResourceGroupBeingDeleted | Wait for deletion to complete. | |
-| ResourceGroupNotFound | Check the name of the target resource group for the deployment. The target resource group must already exist in your subscription. Check your subscription context. | [Azure CLI](/cli/azure/account?#az_account_set) [PowerShell](/powershell/module/Az.Accounts/Set-AzContext) |
-| ResourceNotFound | Your deployment references a resource that can't be resolved. Verify that your use of the **reference** function includes the parameters required for your scenario. | [Resolve references](error-not-found.md) |
-| ResourceQuotaExceeded | The deployment is trying to create resources that exceed the quota for the subscription, resource group, or region. If possible, revise your infrastructure to stay within the quotas. Otherwise, consider requesting a change to your quotas. | [Resolve quotas](error-resource-quota.md) |
-| SkuNotAvailable | Select SKU (such as VM size) that is available for the location you've selected. | [Resolve SKU](error-sku-not-available.md) |
-| StorageAccountAlreadyExists <br> StorageAccountAlreadyTaken | Provide a unique name for the storage account. | [Resolve storage account name](error-storage-account-name.md) |
-| StorageAccountNotFound | Check the subscription, resource group, and name of the storage account that you're trying to use. | |
-| SubnetsNotInSameVnet | A virtual machine can only have one virtual network. When deploying several NICs, make sure they belong to the same virtual network. | [Multiple NICs](../../virtual-machines/windows/multiple-nics.md) |
-| SubscriptionNotFound | A specified subscription for deployment can't be accessed. It could be the subscription ID is wrong, the user deploying the template doesn't have adequate permissions to deploy to the subscription, or the subscription ID is in the wrong format. When using nested deployments to [deploy across scopes](../templates/deploy-to-resource-group.md), provide the GUID for the subscription. | |
+| ResourceGroupBeingDeleted | Wait for deletion to complete. | |
+| ResourceGroupNotFound | Check the name of the target resource group for the deployment. The target resource group must already exist in your subscription. Check your subscription context. | [Azure CLI](/cli/azure/account#az-account-set) [PowerShell](/powershell/module/Az.Accounts/Set-AzContext) |
+| ResourceNotFound | Your deployment references a resource that can't be resolved. Verify that your use of the **reference** function includes the parameters required for your scenario. | [Resolve references](error-not-found.md) |
+| ResourceQuotaExceeded | The deployment is trying to create resources that exceed the quota for the subscription, resource group, or region. If possible, revise your infrastructure to stay within the quotas. Otherwise, consider requesting a change to your quotas. | [Resolve quotas](error-resource-quota.md) |
+| SkuNotAvailable | Select SKU (such as VM size) that is available for the location you've selected. | [Resolve SKU](error-sku-not-available.md) |
+| StorageAccountAlreadyExists <br> StorageAccountAlreadyTaken | Provide a unique name for the storage account. | [Resolve storage account name](error-storage-account-name.md) |
+| StorageAccountNotFound | Check the subscription, resource group, and name of the storage account that you're trying to use. | |
+| SubnetsNotInSameVnet | A virtual machine can only have one virtual network. When deploying several NICs, make sure they belong to the same virtual network. | [Windows VM multiple NICs](../../virtual-machines/windows/multiple-nics.md) <br><br> [Linux VM multiple NICs](../../virtual-machines/linux/multiple-nics.md) |
+| SubscriptionNotFound | A specified subscription for deployment can't be accessed. It could be the subscription ID is wrong, the user deploying the template doesn't have adequate permissions to deploy to the subscription, or the subscription ID is in the wrong format. When using ARM template nested deployments to deploy across scopes, provide the subscription's GUID. | [ARM template deploy across scopes](../templates/deploy-to-resource-group.md) <br><br> [Bicep file deploy across scopes](../bicep/deploy-to-resource-group.md) |
| SubscriptionNotRegistered | When deploying a resource, the resource provider must be registered for your subscription. When you use an Azure Resource Manager template for deployment, the resource provider is automatically registered in the subscription. Sometimes, the automatic registration doesn't complete in time. To avoid this intermittent error, register the resource provider before deployment. | [Resolve registration](error-register-resource-provider.md) |
| TemplateResourceCircularDependency | Remove unnecessary dependencies. | [Resolve circular dependencies](error-invalid-template.md#circular-dependency) |
-| TooManyTargetResourceGroups | Reduce number of resource groups for a single deployment. | [Cross scope deployment](../templates/deploy-to-resource-group.md) |
+| TooManyTargetResourceGroups | Reduce number of resource groups for a single deployment. | [ARM template deploy across scopes](../templates/deploy-to-resource-group.md) <br><br> [Bicep file deploy across scopes](../bicep/deploy-to-resource-group.md) |
## Next steps
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automated-backups-overview.md
Previously updated : 08/28/2021
Last updated : 01/10/2022

# Automated backups - Azure SQL Database & Azure SQL Managed Instance
For more details about backup storage pricing visit [Azure SQL Database pricing
### Monitor costs
-To understand backup storage costs, go to **Cost Management + Billing** in the Azure portal, select **Cost Management**, and then select **Cost analysis**. Select the desired subscription as the **Scope**, and then filter for the time period and service that you're interested in.
+To understand backup storage costs, go to **Cost Management + Billing** in the Azure portal, select **Cost Management**, and then select **Cost analysis**. Select the desired subscription as the **Scope**, and then filter for the time period and service that you're interested in as follows:
-Add a filter for **Service name**, and then select **sql database** in the drop-down list. Use the **meter subcategory** filter to choose the billing counter for your service. For a single database or an elastic database pool, select **single/elastic pool PITR backup storage**. For a managed instance, select **mi PITR backup storage**. The **Storage** and **compute** subcategories might interest you as well, but they're not associated with backup storage costs.
+1. Add a filter for **Service name**.
+2. In the drop-down list, select **sql database** for a single database or an elastic database pool, or select **sql managed instance** for a managed instance.
+3. Add another filter for **Meter subcategory**.
+4. To monitor PITR backup costs, in the drop-down list select **single/elastic pool pitr backup storage** for a single database or an elastic database pool, or select **managed instance pitr backup storage** for a managed instance. Meters appear only if there is consumption.
+5. To monitor LTR backup costs, in the drop-down list select **ltr backup storage** for a single database or an elastic database pool, or select **sql managed instance - ltr backup storage** for a managed instance. Meters appear only if there is consumption.
+
+The **Storage** and **compute** subcategories might interest you as well, but they're not associated with backup storage costs.
![Backup storage cost analysis](./media/automated-backups-overview/check-backup-storage-cost-sql-mi.png)
- >[!NOTE]
- > Meters are only visible for counters that are currently in use. If a counter is not available, it is likely that the category is not currently being used. For example, managed instance counters will not be present for customers who do not have a managed instance deployed. Likewise, storage counters will not be visible for resources that are not consuming storage.
+ >[!IMPORTANT]
+ > Meters are only visible for counters that are currently in use. If a counter is not available, it is likely that the category is not currently being used. For example, managed instance counters will not be present for customers who do not have a managed instance deployed. Likewise, storage counters will not be visible for resources that are not consuming storage. For example, if there is no PITR or LTR backup storage consumption, these meters won't be shown.
For more information, see [Azure SQL Database cost management](cost-management.md).
azure-sql File Space Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/file-space-manage.md
Previously updated : 08/09/2021
Last updated : 1/4/2022

# Manage file space for databases in Azure SQL Database

[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
Understanding the following storage space quantities is important for managing
|**Data max size**|The maximum amount of data space that can be used by the elastic pool for all of its databases.|The space allocated for the elastic pool should not exceed the elastic pool max size. If this condition occurs, then space allocated that is unused can be reclaimed by shrinking database data files.|

> [!NOTE]
-> The error message "The elastic pool has reached its storage limit" indicates that the database objects have been allocated enough space to meet the elastic pool storage limit, but there may be unused space in the data space allocation. Consider increasing the elastic pool's storage limit, or as a short-term solution, freeing up data space using the [**Reclaim unused allocated space**](#reclaim-unused-allocated-space) section below. You should also be aware of the potential negative performance impact of shrinking database files, see [**Rebuild indexes**](#rebuild-indexes) section below.
+> The error message "The elastic pool has reached its storage limit" indicates that the database objects have been allocated enough space to meet the elastic pool storage limit, but there may be unused space in the data space allocation. Consider increasing the elastic pool's storage limit, or as a short-term solution, freeing up data space as described in the [Reclaim unused allocated space](#reclaim-unused-allocated-space) section below. Also be aware of the potential negative performance impact of shrinking database files; see the [Index maintenance after shrink](#rebuild-indexes) section below.
## Query an elastic pool for storage space information
ORDER BY end_time DESC;
> [!IMPORTANT]
> Shrink commands impact database performance while running, and if possible should be run during periods of low usage.
-### Shrinking data files
+### <a name="shrinking-data-files"></a>Shrink data files
Because of a potential impact to database performance, Azure SQL Database does not automatically shrink data files. However, customers may shrink data files via self-service at a time of their choosing. This should not be a regularly scheduled operation, but rather, a one-time event in response to a major reduction in data file used space consumption.
-In Azure SQL Database, to shrink files you can use the `DBCC SHRINKDATABASE` or `DBCC SHRINKFILE` commands:
+> [!TIP]
+> Shrinking data files isn't recommended if the regular application workload will cause the files to grow to the same allocated size again.
-- `DBCC SHRINKDATABASE` will shrink all database data and log files, which is typically unnecessary. The command shrinks one file at a time. It will also [shrink the log file](#shrinking-transaction-log-file). Azure SQL Database automatically shrinks log files, if necessary.
+In Azure SQL Database, to shrink files you can use either `DBCC SHRINKDATABASE` or `DBCC SHRINKFILE` commands:
+
+- `DBCC SHRINKDATABASE` shrinks all data and log files in a database using a single command. The command shrinks one data file at a time, which can take a long time for larger databases. It also [shrinks the log file](#shrinking-transaction-log-file), which is usually unnecessary because Azure SQL Database shrinks log files automatically as needed.
- `DBCC SHRINKFILE` command supports more advanced scenarios:
  - It can target individual files as needed, rather than shrinking all files in the database.
- - Each `DBCC SHRINKFILE` command can run in parallel with other `DBCC SHRINKFILE` commands to shrink the database faster, at the expense of higher resource usage and a higher chance of blocking user queries, if they are executing during shrink.
- - If the tail of the file does not contain data, it can reduce allocated file size much faster by specifying the TRUNCATEONLY argument. This does not require data movement within the file.
-- For more information about these shrink commands, see [DBCC SHRINKDATABASE](/sql/t-sql/database-console-commands/dbcc-shrinkdatabase-transact-sql) or [DBCC SHRINKFILE](/sql/t-sql/database-console-commands/dbcc-shrinkfile-transact-sql).
+ - Each `DBCC SHRINKFILE` command can run in parallel with other `DBCC SHRINKFILE` commands to shrink multiple files at the same time and reduce the total time of shrink, at the expense of higher resource usage and a higher chance of blocking user queries, if they are executing during shrink.
+ - If the tail of the file does not contain data, it can reduce allocated file size much faster by specifying the `TRUNCATEONLY` argument. This does not require data movement within the file.
+- For more information about these shrink commands, see [DBCC SHRINKDATABASE](/sql/t-sql/database-console-commands/dbcc-shrinkdatabase-transact-sql) and [DBCC SHRINKFILE](/sql/t-sql/database-console-commands/dbcc-shrinkfile-transact-sql).
The following examples must be executed while connected to the target user database, not the `master` database.
To use `DBCC SHRINKDATABASE` to shrink all data and log files in a given databas
DBCC SHRINKDATABASE (N'database_name');
```
-In Azure SQL Database, a database may have one or more data files. Additional data files can only be created automatically. To determine file layout of your database, query the `sys.database_files` catalog view using the following sample script:
+In Azure SQL Database, a database may have one or more data files, created automatically as data grows. To determine file layout of your database, including the used and allocated size of each file, query the `sys.database_files` catalog view using the following sample script:
```sql
-- Review file properties, including file_id values to reference in shrink commands
+-- Review file properties, including file_id and name values to reference in shrink commands
SELECT file_id, name, CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8 / 1024. AS space_used_mb, CAST(size AS bigint) * 8 / 1024. AS space_allocated_mb,
- CAST(max_size AS bigint) * 8 / 1024. AS max_size_mb
+ CAST(max_size AS bigint) * 8 / 1024. AS max_file_size_mb
FROM sys.database_files WHERE type_desc IN ('ROWS','LOG');
-GO
```
-Execute a shrink against one file only via the `DBCC SHRINKFILE` command, for example:
+You can execute a shrink against one file only via the `DBCC SHRINKFILE` command, for example:
```sql
-- Shrink the database data file named 'data_0' by removing all unused space at the end of the file, if any.
DBCC SHRINKFILE ('data_0', TRUNCATEONLY);
GO
```
-You should also be aware of the potential negative performance impact of shrinking database files, see the [Rebuild indexes](#rebuild-indexes) section below.
+Be aware of the potential negative performance impact of shrinking database files; see the [Index maintenance after shrink](#rebuild-indexes) section below.
### Shrinking transaction log file
In Premium and Business Critical service tiers, if the transaction log becomes l
The following example should be executed while connected to the target user database, not the master database.
-```tsql
Shrink the database log file (always file_id = 2), by removing all unused space at the end of the file, if any.
+```sql
+-- Shrink the database log file (always file_id 2), by removing all unused space at the end of the file, if any.
DBCC SHRINKFILE (2, TRUNCATEONLY);
```

### Auto-shrink
-Alternatively, auto-shrink can be enabled for a database. However, auto shrink can be less effective in reclaiming file space than `DBCC SHRINKDATABASE` and `DBCC SHRINKFILE`.
-
-Auto-shrink can be helpful in the specific scenario where an elastic pool contains many databases that experience significant growth and reduction in data file space used. This is not a common scenario.
+As an alternative to shrinking data files manually, auto-shrink can be enabled for a database. However, auto-shrink can be less effective in reclaiming file space than `DBCC SHRINKDATABASE` and `DBCC SHRINKFILE`.
By default, auto-shrink is disabled, which is recommended for most databases. If it becomes necessary to enable auto-shrink, it is recommended to disable it once space management goals have been achieved, instead of keeping it enabled permanently. For more information, see [Considerations for AUTO_SHRINK](/troubleshoot/sql/admin/considerations-autogrow-autoshrink#considerations-for-auto_shrink).
-To enable auto-shrink, execute the following command while connected to your database (not in the master database).
+For example, auto-shrink can be helpful in the specific scenario where an elastic pool contains many databases that experience significant growth and reduction in data file space used, causing the pool to approach its maximum size limit. This is not a common scenario.
+
+To enable auto-shrink, execute the following command while connected to your database (not the master database).
```sql
-- Enable auto-shrink for the current database.
ALTER DATABASE CURRENT SET AUTO_SHRINK ON;
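
-- A follow-up sketch: once space management goals have been achieved, disable auto-shrink again, as recommended above.
ALTER DATABASE CURRENT SET AUTO_SHRINK OFF;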
For more information about this command, see [DATABASE SET](/sql/t-sql/statements/alter-database-transact-sql-set-options) options.
-### <a name="rebuild-indexes"></a> Index maintenance before or after shrink
+### <a name="rebuild-indexes"></a> Index maintenance after shrink
+
+After a shrink operation is completed against data files, indexes may become fragmented. This reduces their performance optimization effectiveness for certain workloads, such as queries using large scans. If performance degradation occurs after the shrink operation is complete, consider index maintenance to rebuild indexes. Keep in mind that index rebuilds require free space in the database, and hence may cause the allocated space to increase, counteracting the effect of shrink.
+
+For more information about index maintenance, see [Optimize index maintenance to improve query performance and reduce resource consumption](/sql/relational-databases/indexes/reorganize-and-rebuild-indexes).
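+
+For example, a minimal post-shrink rebuild sketch (the table name `dbo.SalesOrder` is a hypothetical placeholder; substitute the tables whose indexes show degraded performance):
+
+```sql
+-- Rebuild all indexes on a hypothetical table; add WITH (ONLINE = ON) to keep the table available during the rebuild
+ALTER INDEX ALL ON dbo.SalesOrder REBUILD;
+```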
+
+## Shrink large databases
+
+When database allocated space is in hundreds of gigabytes or higher, shrink may require a significant time to complete, often measured in hours, or days for multi-terabyte databases. There are process optimizations and best practices you can use to make this process more efficient and less impactful to application workloads.
+
+### Capture space usage baseline
+
+Before starting shrink, capture the current used and allocated space in each database file by executing the following space usage query:
+
+```sql
+SELECT file_id,
+ CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8 / 1024. AS space_used_mb,
+ CAST(size AS bigint) * 8 / 1024. AS space_allocated_mb,
+ CAST(max_size AS bigint) * 8 / 1024. AS max_size_mb
+FROM sys.database_files
+WHERE type_desc = 'ROWS';
+```
+
+Once shrink has completed, you can execute this query again and compare the result to the initial baseline.
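+
+If you want to compare the results programmatically rather than by hand, one option (a sketch, using a temporary table) is to persist the baseline before starting shrink:
+
+```sql
+-- Persist the baseline so it can be compared with a fresh snapshot after shrink completes
+DROP TABLE IF EXISTS #space_usage_baseline;
+
+SELECT file_id,
+       CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8 / 1024. AS space_used_mb,
+       CAST(size AS bigint) * 8 / 1024. AS space_allocated_mb
+INTO #space_usage_baseline
+FROM sys.database_files
+WHERE type_desc = 'ROWS';
+```
+
+Because a temporary table lasts only for your session, keep that session open for the duration of the shrink, or use a permanent table instead.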
+
+### Truncate data files
+
+It is recommended to first execute shrink for each data file with the `TRUNCATEONLY` parameter. This way, if there is any allocated but unused space at the end of the file, it will be removed quickly and without any data movement. The following sample command truncates data file with file_id 4:
+
+```sql
+DBCC SHRINKFILE (4, TRUNCATEONLY);
+```
+
+Once this command is executed for every data file, you can rerun the space usage query to see the reduction in allocated space, if any. You can also view allocated space for the database in Azure portal.
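+
+If the database has many data files, you can generate the truncate commands instead of writing them by hand. A minimal sketch that builds one `TRUNCATEONLY` command per data file (copy the generated statements into a query window and run them):
+
+```sql
+-- Generate a DBCC SHRINKFILE ... TRUNCATEONLY command for each data file in the current database
+SELECT CONCAT('DBCC SHRINKFILE (', file_id, ', TRUNCATEONLY);') AS shrink_command
+FROM sys.database_files
+WHERE type_desc = 'ROWS';
+```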
+
+### Evaluate index page density
+
+If truncating data files did not result in a sufficient reduction in allocated space, you will need to shrink data files. However, as an optional but recommended step, you should first determine average page density for indexes in the database. For the same amount of data, shrink will complete faster if page density is high, because it will have to move fewer pages. If page density is low for some indexes, consider performing maintenance on these indexes to increase page density before shrinking data files. This will also let shrink achieve a deeper reduction in allocated storage space.
+
+To determine page density for all indexes in the database, use the following query. Page density is reported in the `avg_page_space_used_in_percent` column.
+
+```sql
+SELECT OBJECT_SCHEMA_NAME(ips.object_id) AS schema_name,
+ OBJECT_NAME(ips.object_id) AS object_name,
+ i.name AS index_name,
+ i.type_desc AS index_type,
+ ips.avg_page_space_used_in_percent,
+ ips.avg_fragmentation_in_percent,
+ ips.page_count,
+ ips.alloc_unit_type_desc,
+ ips.ghost_record_count
+FROM sys.dm_db_index_physical_stats(DB_ID(), default, default, default, 'SAMPLED') AS ips
+INNER JOIN sys.indexes AS i
+ON ips.object_id = i.object_id
+ AND
+ ips.index_id = i.index_id
+ORDER BY page_count DESC;
+```
+
+If there are indexes with high page count that have page density lower than 60-70%, consider rebuilding or reorganizing these indexes before shrinking data files.
+
+> [!NOTE]
+> For larger databases, the query to determine page density may take a long time (hours) to complete. Additionally, rebuilding or reorganizing large indexes also requires substantial time and resource usage. There is a tradeoff between spending extra time on increasing page density on one hand, and reducing shrink duration and achieving higher space savings on another.
+
+Following is a sample command to rebuild an index and increase its page density:
+
+```sql
+ALTER INDEX [index_name] ON [schema_name].[table_name] REBUILD WITH (FILLFACTOR = 100, MAXDOP = 8, ONLINE = ON (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = NONE)), RESUMABLE = ON);
+```
+
+This command initiates an online and resumable index rebuild. This lets concurrent workloads continue using the table while the rebuild is in progress, and lets you resume the rebuild if it gets interrupted for any reason. However, this type of rebuild is slower than an offline rebuild, which blocks access to the table. If no other workloads need to access the table during rebuild, set the `ONLINE` and `RESUMABLE` options to `OFF` and remove the `WAIT_AT_LOW_PRIORITY` clause.
+
+If there are multiple indexes with low page density, you may be able to rebuild them in parallel on multiple database sessions to speed up the process. However, make sure that you are not approaching database resource limits by doing so, and leave sufficient resource headroom for application workloads that may be running. Monitor resource consumption (CPU, Data IO, Log IO) in Azure portal or using the [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) view, and start additional parallel rebuilds only if resource utilization on each of these dimensions remains substantially lower than 100%. If CPU, Data IO, or Log IO utilization is at 100%, you can scale up the database to have more CPU cores and increase IO throughput. This may enable additional parallel rebuilds to complete the process faster.
+
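+For example, a quick headroom check (a sketch) before starting another parallel rebuild or shrink:
+
+```sql
+-- Review recent resource utilization; values near 100% indicate there is no headroom for additional parallel work
+SELECT TOP (10) end_time,
+       avg_cpu_percent,
+       avg_data_io_percent,
+       avg_log_write_percent
+FROM sys.dm_db_resource_stats
+ORDER BY end_time DESC;
+```
+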
+To learn more about index maintenance, see [Optimize index maintenance to improve query performance and reduce resource consumption](/sql/relational-databases/indexes/reorganize-and-rebuild-indexes).
+
+### Shrink multiple data files
+
+As noted earlier, shrink with data movement is a long-running process. If the database has multiple data files, you can speed up the process by shrinking multiple data files in parallel. You do this by opening multiple database sessions, and using `DBCC SHRINKFILE` on each session with a different `file_id` value. Similar to rebuilding indexes earlier, make sure you have sufficient resource headroom (CPU, Data IO, Log IO) before starting each new parallel shrink command.
+
+The following sample command shrinks data file with file_id 4, attempting to reduce its allocated size to 52000 MB by moving pages within the file:
+
+```sql
+DBCC SHRINKFILE (4, 52000);
+```
+
+If you want to reduce allocated space for the file to the minimum possible, execute the statement without specifying the target size:
+
+```sql
+DBCC SHRINKFILE (4);
+```
+
+If a workload is running concurrently with shrink, it may start using the storage space freed by shrink before shrink completes and truncates the file. In this case, shrink will not be able to reduce allocated space to the specified target.
+
+You can mitigate this by shrinking each file in smaller steps. This means that in the `DBCC SHRINKFILE` command, you set the target that is slightly smaller than the current allocated space for the file, as seen in the results of [baseline space usage query](#capture-space-usage-baseline). For example, if allocated space for file with file_id 4 is 200,000 MB, and you want to shrink it to 100,000 MB, you can first set the target to 170,000 MB:
+
+```sql
+DBCC SHRINKFILE (4, 170000);
+```
+
+Once this command completes, it will have truncated the file and reduced its allocated size to 170,000 MB. You can then repeat this command, setting target first to 140,000 MB, then to 110,000 MB, etc., until the file is shrunk to the desired size. If the command completes but the file is not truncated, use smaller steps, for example 15,000 MB rather than 30,000 MB.
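+
+If you prefer to automate the stepped approach, the following sketch shrinks a file in decreasing steps using dynamic SQL (the file_id, target, and step values are placeholders; adjust them for your database):
+
+```sql
+DECLARE @file_id int = 4;           -- file to shrink
+DECLARE @target_mb bigint = 100000; -- desired allocated size in MB
+DECLARE @step_mb bigint = 30000;    -- size of each shrink step in MB
+DECLARE @current_mb bigint, @next_target_mb bigint, @cmd nvarchar(200);
+
+SELECT @current_mb = CAST(size AS bigint) * 8 / 1024
+FROM sys.database_files
+WHERE file_id = @file_id;
+
+WHILE @current_mb > @target_mb
+BEGIN
+    -- Shrink by one step, but never below the final target
+    SET @next_target_mb = IIF(@current_mb - @step_mb > @target_mb, @current_mb - @step_mb, @target_mb);
+    SET @cmd = CONCAT('DBCC SHRINKFILE (', @file_id, ', ', @next_target_mb, ');');
+    EXEC sys.sp_executesql @cmd;
+
+    SELECT @current_mb = CAST(size AS bigint) * 8 / 1024
+    FROM sys.database_files
+    WHERE file_id = @file_id;
+
+    -- Stop if the file could not be truncated to the step target, for example because a concurrent workload reused the freed space
+    IF @current_mb > @next_target_mb BREAK;
+END;
+```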
+
+To monitor shrink progress for all concurrently running shrink sessions, you can use the following query:
+
+```sql
+SELECT command,
+ percent_complete,
+ status,
+ wait_resource,
+ session_id,
+ wait_type,
+ blocking_session_id,
+ cpu_time,
+ reads,
+ CAST(((DATEDIFF(s,start_time, GETDATE()))/3600) AS varchar) + ' hour(s), '
+ + CAST((DATEDIFF(s,start_time, GETDATE())%3600)/60 AS varchar) + 'min, '
+ + CAST((DATEDIFF(s,start_time, GETDATE())%60) AS varchar) + ' sec' AS running_time
+FROM sys.dm_exec_requests AS r
+LEFT JOIN sys.databases AS d
+ON r.database_id = d.database_id
+WHERE r.command IN ('DbccSpaceReclaim','DbccFilesCompact','DbccLOBCompact','DBCC');
+```
+
+> [!NOTE]
+> Shrink progress may be non-linear, and the value in the `percent_complete` column may remain virtually unchanged for long periods of time, even though shrink is still in progress.
+
+Once shrink has completed for all data files, rerun the [space usage query](#capture-space-usage-baseline) (or check in the Azure portal) to determine the resulting reduction in allocated storage size. If it is insufficient and there is still a large difference between used space and allocated space, you can [rebuild indexes](#evaluate-index-page-density) as described earlier. This may temporarily increase allocated space further; however, shrinking data files again after rebuilding indexes should result in a deeper reduction in allocated space.
+
+## Transient errors during shrink
+
+Occasionally, a shrink command may fail with various errors such as timeouts and deadlocks. In general, these errors are transient, and do not occur again if the same command is repeated. If shrink fails with an error, the progress it has made so far in moving data pages is retained, and the same shrink command can be executed again to continue shrinking the file.
+
+The following sample script shows how you can run shrink in a retry loop to automatically retry up to a configurable number of times when a timeout error or a deadlock error occurs. This retry approach is applicable to many other errors that may occur during shrink.
+
+```sql
+DECLARE @RetryCount int = 3; -- adjust to configure desired number of retries
+DECLARE @Delay char(12);
+
+-- Retry loop
+WHILE @RetryCount >= 0
+BEGIN
+
+BEGIN TRY
+
+DBCC SHRINKFILE (1); -- adjust file_id and other shrink parameters
+
+-- Exit retry loop on successful execution
+SELECT @RetryCount = -1;
+
+END TRY
+BEGIN CATCH
+ -- Retry for the declared number of times without raising an error if deadlocked or timed out waiting for a lock
+ IF ERROR_NUMBER() IN (1205, 49516) AND @RetryCount > 0
+ BEGIN
+ SELECT @RetryCount -= 1;
+
+ PRINT CONCAT('Retry at ', SYSUTCDATETIME());
+
+ -- Wait for a random period of time between 1 and 10 seconds before retrying
+ SELECT @Delay = '00:00:0' + CAST(CAST(1 + RAND() * 8.999 AS decimal(5,3)) AS varchar(5));
+ WAITFOR DELAY @Delay;
+ END
+ ELSE -- Raise error and exit loop
+ BEGIN
+ SELECT @RetryCount = -1;
+ THROW;
+ END
+END CATCH
+END;
+```
+
+In addition to timeouts and deadlocks, shrink may encounter errors due to certain known issues.
+
+The errors returned and mitigation steps are as follows:
+
+- **Error number: 49503**, error message: _%.*ls: Page %d:%d could not be moved because it is an off-row persistent version store page. Page holdup reason: %ls. Page holdup timestamp: %I64d._
+
+This error occurs when there are long-running active transactions that have generated row versions in the persistent version store (PVS). The pages containing these row versions cannot be moved by shrink, so it cannot make progress and fails with this error.
+
+To mitigate this error, wait until these long-running transactions have completed. Alternatively, you can identify and terminate these long-running transactions, but this can impact your application if it does not handle transaction failures gracefully. One way to find long-running transactions is by running the following query in the database where you ran the shrink command:
+
+```sql
+-- Transactions sorted by duration
+SELECT st.session_id,
+ dt.database_transaction_begin_time,
+ DATEDIFF(second, dt.database_transaction_begin_time, CURRENT_TIMESTAMP) AS transaction_duration_seconds,
+ dt.database_transaction_log_bytes_used,
+ dt.database_transaction_log_bytes_reserved,
+ st.is_user_transaction,
+ st.open_transaction_count,
+ ib.event_type,
+ ib.parameters,
+ ib.event_info
+FROM sys.dm_tran_database_transactions AS dt
+INNER JOIN sys.dm_tran_session_transactions AS st
+ON dt.transaction_id = st.transaction_id
+OUTER APPLY sys.dm_exec_input_buffer(st.session_id, default) AS ib
+WHERE dt.database_id = DB_ID()
+ORDER BY transaction_duration_seconds DESC;
+```
+
+You can terminate a transaction by using the `KILL` command and specifying the associated `session_id` value from query result:
+
+```sql
+KILL 4242; -- replace 4242 with the session_id value from query results
+```
+
+> [!CAUTION]
+> Terminating a transaction may negatively impact workloads.
+
+Once long-running transactions have been terminated or have completed, an internal background task cleans up row versions that are no longer needed after some time. You can monitor PVS size to gauge cleanup progress by using the following query. Run the query in the database where you ran the shrink command:
+
+```sql
+SELECT pvss.persistent_version_store_size_kb / 1024. / 1024 AS persistent_version_store_size_gb,
+ pvss.online_index_version_store_size_kb / 1024. / 1024 AS online_index_version_store_size_gb,
+ pvss.current_aborted_transaction_count,
+ pvss.aborted_version_cleaner_start_time,
+ pvss.aborted_version_cleaner_end_time,
+ dt.database_transaction_begin_time AS oldest_transaction_begin_time,
+ asdt.session_id AS active_transaction_session_id,
+ asdt.elapsed_time_seconds AS active_transaction_elapsed_time_seconds
+FROM sys.dm_tran_persistent_version_store_stats AS pvss
+LEFT JOIN sys.dm_tran_database_transactions AS dt
+ON pvss.oldest_active_transaction_id = dt.transaction_id
+ AND
+ pvss.database_id = dt.database_id
+LEFT JOIN sys.dm_tran_active_snapshot_database_transactions AS asdt
+ON pvss.min_transaction_timestamp = asdt.transaction_sequence_num
+ OR
+ pvss.online_index_min_transaction_timestamp = asdt.transaction_sequence_num
+WHERE pvss.database_id = DB_ID();
+```
+
+Once PVS size reported in the `persistent_version_store_size_gb` column is substantially reduced compared to its original size, rerunning shrink should succeed.
+
+- **Error number: 5223**, error message: _%.*ls: Empty page %d:%d could not be deallocated._
+
+This error may occur if there are ongoing index maintenance operations such as `ALTER INDEX`. Retry the shrink command after these operations are complete.
+
+If this error persists, the associated index might have to be rebuilt. To find the index to rebuild, execute the following query in the same database where you ran the shrink command:
+
+```sql
+SELECT OBJECT_SCHEMA_NAME(pg.object_id) AS schema_name,
+ OBJECT_NAME(pg.object_id) AS object_name,
+ i.name AS index_name,
+ p.partition_number
+FROM sys.dm_db_page_info(DB_ID(), <file_id>, <page_id>, default) AS pg
+INNER JOIN sys.indexes AS i
+ON pg.object_id = i.object_id
+ AND
+ pg.index_id = i.index_id
+INNER JOIN sys.partitions AS p
+ON pg.partition_id = p.partition_id;
+```
+
+Before executing this query, replace the `<file_id>` and `<page_id>` placeholders with the actual values from the error message you received. For example, if the message is _Empty page 1:62669 could not be deallocated_, then `<file_id>` is `1` and `<page_id>` is `62669`.
+
+Rebuild the index identified by the query, and retry the shrink command.
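+
+For example, a rebuild sketch (the schema, table, and index names are placeholders; substitute the values returned by the query above):
+
+```sql
+-- Rebuild the index that owns the page reported in the error, keeping the table online
+ALTER INDEX [IX_Orders_CustomerId] ON [dbo].[Orders] REBUILD WITH (ONLINE = ON);
+```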
-After a shrink operation is completed against data files, indexes may become fragmented and lose their performance optimization effectiveness for certain workloads, such as queries using large scans. If performance degradation occurs after the shrink operation is complete, consider index maintenance to rebuild indexes.
+- **Error number: 5201**, error message: _DBCC SHRINKDATABASE: File ID %d of database ID %d was skipped because the file does not have enough free space to reclaim._
-If page density in the database is low, a shrink will take longer because it will have to move more pages in each data file. Microsoft recommends determining average page density before executing shrink commands. If page density is low, rebuild or reorganize indexes to increase page density before running shrink. For more information, including a sample script to determine page density, see [Optimize index maintenance to improve query performance and reduce resource consumption](/sql/relational-databases/indexes/reorganize-and-rebuild-indexes).
+This error means that the data file cannot be shrunk further. You can move on to the next data file.
## Next steps
azure-sql Migrate Sqlite Db To Azure Sql Serverless Offline Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/migrate-sqlite-db-to-azure-sql-serverless-offline-tutorial.md
# How to migrate your SQLite database to Azure SQL Database serverless [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-For many people, SQLite provides their first experience of databases and SQL programming. It's inclusion in many operating systems and popular applications makes SQLite one the most widely deployed and used database engines in the world. And because it is likely the first database engine many people use, it can often end up as a central part of projects or applications. In such cases where the project or application outgrows the initial SQLite implementation, developers may need to migrate their data to a reliable, centralized data store.
+For many people, SQLite provides their first experience of databases and SQL programming. Its inclusion in many operating systems and popular applications makes SQLite one of the most widely deployed and used database engines in the world. And because it is likely the first database engine many people use, it can often end up as a central part of projects or applications. In such cases where the project or application outgrows the initial SQLite implementation, developers may need to migrate their data to a reliable, centralized data store.
Azure SQL Database Serverless is a compute tier for single databases that automatically scales compute based on workload demand, and bills for the amount of compute used per second. The serverless compute tier also automatically pauses databases during inactive periods when only storage is billed and automatically resumes databases when activity returns.
Once you have followed the below steps, your database will be migrated into Azur
## Next steps

- To get started, see [Quickstart: Create a single database in Azure SQL Database using the Azure portal](single-database-create-quickstart.md).
-- For resource limits, see [Serverless compute tier resource limits](./resource-limits-vcore-single-databases.md#general-purposeserverless-computegen5).
+- For resource limits, see [Serverless compute tier resource limits](./resource-limits-vcore-single-databases.md#general-purposeserverless-computegen5).
azure-sql Recovery Using Backups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/recovery-using-backups.md
Previously updated : 01/07/2021
Last updated : 01/10/2022

# Recover using automated database backups - Azure SQL Database & SQL Managed Instance

[!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
For a large or very active database, the restore might take several hours. If th
For a single subscription, there are limitations on the number of concurrent restore requests. These limitations apply to any combination of point-in-time restores, geo-restores, and restores from long-term retention backup.
+> [!NOTE]
+> Very large restores on SQL Managed Instance that last for more than 36 hours can be prolonged if a critical system update is pending. In that case, the current restore operation is paused, the critical system update is applied, and the restore is resumed after the update has completed.
+
| **Deployment option** | **Max # of concurrent requests being processed** | **Max # of concurrent requests being submitted** |
| : | --: | --: |
|**Single database (per subscription)**|30|100|
For a sample PowerShell script showing how to restore a deleted instance databas
## Geo-restore

> [!IMPORTANT]
-> Geo-restore is available only for SQL databases or managed instances configured with geo-redundant [backup storage](automated-backups-overview.md#backup-storage-redundancy).
+> - Geo-restore is available only for SQL databases or managed instances configured with geo-redundant [backup storage](automated-backups-overview.md#backup-storage-redundancy).
+> - Geo-restore can be performed on SQL databases or managed instances residing in the same subscription only.
You can restore a database on any SQL Database server or an instance database on any managed instance in any Azure region from the most recent geo-replicated backups. Geo-restore uses a geo-replicated backup as its source. You can request geo-restore even if the database or datacenter is inaccessible due to an outage.
azure-video-analyzer Policy Definitions Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/policy-definitions-security.md
Azure Video Analyzer provides several built-in [Azure Policy](../../governance/p
Video Analyzer provides several common use case definitions for Azure Policy that are built-in to help you get started. This article explains how to assign policies for a Video Analyzer account using the Azure portal.
-## Built-in Azure Policy definitions for Video Analyzer
+## Built-in Azure Policy definitions
The following built-in policy definitions are available for use with Video Analyzer.
The following built-in policy definitions are available for use with Video Analy
Use the Azure portal to [create a policy assignment](../../governance/policy/assign-policy-portal.md) for your Video Analyzer account using the built-in policy definition.
+> [!NOTE]
+> Follow the [quickstart to create a policy assignment](../../governance/policy/assign-policy-portal.md) but use the policy definition applicable for Video Analyzer by selecting **Type = Built-in** and typing "Video Analyzer" in the Search tab.
+
> [!div class="mx-imgBorder"]
> :::image type="content" source="./media/security-policy/built-in-policy.png" alt-text="Screenshot to assign a built-in policy for Video Analyzer.":::
azure-video-analyzer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/release-notes.md
The audio effects detection capability was improved to have a better detection r
For more information, see [Audio effects detection](audio-effects-detection.md).
+### New source languages support for STT, translation, and search on the website
+
+Video Analyzer for Media introduces source languages support for STT (speech-to-text), translation, and search in Hebrew (he-IL), Portuguese (pt-PT), and Persian (fa-IR) on the [Video Analyzer for Media](https://www.videoindexer.ai/) website.
+It means transcription, translation, and search features are also supported for these languages in Video Analyzer for Media web applications and widgets.
+
## December 2021

### The projects feature is now GA
azure-video-analyzer Video Indexer Output Json V2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-output-json-v2.md
Example:
"end":"0:00:17.03" } ]
-},
+}
```

#### ocr
Example:
|confidence|The recognition confidence.|
|language|The OCR language.|
|instances|A list of time ranges where this OCR appeared (the same OCR can appear multiple times).|
-|height|The height of the OCR rectangle|
-|top|The top location in px|
-|left| The left location in px|
-|width|The width of the OCR rectangle|
+|height|The height of the OCR rectangle.|
+|top|The top location in px.|
+|left|The left location in px.|
+|width|The width of the OCR rectangle.|
+|angle|The angle of the OCR rectangle, from -180 to 180. 0 means left to right horizontal, 90 means top to bottom vertical, 180 means right to left horizontal, and -90 means bottom to top vertical. 30 means from top left to bottom right. |
```json
"ocr": [
Example:
"language": "en-US", "left": 31, "top": 97,
- "width": 400,
+ "width": 400,
+ "angle": 30,
"instances": [ { "start": "00:00:26",
Video Analyzer for Media makes inference of main topics from transcripts. When p
}, ` ` ` ```
+
## Next steps

[Video Analyzer for Media Developer Portal](https://api-portal.videoindexer.ai)
cognitive-services Export Your Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/export-your-model.md
Custom Vision Service allows classifiers to be exported to run offline. You can
Custom Vision Service supports the following exports:
-* __Tensorflow__ for __Android__.
-* **TensorflowJS** for JavaScript frameworks like React, Angular, and Vue. This will run on both **Android** and **iOS** devices.
+* __TensorFlow__ for __Android__.
+* **TensorFlow.js** for JavaScript frameworks like React, Angular, and Vue. This will run on both **Android** and **iOS** devices.
* __CoreML__ for __iOS11__.
* __ONNX__ for __Windows ML__, **Android**, and **iOS**.
* __[Vision AI Developer Kit](https://azure.github.io/Vision-AI-DevKit-Pages/)__.
To export the model after retraining, use the following steps:
Integrate your exported model into an application by exploring one of the following articles or samples:
-* [Use your Tensorflow model with Python](export-model-python.md)
+* [Use your TensorFlow model with Python](export-model-python.md)
* [Use your ONNX model with Windows Machine Learning](custom-vision-onnx-windows-ml.md)
* See the sample for [CoreML model in an iOS application](https://go.microsoft.com/fwlink/?linkid=857726) for real-time image classification with Swift.
-* See the sample for [Tensorflow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample) for real-time image classification on Android.
+* See the sample for [TensorFlow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample) for real-time image classification on Android.
* See the sample for [CoreML model with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel) for real-time image classification in a Xamarin iOS app.
cognitive-services How To Migrate To Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-migrate-to-custom-neural-voice.md
# Migrate from custom voice to custom neural voice > [!IMPORTANT]
-> We are retiring the standard/non-neural training tier of custom voice on February 29, 2024. During the retiring period (3/1/2021 - 2/29/2024), existing standard tier users can continue to use their non-neural models created, but all new users who sign up for speech resources from **3/1/2021** should move to the neural tier/custom neural voice. After 2/29/2024, all standard/non-neural custom voices will no longer be supported.
+> We are retiring the standard non-neural training tier of custom voice from March 1, 2021 through February 29, 2024. If you used a non-neural custom voice with your Speech resource prior to March 1, 2021, you can continue to do so until February 29, 2024. All other Speech resources can only use custom neural voice. After February 29, 2024, the non-neural custom voices won't be supported with any Speech resource.
The custom neural voice lets you build higher-quality voice models while requiring less data. You can develop more realistic, natural, and conversational voices. Your customers and end users will benefit from the latest Text-to-Speech technology, in a responsible way.
cognitive-services How To Migrate To Prebuilt Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-migrate-to-prebuilt-neural-voice.md
# Migrate from prebuilt standard voice to prebuilt neural voice > [!IMPORTANT]
-> We are retiring the standard voices on August 31, 2024. During the retiring period (9/1/2021 - 8/31/2024), existing standard voice users can continue to use standard voices, but all new users who sign up for speech resources from **9/1/2021** should choose [neural voice names](language-support.md#prebuilt-neural-voices) in your speech synthesis request. After 8/31/2024, the standard voices will no longer be supported in your speech synthesis request.
+> We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with your Speech resource prior to September 1, 2021, you can continue to do so until August 31, 2024. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md#prebuilt-neural-voices). After August 31, 2024, the standard voices won't be supported with any Speech resource.
The prebuilt neural voice provides more natural sounding speech output, and thus, a better end-user experience.
cognitive-services Migration Overview Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/migration-overview-neural-voice.md
We're retiring two features from [Text-to-Speech](index-text-to-speech.yml) capa
## Custom voice (non-neural training) > [!IMPORTANT]
-> We are retiring the standard/non-neural training tier of custom voice on February 29, 2024. During the retiring period (3/1/2021 - 2/29/2024), existing standard tier users can continue to use their non-neural models created, but all new users who sign up for speech resources from **3/1/2021** should move to the neural tier/custom neural voice. After 2/29/2024, all standard/non-neural custom voices will no longer be supported.
+> We are retiring the standard non-neural training tier of custom voice from March 1, 2021 through February 29, 2024. If you used a non-neural custom voice with your Speech resource prior to March 1, 2021, you can continue to do so until February 29, 2024. All other Speech resources can only use custom neural voice. After February 29, 2024, the non-neural custom voices won't be supported with any Speech resource.
Go to [this article](how-to-migrate-to-custom-neural-voice.md) to learn how to migrate to custom neural voice.
Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-s
## Prebuilt standard voice > [!IMPORTANT]
-> We are retiring the standard voices on August 31, 2024. During the retiring period (9/1/2021 - 8/31/2024), existing standard voice users can continue to use standard voices, but all new users who sign up for speech resources from **9/1/2021** should choose [neural voice names](language-support.md#prebuilt-neural-voices) in your speech synthesis request. After 8/31/2024, the standard voices will no longer be supported in your speech synthesis request.
+> We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with your Speech resource prior to September 1, 2021, you can continue to do so until August 31, 2024. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md#prebuilt-neural-voices). After August 31, 2024, the standard voices won't be supported with any Speech resource.
Go to [this article](how-to-migrate-to-prebuilt-neural-voice.md) to learn how to migrate to prebuilt neural voice.
cognitive-services Art Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/big-data/recipes/art-explorer.md
-+ Previously updated : 07/06/2020 Last updated : 01/10/2022 ms.devlang: python
cognitive-services Azure Kubernetes Recipe https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/containers/azure-kubernetes-recipe.md
-+ Previously updated : 10/11/2021 Last updated : 01/10/2022
cognitive-services Migrate Language Service Latest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/concepts/migrate-language-service-latest.md
-+ Previously updated : 12/03/2021 Last updated : 01/10/2022
cognitive-services Multiple Languages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/conversational-language-understanding/concepts/multiple-languages.md
-+ Previously updated : 11/02/2021 Last updated : 01/10/2022
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/conversational-language-understanding/faq.md
-+ Previously updated : 11/02/2021 Last updated : 01/10/2022
container-instances Container Instances Region Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-region-availability.md
The following regions and maximum resources are available to container groups wi
| Australia Southeast | 4 | 14 | N/A | N/A | 50 | N/A | N | | Brazil South | 4 | 16 | 2 | 8 | 50 | N/A | Y | | Canada Central | 4 | 16 | 4 | 16 | 50 | N/A | N |
-| Canada East | 4 | 16 | 4 | 16 | 50 | N/A | N |
+| Canada East | 4 | 16 | N/A | N/A | 50 | N/A | N |
| Central India | 4 | 16 | 4 | 4 | 50 | V100 | N | | Central US | 4 | 16 | 4 | 16 | 50 | N/A | Y | | East Asia | 4 | 16 | 4 | 16 | 50 | N/A | N |
cosmos-db How To Restrict User Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-restrict-user-data.md
Title: Restrict user access to data operations only with Azure Cosmos DB description: Learn how to restrict access to data operations only with Azure Cosmos DB-+ Last updated 12/9/2019-+
cosmos-db Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/certificate-based-authentication.md
Title: Certificate-based authentication with Azure Cosmos DB and Active Directory description: Learn how to configure an Azure AD identity for certificate-based authentication to access keys from Azure Cosmos DB.-+ Last updated 06/11/2019-+
cosmos-db Sql Query Equality Comparison Operators https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-equality-comparison-operators.md
+
+ Title: Equality and comparison operators in Azure Cosmos DB
+description: Learn about SQL equality and comparison operators supported by Azure Cosmos DB.
++++ Last updated : 01/07/2022+++
+# Equality and comparison operators in Azure Cosmos DB
+
+This article details the equality and comparison operators supported by Azure Cosmos DB.
+
+## Understanding equality comparisons
+
+The following table shows the result of equality comparisons in the SQL API between any two JSON types.
+
+| **Op** | **Undefined** | **Null** | **Boolean** | **Number** | **String** | **Object** | **Array** |
+|||||||||
+| **Undefined** | Undefined | Undefined | Undefined | Undefined | Undefined | Undefined | Undefined |
+| **Null** | Undefined | **Ok** | Undefined | Undefined | Undefined | Undefined | Undefined |
+| **Boolean** | Undefined | Undefined | **Ok** | Undefined | Undefined | Undefined | Undefined |
+| **Number** | Undefined | Undefined | Undefined | **Ok** | Undefined | Undefined | Undefined |
+| **String** | Undefined | Undefined | Undefined | Undefined | **Ok** | Undefined | Undefined |
+| **Object** | Undefined | Undefined | Undefined | Undefined | Undefined | **Ok** | Undefined |
+| **Array** | Undefined | Undefined | Undefined | Undefined | Undefined | Undefined | **Ok** |
+
+For comparison operators such as `>`, `>=`, `!=`, `<`, and `<=`, comparison across types or between two objects or arrays produces `Undefined`.
+
+If the result of the scalar expression is `Undefined`, the item isn't included in the result, because `Undefined` doesn't equal `true`.
+
+For example, the following query's comparison between a number and string value produces `Undefined`. Therefore, the filter does not include any results.
+
+```sql
+SELECT *
+FROM c
+WHERE 7 = 'a'
+```
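+
+By contrast, here's a minimal sketch of a same-type comparison (assuming items that have a numeric `grade` property); the filter evaluates to `true` or `false` rather than `Undefined`, so matching items are returned:
+
+```sql
+SELECT *
+FROM c
+WHERE c.grade >= 5
+```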
+
+## Next steps
+
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Keywords](sql-query-keywords.md)
+- [SELECT clause](sql-query-select.md)
cosmos-db Sql Query Linq To Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-linq-to-sql.md
The LINQ provider included with the SQL .NET SDK supports the following operator
- **Array functions**: Supports translation from .NET `Concat`, `Contains`, and `Count` to the equivalent [built-in array functions](sql-query-array-functions.md). - **Geospatial Extension functions**: Supports translation from stub methods `Distance`, `IsValid`, `IsValidDetailed`, and `Within` to the equivalent [built-in geospatial functions](sql-query-geospatial-query.md). - **User-Defined Function Extension function**: Supports translation from the stub method [CosmosLinq.InvokeUserDefinedFunction](/dotnet/api/microsoft.azure.cosmos.linq.cosmoslinq.invokeuserdefinedfunction?view=azure-dotnet&preserve-view=true) to the corresponding [user-defined function](sql-query-udfs.md).-- **Miscellaneous**: Supports translation of `Coalesce` and conditional [operators](sql-query-operators.md). Can translate `Contains` to String CONTAINS, ARRAY_CONTAINS, or IN, depending on context.
+- **Miscellaneous**: Supports translation of `Coalesce` and [conditional operators](sql-query-logical-operators.md). Can translate `Contains` to String CONTAINS, ARRAY_CONTAINS, or IN, depending on context.
## Examples
cosmos-db Sql Query Logical Operators https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-logical-operators.md
+
+ Title: Logical operators in Azure Cosmos DB
+description: Learn about SQL logical operators supported by Azure Cosmos DB.
++++ Last updated : 01/07/2022+++
+# Logical operators in Azure Cosmos DB
+
+This article details the logical operators supported by Azure Cosmos DB.
+
+## Understanding logical (AND, OR, and NOT) operators
+
+Logical operators operate on Boolean values. The following tables show the logical truth tables for these operators:
+
+**OR operator**
+
+Returns `true` when either of the conditions is `true`.
+
+| | **True** | **False** | **Undefined** |
+| | | | |
+| **True** |True |True |True |
+| **False** |True |False |Undefined |
+| **Undefined** |True |Undefined |Undefined |
+
+**AND operator**
+
+Returns `true` when both expressions are `true`.
+
+| | **True** | **False** | **Undefined** |
+| | | | |
+| **True** |True |False |Undefined |
+| **False** |False |False |False |
+| **Undefined** |Undefined |False |Undefined |
+
+**NOT operator**
+
+Reverses the value of any Boolean expression.
+
+| | **NOT** |
+| | |
+| **True** |False |
+| **False** |True |
+| **Undefined** |Undefined |
+
+**Operator Precedence**
+
+The logical operators `OR`, `AND`, and `NOT` have the precedence level shown below:
+
+| **Operator** | **Priority** |
+| | |
+| **NOT** |1 |
+| **AND** |2 |
+| **OR** |3 |
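+
+As a sketch of how this precedence applies (assuming items with a Boolean `isArchived` property and a numeric `grade` property), the following filter is evaluated as `(NOT c.isArchived) AND (c.grade > 5)`, not as `NOT (c.isArchived AND c.grade > 5)`:
+
+```sql
+SELECT *
+FROM c
+WHERE NOT c.isArchived AND c.grade > 5
+```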
+
+## * operator
+
+The special operator * projects the entire item as is. When used, it must be the only projected field. A query like `SELECT * FROM Families f` is valid, but `SELECT VALUE * FROM Families f` and `SELECT *, f.id FROM Families f` are not valid.
+
+## Next steps
+
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Keywords](sql-query-keywords.md)
+- [SELECT clause](sql-query-select.md)
cosmos-db Sql Query Operators https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-operators.md
- Title: SQL query operators for Azure Cosmos DB
-description: Learn about SQL operators such as equality, comparison, and logical operators supported by Azure Cosmos DB.
---- Previously updated : 07/29/2020---
-# Operators in Azure Cosmos DB
-
-This article details the various operators supported by Azure Cosmos DB.
-
-## Equality and Comparison Operators
-
-The following table shows the result of equality comparisons in the SQL API between any two JSON types.
-
-| **Op** | **Undefined** | **Null** | **Boolean** | **Number** | **String** | **Object** | **Array** |
-|||||||||
-| **Undefined** | Undefined | Undefined | Undefined | Undefined | Undefined | Undefined | Undefined |
-| **Null** | Undefined | **Ok** | Undefined | Undefined | Undefined | Undefined | Undefined |
-| **Boolean** | Undefined | Undefined | **Ok** | Undefined | Undefined | Undefined | Undefined |
-| **Number** | Undefined | Undefined | Undefined | **Ok** | Undefined | Undefined | Undefined |
-| **String** | Undefined | Undefined | Undefined | Undefined | **Ok** | Undefined | Undefined |
-| **Object** | Undefined | Undefined | Undefined | Undefined | Undefined | **Ok** | Undefined |
-| **Array** | Undefined | Undefined | Undefined | Undefined | Undefined | Undefined | **Ok** |
-
-For comparison operators such as `>`, `>=`, `!=`, `<`, and `<=`, comparison across types or between two objects or arrays produces `Undefined`.
-
-If the result of the scalar expression is `Undefined`, the item isn't included in the result, because `Undefined` doesn't equal `true`.
-
-For example, the following query's comparison between a number and string value produces `Undefined`. Therefore, the filter does not include any results.
-
-```sql
-SELECT *
-FROM c
-WHERE 7 = 'a'
-```
-
-## Logical (AND, OR and NOT) operators
-
-Logical operators operate on Boolean values. The following tables show the logical truth tables for these operators:
-
-**OR operator**
-
-Returns `true` when either of the conditions is `true`.
-
-| | **True** | **False** | **Undefined** |
-| | | | |
-| **True** |True |True |True |
-| **False** |True |False |Undefined |
-| **Undefined** |True |Undefined |Undefined |
-
-**AND operator**
-
-Returns `true` when both expressions are `true`.
-
-| | **True** | **False** | **Undefined** |
-| | | | |
-| **True** |True |False |Undefined |
-| **False** |False |False |False |
-| **Undefined** |Undefined |False |Undefined |
-
-**NOT operator**
-
-Reverses the value of any Boolean expression.
-
-| | **NOT** |
-| | |
-| **True** |False |
-| **False** |True |
-| **Undefined** |Undefined |
-
-**Operator Precedence**
-
-The logical operators `OR`, `AND`, and `NOT` have the precedence level shown below:
-
-| **Operator** | **Priority** |
-| | |
-| **NOT** |1 |
-| **AND** |2 |
-| **OR** |3 |
-
-## * operator
-
-The special operator * projects the entire item as is. When used, it must be the only projected field. A query like `SELECT * FROM Families f` is valid, but `SELECT VALUE * FROM Families f` and `SELECT *, f.id FROM Families f` are not valid.
-
-## ? and ?? operators
-
-You can use the Ternary (?) and Coalesce (??) operators to build conditional expressions, as in programming languages like C# and JavaScript.
-
-You can use the ? operator to construct new JSON properties on the fly. For example, the following query classifies grade levels into `elementary` or `other`:
-
-```sql
- SELECT (c.grade < 5)? "elementary": "other" AS gradeLevel
- FROM Families.children[0] c
-```
-
-You can also nest calls to the ? operator, as in the following query:
-
-```sql
- SELECT (c.grade < 5)? "elementary": ((c.grade < 9)? "junior": "high") AS gradeLevel
- FROM Families.children[0] c
-```
-
-As with other query operators, the ? operator excludes items if the referenced properties are missing or the types being compared are different.
-
-Use the ?? operator to efficiently check for a property in an item when querying against semi-structured or mixed-type data. For example, the following query returns `lastName` if present, or `surname` if `lastName` isn't present.
-
-```sql
- SELECT f.lastName ?? f.surname AS familyName
- FROM Families f
-```
-
-## Next steps
--- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)-- [Keywords](sql-query-keywords.md)-- [SELECT clause](sql-query-select.md)
cosmos-db Sql Query Scalar Expressions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-scalar-expressions.md
The [SELECT clause](sql-query-select.md) supports scalar expressions. A scalar e
- `unary_operator <scalar_expression>`
- Represents an operator that is applied to a single value. See [Operators](sql-query-operators.md) section for details.
+ Represents an operator that is applied to a single value.
- `<scalar_expression> binary_operator <scalar_expression>`
- Represents an operator that is applied to two values. See [Operators](sql-query-operators.md) section for details.
+ Represents an operator that is applied to two values.
- `<scalar_function_expression>`
cosmos-db Sql Query Ternary Coalesce Operators https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-ternary-coalesce-operators.md
+
+ Title: Ternary and coalesce operators in Azure Cosmos DB
+description: Learn about SQL ternary and coalesce operators supported by Azure Cosmos DB.
++++ Last updated : 01/07/2022+++
+# Ternary and coalesce operators in Azure Cosmos DB
+
+This article details the ternary and coalesce operators supported by Azure Cosmos DB.
+
+## Understanding ternary and coalesce operators
+
+You can use the Ternary (?) and Coalesce (??) operators to build conditional expressions, as in programming languages like C# and JavaScript.
+
+You can use the ? operator to construct new JSON properties on the fly. For example, the following query classifies grade levels into `elementary` or `other`:
+
+```sql
+ SELECT (c.grade < 5)? "elementary": "other" AS gradeLevel
+ FROM Families.children[0] c
+```
+
+You can also nest calls to the ? operator, as in the following query:
+
+```sql
+ SELECT (c.grade < 5)? "elementary": ((c.grade < 9)? "junior": "high") AS gradeLevel
+ FROM Families.children[0] c
+```
+
+As with other query operators, the ? operator excludes items if the referenced properties are missing or the types being compared are different.
+
+Use the ?? operator to efficiently check for a property in an item when querying against semi-structured or mixed-type data. For example, the following query returns `lastName` if present, or `surname` if `lastName` isn't present.
+
+```sql
+ SELECT f.lastName ?? f.surname AS familyName
+ FROM Families f
+```
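+
+The right-hand side of `??` can be any scalar expression, including a literal. As a minimal sketch (assuming a hypothetical `nickname` property), the following query falls back to a literal default when the property is undefined:
+
+```sql
+ SELECT f.nickname ?? "N/A" AS displayName
+ FROM Families f
+```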
+
+## Next steps
+
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Keywords](sql-query-keywords.md)
+- [SELECT clause](sql-query-select.md)
cosmos-db Sql Query Working With Json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-working-with-json.md
Azure Cosmos DB supports two helpful type checking system functions for `null` a
* [IS_NULL](sql-query-is-null.md) - checks if a property value is `null` * [IS_DEFINED](sql-query-is-defined.md) - checks if a property value is defined
-You can learn about [supported operators](sql-query-operators.md) and their behavior for `null` and `undefined` values.
+You can learn about [supported operators](sql-query-equality-comparison-operators.md) and their behavior for `null` and `undefined` values.
## Reserved keywords and special characters in JSON
cost-management-billing Aws Integration Set Up Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/aws-integration-set-up-configure.md
Title: Set up AWS integration with Azure Cost Management
description: This article walks you through setting up and configuring AWS Cost and Usage report integration with Cost Management. Previously updated : 10/07/2021 Last updated : 01/10/2022
With Amazon Web Services (AWS) Cost and Usage report (CUR) integration, you moni
Cost Management processes the AWS Cost and Usage report stored in an S3 bucket by using your AWS access credentials to get report definitions and download report GZIP CSV files.
-Watch the video [How to set up Connectors for AWS in Cost Management](https://www.youtube.com/watch?v=Jg5KC1cx5cA) to learn more about how to set up AWS report integration. To watch other videos, visit the [Cost Management YouTube channel](https://www.youtube.com/c/AzureCostManagement).
-
->[!VIDEO https://www.youtube.com/embed/Jg5KC1cx5cA]
- ## Create a Cost and Usage report in AWS Using a Cost and Usage report is the AWS-recommended way to collect and process AWS costs. The Cost Management cross cloud connector supports cost and usage reports configured at the management (consolidated) account level. For more information, see the [AWS Cost and Usage Report](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-reports-costusage.html) documentation.
Add permission for AWS Organizations:
1. Enter **Organizations**. 2. Select **Access level** > **List** > **ListAccounts**. This action gets the names of the accounts.
-3. In **Review Policy**, enter a name for the new policy. Check that you entered the correct information, and then select **Create Policy**.
-4. Go back to the previous tab and refresh your browser's webpage. On the search bar, search for your new policy.
-5. Select **Next: Review**.
-6. Enter a name for the new role. Check that you entered the correct information, and then select **Create Role**.
+3. Select **Add Additional permissions**.
+
+Configure permissions for IAM policies and roles:
+
+1. Enter **IAM**.
+1. Select **Access level** > **List**, and then select **ListAttachedRolePolicies**, **ListPolicyVersions**, and **ListRoles**.
+1. Select **Access level** > **Read** > **GetPolicyVersion**.
+1. Select **Resources** > policy, and then select **Any**. These actions allow verification that only the minimal required set of permissions was granted to the connector.
+1. Select role > **Add ARN**. The account number should be populated automatically.
+1. In **Role name with path**, enter a role name and make a note of it. You'll need it in the final role-creation step.
+1. Select **Add**.
+1. Select **Next: Tags**. You can enter any tags you want to use, or skip this step. Tags aren't required to create a connector in Cost Management.
+1. Select **Next: Review Policy**.
+1. In **Review Policy**, enter a name for the new policy. Verify that you entered the correct information, and then select **Create Policy**.
+1. Go back to the previous tab and refresh the policies list. On the search bar, search for your new policy.
+1. Select **Next: Review**.
+1. Enter the same role name you defined and noted while configuring the IAM permissions. Verify that you entered the correct information, and then select **Create Role**.
- Note the role ARN and the external ID used in the preceding steps when you created the role. You'll use them later when you set up the Cost Management connector.
+Note the role ARN and the external ID used in the preceding steps when you created the role. You'll use them later when you set up the Cost Management connector.
-The policy JSON should resemble the following example. Replace _bucketname_ with the name of your S3 bucket.
+The policy JSON should resemble the following example. Replace `bucketname` with the name of your S3 bucket, `accountnumber` with your account number, and `rolename` with the role name you created.
-```JSON
+```json
{ "Version": "2012-10-17", "Statement": [
The policy JSON should resemble the following example. Replace _bucketname_ with
"Sid": "VisualEditor0", "Effect": "Allow", "Action": [
-"organizations:ListAccounts",
- "ce:*",
- "cur:DescribeReportDefinitions"
+ "organizations:ListAccounts",
+ "iam:ListRoles",
+ "ce:*",
+ "cur:DescribeReportDefinitions"
], "Resource": "*" },
The policy JSON should resemble the following example. Replace _bucketname_ with
"Action": [ "s3:GetObject", "s3:ListBucket"
+ "iam:GetPolicyVersion",
+ "iam:ListPolicyVersions",
+ "iam:ListAttachedRolePolicies",
], "Resource": [ "arn:aws:s3:::bucketname", "arn:aws:s3:::bucketname/*"
+ "arn:aws:iam::accountnumber:policy/*",
+ "arn:aws:iam::accountnumber:role/rolename"
] } ]
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/managed-virtual-network-private-endpoint.md
Generally, managed Virtual network is available to all Azure Data Factory region
### Outbound communications through public endpoint from ADF Managed Virtual Network - All ports are opened for outbound communications.-- Azure Storage and Azure Data Lake Gen2 are not supported to be connected through public endpoint from ADF Managed Virtual Network. ### Linked Service creation of Azure Key Vault - When you create a Linked Service for Azure Key Vault, there is no Azure Integration Runtime reference. So you can't create Private Endpoint during Linked Service creation of Azure Key Vault. But when you create Linked Service for data stores which references Azure Key Vault Linked Service and this Linked Service references Azure Integration Runtime with Managed Virtual Network enabled, then you are able to create a Private Endpoint for the Azure Key Vault Linked Service during the creation.
databox Data Box Customer Managed Encryption Key Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-customer-managed-encryption-key-portal.md
Previously updated : 11/19/2020 Last updated : 01/10/2022 + # Use customer-managed keys in Azure Key Vault for Azure Data Box
-Azure Data Box protects the device unlock key (also known as the device password), which is used to lock a device, via an encryption key. By default, this encryption key is a Microsoft managed key. For additional control, you can use a customer-managed key.
+Azure Data Box protects the device unlock key (also known as the device password), which is used to lock a device, via an encryption key. By default, this encryption key is a Microsoft managed key. For more control, you can use a customer-managed key.
Using a customer-managed key doesn't affect how data on the device is encrypted. It only affects how the device unlock key is encrypted.
This article applies to Azure Data Box and Azure Data Box Heavy devices.
The customer-managed key for a Data Box order must meet the following requirements: - The key must be created and stored in an Azure Key Vault that has **Soft delete** and **Do not purge** enabled. For more information, see [What is Azure Key Vault?](../key-vault/general/overview.md). You can create a key vault and key while creating or updating your order.- - The key must be an RSA key of 2048 size or larger.
+- You must enable the `Get`, `UnwrapKey`, and `WrapKey` permissions for the key in Azure Key Vault. The permissions must remain in place for the lifetime of the order. Otherwise, the customer-managed key can't be accessed at the start of the Data Copy phase.
## Enable key
To enable a customer-managed key for your existing Data Box order in the Azure p
![A selected user identity shown in Encryption type settings](./media/data-box-customer-managed-encryption-key-portal/customer-managed-key-15.png)
- 9. Select **Save** to save the updated **Encryption type** settings.
+ 8. Select **Save** to save the updated **Encryption type** settings.
![Save your customer-managed key](./media/data-box-customer-managed-encryption-key-portal/customer-managed-key-10.png) The key URL is displayed under **Encryption type**.
- ![Customer-managed key URL](./media/data-box-customer-managed-encryption-key-portal/customer-managed-key-11.png)<!--Probably need new screen from recent order. Can you provide one? I can't create an order using CMK with the subscription I'm using.-->
+ ![Customer-managed key URL](./media/data-box-customer-managed-encryption-key-portal/customer-managed-key-11.png)
+
+> [!IMPORTANT]
+> You must enable the `Get`, `UnwrapKey`, and `WrapKey` permissions on the key. To set the permissions in Azure CLI, see [az keyvault set-policy](/cli/azure/keyvault?view=azure-cli-latest#az_keyvault_set_policy).
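+
+For example, here's a minimal Azure CLI sketch of granting those permissions; the key vault name and the principal ID of the identity used for the order are placeholders:
+
+```azurecli
+az keyvault set-policy --name <key-vault-name> --object-id <identity-principal-id> --key-permissions get unwrapKey wrapKey
+```
+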
## Change key
To change the key vault, key, and/or key version for the customer-managed key yo
![Save updated encryption settings - 1](./media/data-box-customer-managed-encryption-key-portal/customer-managed-key-17-a.png)
+> [!IMPORTANT]
+> You must enable the `Get`, `UnwrapKey`, and `WrapKey` permissions on the key. To set the permissions in Azure CLI, see [az keyvault set-policy](/cli/azure/keyvault?view=azure-cli-latest#az_keyvault_set_policy).
+ ## Change identity
-To change the identity used to manage access to the customer-managed key for this order, follow these steps:
+To change the identity that's used to manage access to the customer-managed key for this order, follow these steps:
1. On the **Overview** screen for your completed Data Box order, go to **Settings** > **Encryption**.
To change the identity used to manage access to the customer-managed key for thi
![Save updated encryption settings - 2](./media/data-box-customer-managed-encryption-key-portal/customer-managed-key-17-a.png) + ## Use Microsoft managed key To change from using a customer-managed key to the Microsoft managed key for your order, follow these steps:
If you receive any errors related to your customer-managed key, use the followin
| Error code| Error details| Recoverable?| |-|--||
-| SsemUserErrorEncryptionKeyDisabled| Could not fetch the passkey as the customer managed key is disabled.| Yes, by enabling the key version.|
-| SsemUserErrorEncryptionKeyExpired| Could not fetch the passkey as the customer managed key has expired.| Yes, by enabling the key version.|
-| SsemUserErrorKeyDetailsNotFound| Could not fetch the passkey as the customer managed key could not be found.| If you deleted the key vault, you can't recover the customer-managed key. If you migrated the key vault to a different tenant, see [Change a key vault tenant ID after a subscription move](../key-vault/general/move-subscription.md). If you deleted the key vault:<ol><li>Yes, if it is in the purge-protection duration, using the steps at [Recover a key vault](../key-vault/general/key-vault-recovery.md?tabs=azure-powershell#key-vault-powershell).</li><li>No, if it is beyond the purge-protection duration.</li></ol><br>Else if the key vault underwent a tenant migration, yes, it can be recovered using one of the below steps: <ol><li>Revert the key vault back to the old tenant.</li><li>Set `Identity = None` and then set the value back to `Identity = SystemAssigned`. This deletes and recreates the identity once the new identity has been created. Enable `Get`, `Wrap`, and `Unwrap` permissions to the new identity in the key vault's Access policy.</li></ol> |
-| SsemUserErrorKeyVaultBadRequestException | Applied a customer managed key but the key access has not been granted or has been revoked, or unable to access key vault due to firewall being enabled. | Add the identity selected to your key vault to enable access to the customer managed key. If key vault has firewall enabled, switch to a system assigned identity and then add a customer managed key. For more information, see how to [Enable the key](#enable-key). |
-| SsemUserErrorKeyVaultDetailsNotFound| Could not fetch the passkey as the associated key vault for the customer managed key could not be found. | If you deleted the key vault, you can't recover the customer-managed key. If you migrated the key vault to a different tenant, see [Change a key vault tenant ID after a subscription move](../key-vault/general/move-subscription.md). If you deleted the key vault:<ol><li>Yes, if it is in the purge-protection duration, using the steps at [Recover a key vault](../key-vault/general/key-vault-recovery.md?tabs=azure-powershell#key-vault-powershell).</li><li>No, if it is beyond the purge-protection duration.</li></ol><br>Else if the key vault underwent a tenant migration, yes, it can be recovered using one of the below steps: <ol><li>Revert the key vault back to the old tenant.</li><li>Set `Identity = None` and then set the value back to `Identity = SystemAssigned`. This deletes and recreates the identity once the new identity has been created. Enable `Get`, `Wrap`, and `Unwrap` permissions to the new identity in the key vault's Access policy.</li></ol> |
-| SsemUserErrorSystemAssignedIdentityAbsent | Could not fetch the passkey as the customer managed key could not be found.| Yes, check if: <ol><li>Key vault still has the MSI in the access policy.</li><li>Identity is of type System assigned.</li><li>Enable Get, Wrap and Unwrap permissions to the identity in the key vaultΓÇÖs Access policy.</li></ol>|
-| SsemUserErrorUserAssignedLimitReached | Adding new User Assigned Identity failed as you have reached the limit on the total number of user assigned identities that can be added. | Please retry the operation with fewer user identities or remove some user assigned identities from the resource before retrying. |
-| SsemUserErrorCrossTenantIdentityAccessForbidden | Managed identity access operation failed. <br> Note: This is for the scenario when subscription is moved to different tenant. Customer has to manually move the identity to new tenant. PFA mail for more details. | Please move the identity selected to the new tenant under which the subscription is present. For more information, see how to [Enable the key](#enable-key). |
-| SsemUserErrorKekUserIdentityNotFound | Applied a customer managed key but the user assigned identity that has access to the key was not found in the active directory. <br> Note: This is for the case when user identity is deleted from Azure.| Please try adding a different user assigned identity selected to your key vault to enable access to the customer managed key. For more information, see how to [Enable the key](#enable-key). |
-| SsemUserErrorUserAssignedIdentityAbsent | Could not fetch the passkey as the customer managed key could not be found. | Could not access the customer managed key. Either the User Assigned Identity (UAI) associated with the key is deleted or the UAI type has changed. |
-| SsemUserErrorCrossTenantIdentityAccessForbidden | Managed identity access operation failed. <br> Note: This is for the scenario when subscription is moved to different tenant. Customer has to manually move the identity to new tenant. PFA mail for more details. | Please try adding a different user assigned identity selected to your key vault to enable access to the customer managed key. For more information, see how to [Enable the key](#enable-key).|
-| SsemUserErrorKeyVaultBadRequestException | Applied a customer managed key but the key access has not been granted or has been revoked, or unable to access key vault due to firewall being enabled. | Add the identity selected to your key vault to enable access to the customer managed key. If key vault has firewall enabled, switch to a system assigned identity and then add a customer managed key. For more information, see how to [Enable the key](#enable-key). |
-| Generic error | Could not fetch the passkey.| This is a generic error. Contact Microsoft Support to troubleshoot the error and determine the next steps.|
+| SsemUserErrorEncryptionKeyDisabled| Could not fetch the passkey as the customer-managed key is disabled.| Yes, by enabling the key version.|
+| SsemUserErrorEncryptionKeyExpired| Could not fetch the passkey as the customer-managed key has expired.| Yes, by enabling the key version.|
+| SsemUserErrorKeyDetailsNotFound| Could not fetch the passkey as the customer-managed key could not be found.| If you deleted the key vault, you can't recover the customer-managed key. If you migrated the key vault to a different tenant, see [Change a key vault tenant ID after a subscription move](../key-vault/general/move-subscription.md). If you deleted the key vault:<ol><li>Yes, if it is in the purge-protection duration, using the steps at [Recover a key vault](../key-vault/general/key-vault-recovery.md?tabs=azure-powershell#key-vault-powershell).</li><li>No, if it is beyond the purge-protection duration.</li></ol><br>Else if the key vault underwent a tenant migration, yes, it can be recovered using one of the below steps: <ol><li>Revert the key vault back to the old tenant.</li><li>Set `Identity = None` and then set the value back to `Identity = SystemAssigned`. This deletes and recreates the identity once the new identity has been created. Enable `Get`, `WrapKey`, and `UnwrapKey` permissions to the new identity in the key vault's Access policy.</li></ol> |
+| SsemUserErrorKeyVaultBadRequestException | Applied a customer-managed key but the key access has not been granted or has been revoked, or unable to access key vault due to firewall being enabled. | Add the identity selected to your key vault to enable access to the customer-managed key. If key vault has firewall enabled, switch to a system assigned identity and then add a customer-managed key. For more information, see how to [Enable the key](#enable-key). |
+| SsemUserErrorKeyVaultDetailsNotFound| Could not fetch the passkey as the associated key vault for the customer-managed key could not be found. | If you deleted the key vault, you can't recover the customer-managed key. If you migrated the key vault to a different tenant, see [Change a key vault tenant ID after a subscription move](../key-vault/general/move-subscription.md). If you deleted the key vault:<ol><li>Yes, if it is in the purge-protection duration, using the steps at [Recover a key vault](../key-vault/general/key-vault-recovery.md?tabs=azure-powershell#key-vault-powershell).</li><li>No, if it is beyond the purge-protection duration.</li></ol><br>Else if the key vault underwent a tenant migration, yes, it can be recovered using one of the below steps: <ol><li>Revert the key vault back to the old tenant.</li><li>Set `Identity = None` and then set the value back to `Identity = SystemAssigned`. This deletes and recreates the identity once the new identity has been created. Enable `Get`, `WrapKey`, and `UnwrapKey` permissions to the new identity in the key vault's Access policy.</li></ol> |
+| SsemUserErrorSystemAssignedIdentityAbsent | Could not fetch the passkey as the customer-managed key could not be found.| Yes, check if: <ol><li>Key vault still has the MSI in the access policy.</li><li>Identity is of type System assigned.</li><li>Enable `Get`, `WrapKey`, and `UnwrapKey` permissions to the identity in the key vault's access policy. These permissions must remain for the lifetime of the order. They're used during order creation and at the beginning of the Data Copy phase.</li></ol>|
+| SsemUserErrorUserAssignedLimitReached | Adding new User Assigned Identity failed as you have reached the limit on the total number of user assigned identities that can be added. | Retry the operation with fewer user identities, or remove some user-assigned identities from the resource before retrying. |
+| SsemUserErrorCrossTenantIdentityAccessForbidden | Managed identity access operation failed. <br> Note: This error can occur when a subscription is moved to different tenant. The customer has to manually move the identity to the new tenant. PFA mail for more details. | Move the identity selected to the new tenant under which the subscription is present. For more information, see how to [Enable the key](#enable-key). |
+| SsemUserErrorKekUserIdentityNotFound | Applied a customer-managed key but the user assigned identity that has access to the key was not found in the active directory. <br> Note: This error can occur when a user identity is deleted from Azure.| Try adding a different user-assigned identity to your key vault to enable access to the customer-managed key. For more information, see how to [Enable the key](#enable-key). |
+| SsemUserErrorUserAssignedIdentityAbsent | Could not fetch the passkey as the customer-managed key could not be found. | Could not access the customer-managed key. Either the User Assigned Identity (UAI) associated with the key is deleted or the UAI type has changed. |
+| SsemUserErrorCrossTenantIdentityAccessForbidden | Managed identity access operation failed. <br> Note: This error can occur when a subscription is moved to different tenant. The customer has to manually move the identity to the new tenant. PFA mail for more details. | Try adding a different user-assigned identity to your key vault to enable access to the customer-managed key. For more information, see how to [Enable the key](#enable-key).|
+| SsemUserErrorKeyVaultBadRequestException | Applied a customer-managed key, but key access has not been granted or has been revoked, or the key vault couldn't be accessed because a firewall is enabled. | Add the identity selected to your key vault to enable access to the customer-managed key. If the key vault has a firewall enabled, switch to a system-assigned identity and then add a customer-managed key. For more information, see how to [Enable the key](#enable-key). |
+| Generic error | Could not fetch the passkey. | This error is a generic error. Contact Microsoft Support to troubleshoot the error and determine the next steps.|
+| SsemUserErrorEncryptionKeyTypeNotSupported | The encryption key type isn't supported for the operation. | Enable a supported encryption type on the key - for example, RSA or RSA-HSM. For more information, see [Key types, algorithms, and operations](/azure/key-vault/keys/about-keys-details). |
+| SsemUserErrorSoftDeleteAndPurgeProtectionNotEnabled | Key vault does not have soft delete or purge protection enabled. | Ensure that both soft delete and purge protection are enabled on the key vault. |
+| SsemUserErrorInvalidKeyVaultUrl<br>(Command-line only) | An invalid key vault URI was used. | Get the correct key vault URI. To get the key vault URI, use [Get-AzKeyVault](/powershell/module/az.keyvault/get-azkeyvault?view=azps-7.1.0) in PowerShell. |
+| SsemUserErrorKeyVaultUrlWithInvalidScheme | Only HTTPS is supported for passing the key vault URI. | Pass the key vault URI over HTTPS. |
+| SsemUserErrorKeyVaultUrlInvalidHost | The key vault URI host is not an allowed host in the geographical region. | In the public cloud, the key vault URI should end with `vault.azure.net`. In the Azure Government cloud, the key vault URI should end with `vault.usgovcloudapi.net`. |
+ ## Next steps
databox Data Box Deploy Ordered https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-deploy-ordered.md
Previously updated : 08/26/2021 Last updated : 01/10/2022 -+ #Customer intent: As an IT admin, I need to be able to order Data Box to upload on-premises data from my server onto Azure. # Tutorial: Order Azure Data Box
-Azure Data Box is a hybrid solution that allows you to import your on-premises data into Azure in a quick, easy, and reliable way. You transfer your data to a Microsoft-supplied 80-TB (usable capacity) storage device, and then ship the device back. This data is then uploaded to Azure.
+Azure Data Box is a hybrid solution that allows you to import your on-premises data into Azure in a quick, easy, and reliable way. You transfer your data to a Microsoft-supplied storage device with 80 TB of usable capacity, and then ship the device back. This data is then uploaded to Azure.
-This tutorial describes how you can order an Azure Data Box. In this tutorial, you learn about:
+This tutorial describes how you can order an Azure Data Box. In this tutorial, you learn about:
> [!div class="checklist"] >
You can sign in to Azure and run Azure CLI commands in one of two ways:
* You can install the CLI and run CLI commands locally. * You can run CLI commands from within the Azure portal, in Azure Cloud Shell.
-We use Azure CLI through Windows PowerShell for the tutorial, but you are free to choose either option.
+We use Azure CLI through Windows PowerShell for the tutorial, but you're free to choose either option.
### For Azure CLI
Before you begin, make sure that:
#### Install the CLI locally
-* Install [Azure CLI](/cli/azure/install-azure-cli) version 2.0.67 or later. Alternatively, you may [install using MSI](https://aka.ms/installazurecliwindows).
+* Install [Azure CLI](/cli/azure/install-azure-cli) version 2.0.67 or later. Or [install using MSI](https://aka.ms/installazurecliwindows) instead.
**Sign in to Azure**
You have logged in. Now let us find all the subscriptions to which you have acce
**Install the Azure Data Box CLI extension**
-Before you can use the Azure Data Box CLI commands, you need to install the extension. Azure CLI extensions give you access to experimental and pre-release commands that have not yet shipped as part of the core CLI. For more information about extensions, see [Use extensions with Azure CLI](/cli/azure/azure-cli-extensions-overview).
+Before you can use the Azure Data Box CLI commands, you need to install the extension. Azure CLI extensions give you access to experimental and pre-release commands that haven't yet shipped as part of the core CLI. For more information about extensions, see [Use extensions with Azure CLI](/cli/azure/azure-cli-extensions-overview).
To install the extension for Azure Data Box, run the following command: `az extension add --name databox`:
Before you begin, make sure that you:
**Install or upgrade Windows PowerShell**
-You will need to have Windows PowerShell version 6.2.4 or higher installed. To find out what version of PowerShell you have installed, run: `$PSVersionTable`.
+You'll need to have Windows PowerShell version 6.2.4 or higher installed. To find out what version of PowerShell is installed, run: `$PSVersionTable`.
-You will see the following output:
+You'll see the following output:
```azurepowershell PS C:\users\gusp> $PSVersionTable
If your version is lower than 6.2.4, you need to upgrade your version of Windows
**Install Azure PowerShell and Data Box modules**
-You will need to install the Azure PowerShell modules to use Azure PowerShell to order an Azure Data Box. To install the Azure PowerShell modules:
+You'll need to install the Azure PowerShell modules to use Azure PowerShell to order an Azure Data Box. To install the Azure PowerShell modules:
1. Install the [Azure PowerShell Az module](/powershell/azure/new-azureps-module-az). 2. Then install Az.DataBox using the command `Install-Module -Name Az.DataBox`.
For detailed information on how to sign in to Azure using Windows PowerShell, se
Do the following steps using Azure CLI to order a device:
-1. Write down your settings for your Data Box order. These settings include your personal/business information, subscription name, device information, and shipping information. You will need to use these settings as parameters when running the CLI command to create the Data Box order. The following table shows the parameter settings used for `az databox job create`:
+1. Write down your settings for your Data Box order. These settings include your personal/business information, subscription name, device information, and shipping information. You'll need to use these settings as parameters when running the CLI command to create the Data Box order. The following table shows the parameter settings used for `az databox job create`:
| Setting (parameter) | Description | Sample value | |||| |resource-group| Use an existing or create a new one. A resource group is a logical container for the resources that can be managed or deployed together. | "myresourcegroup"|
- |name| The name of the order you are creating. | "mydataboxorder"|
+ |name| The name of the order you're creating. | "mydataboxorder"|
|contact-name| The name associated with the shipping address. | "Gus Poland"| |phone| The phone number of the person or business that will receive the order.| "14255551234" |location| The nearest Azure region to you that will be shipping your device.| "US West"|
- |sku| The specific Data Box device you are ordering. Valid values are: "DataBox", "DataBoxDisk", and "DataBoxHeavy"| "DataBox" |
+ |sku| The specific Data Box device you're ordering. Valid values are: "DataBox", "DataBoxDisk", and "DataBoxHeavy"| "DataBox" |
|email-list| The email addresses associated with the order.| "gusp@contoso.com" | |street-address1| The street address to where the order will be shipped. | "15700 NE 39th St" | |street-address2| The secondary address information, such as apartment number or building number. | "Building 123" |
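
As a sketch, a create command using the sample values above might look like the following. This is a partial example: the remaining shipping-address and storage parameters from the full table are also required, and the location uses the region's short name.

```azurecli
az databox job create --resource-group "myresourcegroup" \
    --name "mydataboxorder" \
    --location "westus" \
    --sku "DataBox" \
    --contact-name "Gus Poland" \
    --phone "14255551234" \
    --email-list "gusp@contoso.com" \
    --street-address1 "15700 NE 39th St" \
    --street-address2 "Building 123"
```
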
Do the following steps using Azure PowerShell to order a device:
$storAcct = Get-AzStorageAccount -Name "mystorageaccount" -ResourceGroup "myresourcegroup" ```
-2. Write down your settings for your Data Box order. These settings include your personal/business information, subscription name, device information, and shipping information. You will need to use these settings as parameters when running the PowerShell command to create the Data Box order. The following table shows the parameter settings used for [New-AzDataBoxJob](/powershell/module/az.databox/New-AzDataBoxJob).
+2. Write down your settings for your Data Box order. These settings include your personal/business information, subscription name, device information, and shipping information. You'll need to use these settings as parameters when running the PowerShell command to create the Data Box order. The following table shows the parameter settings used for [New-AzDataBoxJob](/powershell/module/az.databox/New-AzDataBoxJob).
| Setting (parameter) | Description | Sample value | |||| |ResourceGroupName [Required]| Use an existing resource group. A resource group is a logical container for the resources that can be managed or deployed together. | "myresourcegroup"|
- |Name [Required]| The name of the order you are creating. | "mydataboxorder"|
+ |Name [Required]| The name of the order you're creating. | "mydataboxorder"|
|ContactName [Required]| The name associated with the shipping address. | "Gus Poland"| |PhoneNumber [Required]| The phone number of the person or business that will receive the order.| "14255551234" |Location [Required]| The nearest Azure region to you that will be shipping your device.| "WestUS"|
- |DataBoxType [Required]| The specific Data Box device you are ordering. Valid values are: "DataBox", "DataBoxDisk", and "DataBoxHeavy"| "DataBox" |
+ |DataBoxType [Required]| The specific Data Box device you're ordering. Valid values are: "DataBox", "DataBoxDisk", and "DataBoxHeavy"| "DataBox" |
|EmailId [Required]| The email addresses associated with the order.| "gusp@contoso.com" | |StreetAddress1 [Required]| The street address to where the order will be shipped. | "15700 NE 39th St" | |StreetAddress2| The secondary address information, such as apartment number or building number. | "Building 123" |
Do the following steps using Azure PowerShell to order a device:
# [Portal](#tab/portal)
-After you have placed the order, you can track the status of the order from Azure portal. Go to your Data Box order and then go to **Overview** to view the status. The portal shows the order in **Ordered** state.
+After you place the order, you can track the status of the order from Azure portal. Go to your Data Box order and then go to **Overview** to view the status. The portal shows the order in **Ordered** state.
If the device is not available, you receive a notification. If the device is available, Microsoft identifies the device for shipment and prepares the shipment. During device preparation, following actions occur:
To delete a canceled order, go to **Overview** and select **Delete** from the co
### Cancel an order
-To cancel an Azure Data Box order, run [`az databox job cancel`](/cli/azure/databox/job#az_databox_job_cancel). You are required to specify your reason for canceling the order.
+To cancel an Azure Data Box order, run [`az databox job cancel`](/cli/azure/databox/job#az_databox_job_cancel). You're required to specify your reason for canceling the order.
```azurecli az databox job cancel --resource-group <resource-group> --name <order-name> --reason <cancel-description>
To cancel an Azure Data Box order, run [`az databox job cancel`](/cli/azure/data
### Delete an order
-If you have canceled an Azure Data Box order, you can run [`az databox job delete`](/cli/azure/databox/job#az_databox_job_delete) to delete the order.
+After you cancel an Azure Data Box order, you can run [`az databox job delete`](/cli/azure/databox/job#az_databox_job_delete) to delete the order.
```azurecli az databox job delete --name [-n] <order-name> --resource-group <resource-group> [--yes] [--verbose]
Here is an example of the command with output:
### Cancel an order
-To cancel an Azure Data Box order, run [Stop-AzDataBoxJob](/powershell/module/az.databox/stop-azdataboxjob). You are required to specify your reason for canceling the order.
+To cancel an Azure Data Box order, run [Stop-AzDataBoxJob](/powershell/module/az.databox/stop-azdataboxjob). You're required to specify your reason for canceling the order.
```azurepowershell Stop-AzDataBoxJob -ResourceGroup <String> -Name <String> -Reason <String>
databox Data Box Heavy Deploy Ordered https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-heavy-deploy-ordered.md
Previously updated : 09/08/2021 Last updated : 01/04/2022 #Customer intent: As an IT admin, I need to be able to order Data Box Heavy to upload on-premises data from my server onto Azure.
Before you begin, make sure that:
- Your device is unpacked. - You should have a host computer connected to the datacenter network. Data Box Heavy will copy the data from this computer. Your host computer must run a supported operating system as described in [Azure Data Box Heavy system requirements](data-box-system-requirements.md). - You need to have a laptop with RJ-45 cable to connect to the local UI and configure the device. Use the laptop to configure each node of the device once.-- Your datacenter needs to have high-speed network. We strongly recommend that you have at least one 10 GbE connection.-- You need one 40 Gbps or 10 Gbps cable per device node. Choose cables that are compatible with the [Mellanox MCX314A-BCCT](https://store.mellanox.com/products/mellanox-mcx314a-bcct-connectx-3-pro-en-network-interface-card-40-56gbe-dual-port-qsfp-pcie3-0-x8-8gt-s-rohs-r6.html) network interface:
+- Your datacenter needs to have high-speed network. We strongly recommend that you have at least one 10-GbE connection.
+- You need one 40-Gbps or 10-Gbps cable per device node. Choose cables that are compatible with the [Mellanox MCX314A-BCCT](https://store.mellanox.com/products/mellanox-mcx314a-bcct-connectx-3-pro-en-network-interface-card-40-56gbe-dual-port-qsfp-pcie3-0-x8-8gt-s-rohs-r6.html) network interface:
- - For the 40 Gbps cable, device end of the cable needs to be QSFP+.
- - For the 10 Gbps cable, you need an SFP+ cable that plugs into a 10 G switch on one end, with a QSFP+ to SFP+ adapter (or the QSA adapter) for the end that plugs into the device.
+ - For the 40-Gbps cable, device end of the cable needs to be QSFP+.
+ - For the 10-Gbps cable, you need an SFP+ cable that plugs into a 10 G switch on one end, with a QSFP+ to SFP+ adapter (or the QSA adapter) for the end that plugs into the device.
- The power cables are included with the device. ## Order Data Box Heavy
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/alerts-reference.md
Title: Reference table for all security alerts in Microsoft Defender for Cloud description: This article lists the security alerts visible in Microsoft Defender for Cloud Previously updated : 12/13/2021 Last updated : 01/10/2022 # Security alerts - a reference guide
At the bottom of this page, there's a table describing the Microsoft Defender fo
|**Possible backdoor detected [seen multiple times]**|Analysis of host data has detected a suspicious file being downloaded then run on %{Compromised Host} in your subscription. This activity has previously been associated with installation of a backdoor. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium| |**Possible credential access tool detected [seen multiple times]**|Machine logs indicate a possible known credential access tool was running on %{Compromised Host} launched by process: '%{Suspicious Process}'. This tool is often associated with attacker attempts to access credentials. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium| |**Possible credential access tool detected**<br>(VM_KnownLinuxCredentialAccessTool)|Machine logs indicate a possible known credential access tool was running on %{Compromised Host} launched by process: '%{Suspicious Process}'. This tool is often associated with attacker attempts to access credentials.|Credential Access|Medium|
+|**Possible data exfiltration [seen multiple times]**|Analysis of host data on %{Compromised Host} detected a possible data egress condition. Attackers will often egress data from machines they have compromised. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
+|**Possible data exfiltration**<br>(VM_DataEgressArtifacts)|Analysis of host data on %{Compromised Host} detected a possible data egress condition. Attackers will often egress data from machines they have compromised.|Collection, Exfiltration|Medium|
|**Possible exploitation of Hadoop Yarn**<br>(VM_HadoopYarnExploit)|Analysis of host data on %{Compromised Host} detected the possible exploitation of the Hadoop Yarn service.|Exploitation|Medium| |**Possible exploitation of the mailserver detected**<br>(VM_MailserverExploitation )|Analysis of host data on %{Compromised Host} detected an unusual execution under the mail server account|Exploitation|Medium| |**Possible Log Tampering Activity Detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected possible removal of files that tracks user's activity during the course of its operation. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium| |**Possible Log Tampering Activity Detected**<br>(VM_SystemLogRemoval)|Analysis of host data on %{Compromised Host} detected possible removal of files that tracks user's activity during the course of its operation. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files.|Defense Evasion|Medium|
-|**Possible loss of data detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected a possible data egress condition. Attackers will often egress data from machines they have compromised. This behavior was seen [x]] times today on the following machines: [Machine names]|-|Medium|
-|**Possible loss of data detected**<br>(VM_DataEgressArtifacts)|Analysis of host data on %{Compromised Host} detected a possible data egress condition. Attackers will often egress data from machines they have compromised.|Collection, Exfiltration|Medium|
|**Possible malicious web shell detected [seen multiple times]**<br>(VM_Webshell)|Analysis of host data on %{Compromised Host} detected a possible web shell. Attackers will often upload a web shell to a machine they have compromised to gain persistence or for further exploitation. This behavior was seen [x] times today on the following machines: [Machine names]|Persistence, Exploitation|Medium| |**Possible malicious web shell detected**|Analysis of host data on %{Compromised Host} detected a possible web shell. Attackers will often upload a web shell to a machine they have compromised to gain persistence or for further exploitation.|-|Medium| |**Possible password change using crypt-method detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected password change using crypt method. Attackers can make this change to continue access and gaining persistence after compromise. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
Microsoft Defender for Containers provides security alerts on the cluster level
|Alert (alert type)|Description|MITRE tactics<br>([Learn more](#intentions))|Severity|
|-|-|:-:|--|
| **A file was downloaded and executed (Preview)**<br>(K8S.NODE_LinuxSuspiciousActivity) | Analysis of processes running within a container indicates that a file has been downloaded to the container, given execution privileges and then executed. | Execution | Medium |
-| **Account added to sudo group (Preview)**<br>(K8S.NODE_NewSudoerAccount) | Analysis of host data indicates that a user was added to the sudoers group, which enables its members to run commands with high privileges. | PrivilegeEscalation | Low |
| **A history file has been cleared (Preview)**<br>(K8S.NODE_HistoryFileCleared) | Analysis of processes running within a container indicates that the command history log file has been cleared. Attackers may do this to cover their tracks. The operation was performed by the specified user account. | DefenseEvasion | Medium | | **An uncommon connection attempt detected (Preview)**<br>(K8S.NODE_SuspectConnection) | Analysis of processes running within a container detected an uncommon connection attempt utilizing a socks protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium | | **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) | Kubernetes audit log analysis detected pod deployment which is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation are in relations to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace to which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alertΓÇÖs extended properties. | Execution | Medium | | **Attempt to stop apt-daily-upgrade.timer service detected (Preview)**<br>(K8S.NODE_TimerServiceDisabled) | Analysis of host/device data detected an attempt to stop apt-daily-upgrade.timer service. Attackers have been observed stopping this service to download malicious files and grant execution privileges for their attacks. This activity can also happen if the service is updated through normal administrative actions. | DefenseEvasion | Informational | | **Behavior similar to common Linux bots detected (Preview)**<br>(K8S.NODE_CommonBot) | Analysis of processes running within a container detected execution of a process normally associated with common Linux botnets. | Execution, Collection, Command And Control | Medium | | **Behavior similar to Fairware ransomware detected (Preview)**<br>(K8S.NODE_FairwareMalware) | Analysis of processes running within a container detected the execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it is normally used on discrete folders. In this case, it is being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Execution | Medium |
-| **Behavior similar to ransomware detected (Preview)**<br>(K8S.NODE_LinuxRansomwareArtifacts) | Analysis of processes running within a container detected the execution of files that have resemblance to known ransomware that can prevents users from accessing their system or personal files and demand a ransom payment to regain access. | Execution | High |
-| **Burst of log deletions may indicate actions of an attacker (Preview)**<br>(K8S.NODE_SystemLogRemovalBurst) | Analysis of host/device data detected a large number of system log files being removed. Attackers often perform this for defense evasion. | DefenseEvasion | Low |
| **Command within a container running with high privileges (Preview)**<br>(K8S.NODE_PrivilegedExecutionInContainer) | Machine logs indicate that a privileged command was run in a Docker container. A privileged command has extended privileges on the host machine. | PrivilegeEscalation | Low | | **Container running in privileged mode (Preview)**<br>(K8S.NODE_PrivilegedContainerArtifacts) | Machine logs indicate that a privileged Docker container is running. A privileged container has full access to the host's resources. If compromised, an attacker can use the privileged container to gain access to the host machine. | PrivilegeEscalation, Execution | Low |
-| **Container with a miner image detected (Preview)**<br>(K8S.NODE_MinerInContainerImage) | Machine logs indicate execution of a Docker container that ran an image associated with digital currency mining. | Execution | High |
| **Container with a sensitive volume mount detected**<br>(K8S_SensitiveMount) | Kubernetes audit log analysis detected a new container with a sensitive volume mount. The volume that was detected is a hostPath type which mounts a sensitive file or folder from the node to the container. If the container gets compromised, the attacker can use this mount for gaining access to the node. | Privilege Escalation | Medium | | **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the clusterΓÇÖs DNS server and poison it. | Lateral Movement | Low | | **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) | Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate, however attackers can use such webhooks for modifying the requests (in case of MutatingAdmissionWebhook) or inspecting the requests and gain sensitive information (in case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low |
Microsoft Defender for Containers provides security alerts on the cluster level
| **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) | Analysis of the Kubernetes audit logs detected an excessive permissions role assignment to your cluster. The listed permissions for the assigned roles are uncommon to the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Microsoft Defender for Cloud. | Privilege Escalation | Low | | **Executable found running from a suspicious location (Preview)**<br>(K8S.NODE_SuspectExecutablePath) | Analysis of host data detected an executable file that is running from a location associated with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised host. | Execution | Medium | | **Execution of hidden file (Preview)**<br>(K8S.NODE_ExecuteHiddenFile) | Analysis of host data indicates that a hidden file was executed by the specified user account. | Persistence, DefenseEvasion | Informational |
-| **Exploitation of Xorg vulnerability (Preview)**<br>(K8S.NODE_XorgExploit) | Analysis of processes running within a container detected the use of Xorg with suspicious arguments. Attackers may use this technique in privilege escalation attempts. | PrivilegeEscalation, Exploitation | Medium |
| **Exposed Docker daemon on TCP socket (Preview)**<br>(K8S.NODE_ExposedDocker) | Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, Docker configuration, does not use encryption or authentication when a TCP socket is enabled. This enables full access to the Docker daemon, by anyone with access to the relevant port. | Execution, Exploitation | Medium | | **Exposed Kubeflow dashboard detected**<br>(K8S_ExposedKubeflow) | The Kubernetes audit log analysis detected exposure of the Istio Ingress by a load balancer in a cluster that runs Kubeflow. This action might expose the Kubeflow dashboard to the internet. If the dashboard is exposed to the internet, attackers can access it and run malicious containers or code on the cluster. Find more details in the following article: https://aka.ms/exposedkubeflow-blog | Initial Access | Medium | | **Exposed Kubernetes dashboard detected**<br>(K8S_ExposedDashboard) | Kubernetes audit log analysis detected exposure of the Kubernetes Dashboard by a LoadBalancer service. Exposed dashboard allows an unauthenticated access to the cluster management and poses a security threat. | Initial Access | High |
Microsoft Defender for Containers provides security alerts on the cluster level
| **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) | Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low | | **Local host reconnaissance detected (Preview)**<br>(K8S.NODE_LinuxReconnaissance) | Analysis of processes running within a container detected the execution of a command normally associated with common Linux bot reconnaissance. | Discovery | Medium | | **Manipulation of host firewall detected (Preview)**<br>(K8S.NODE_FirewallDisabled) | Analysis of processes running within a container detected possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium |
-| **Manipulation of scheduled tasks detected (Preview)**<br>(K8S.NODE_CronJobAccess) | Analysis of host/device data detected possible manipulation of scheduled tasks. Attackers will often add scheduled tasks to machines they have compromised to gain persistence. | Persistence | Informational |
| **Microsoft Defender for Cloud test alert (not a threat). (Preview)**<br>(K8S.NODE_EICAR) | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed. | Execution | High | | **MITRE Caldera agent detected (Preview)**<br>(K8S.NODE_MitreCalderaTools) | Analysis of processes running within a container indicate that a suspicious process was running. This is often associated with the MITRE 54ndc47 agent which could be used maliciously to attack other machines. | Persistence, PrivilegeEscalation, DefenseEvasion, CredentialAccess, Discovery, LateralMovement, Execution, Collection, Exfiltration, Command And Control, Probing, Exploitation | Medium | | **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) | Kubernetes audit log analysis detected a new container in the kube-system namespace that isnΓÇÖt among the containers that normally run in this namespace. The kube-system namespaces should not contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Low | | **New high privileges role detected**<br>(K8S_HighPrivilegesRole) | Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user\group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low | | **Possible attack tool detected (Preview)**<br>(K8S.NODE_KnownLinuxAttackTool) | Analysis of processes running within a container indicates a suspicious tool ran. This tool is often associated with malicious users attacking others. | Execution, Collection, Command And Control, Probing | Medium |
-| **Possible attempt to disable auditd logging detected (Preview)**<br>(K8S.NODE_AuditDLoggingDisabled) | The Linux Audit system provides a way to track security-relevant information on the system. It records as much information about the events that are happening on your system as possible. Disabling auditd logging could hamper discovering violations of security policies used on the system. | DefenseEvasion | Low |
| **Possible backdoor detected (Preview)**<br>(K8S.NODE_LinuxBackdoorArtifact) | Analysis of processes running within a container detected a suspicious file being downloaded and run. This activity has previously been associated with installation of a backdoor. | Persistence, DefenseEvasion, Execution, Exploitation | Medium | | **Possible command line exploitation attempt (Preview)**<br>(K8S.NODE_ExploitAttempt) | Analysis of processes running within a container detected a possible exploitation attempt against a known vulnerability. | Exploitation | Medium | | **Possible credential access tool detected (Preview)**<br>(K8S.NODE_KnownLinuxCredentialAccessTool) | Analysis of processes running within a container indicates a possible known credential access tool was running on the container, as identified by the specified process and commandline history item. This tool is often associated with attacker attempts to access credentials. | CredentialAccess | Medium | | **Possible Cryptocoinminer download detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerDownload) | Analysis of processes running within a container detected the download of a file normally associated with digital currency mining. | DefenseEvasion, Command And Control, Exploitation | Medium | | **Possible data exfiltration detected (Preview)**<br>(K8S.NODE_DataEgressArtifacts) | Analysis of host/device data detected a possible data egress condition. Attackers will often egress data from machines they have compromised. | Collection, Exfiltration | Medium | | **Possible Log Tampering Activity Detected (Preview)**<br>(K8S.NODE_SystemLogRemoval) | Analysis of processes running within a container detected possible removal of files that tracks user's activity during the course of its operation. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files. | DefenseEvasion | Medium |
-| **Possible manipulation of PostgreSQL database (Preview)**<br>(K8S.NODE_SuspectPostGreAdministration) | Analysis of processes running within a container detected the possible manipulation of authorization rules associated with a PostgreSQL database. It is possible to insert new lines into the pg_hba.conf file that will change how users access the database. | PrivilegeEscalation, Execution | Low |
| **Possible password change using crypt-method detected (Preview)**<br>(K8S.NODE_SuspectPasswordChange) | Analysis of processes running within a container detected a password change using the crypt method. Attackers can make this change to continue access and gain persistence after compromise. | CredentialAccess | Medium |
-| **Potential crypto coin miner started (Preview)**<br>(K8S.NODE_CryptoCoinMinerExecution) | Analysis of processes running within a container detected a process being started in a way normally associated with digital currency mining. | Execution | Medium |
| **Potential overriding of common files (Preview)**<br>(K8S.NODE_OverridingCommonFiles) | Analysis of processes running within a container detected common files as a way to obfuscate their actions or for persistence. | Persistence | Medium | | **Potential port forwarding to external IP address (Preview)**<br>(K8S.NODE_SuspectPortForwarding) | Analysis of processes running within a container detected the initiation of port forwarding to an external IP address. | Exfiltration, Command And Control | Medium | | **Potential reverse shell detected (Preview)**<br>(K8S.NODE_ReverseShell) | Analysis of processes running within a container detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns. | Exfiltration, Exploitation | Medium | | **Privileged container detected**<br>(K8S_PrivilegedContainer) | Kubernetes audit log analysis detected a new privileged container. A privileged container has access to the nodeΓÇÖs resources and breaks the isolation between containers. If compromised, an attacker can use the privileged container to gain access to the node. | Privilege Escalation | Low | | **Process associated with digital currency mining detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerArtifacts) | Analysis of processes running within a container detected the execution of a process normally associated with digital currency mining. | Execution, Exploitation | Medium | | **Process seen accessing the SSH authorized keys file in an unusual way (Preview)**<br>(K8S.NODE_SshKeyAccess) | An SSH authorized_keys file was accessed in a method similar to known malware campaigns. This access could signify that an actor is attempting to gain persistent access to a machine. | Unknown | Low |
-| **Python encoded downloader detected (Preview)**<br>(K8S.NODE_RemoteEncodedPythonDownload) | Analysis of processes running within a container detected the execution of encoded Python that downloads and runs code from a remote location. This may be an indication of malicious activity. | Exploitation | Low |
| **Role binding to the cluster-admin role detected**<br>(K8S_ClusterAdminBinding) | Kubernetes audit log analysis detected a new binding to the cluster-admin role which gives administrator privileges. Unnecessary administrator privileges might cause privilege escalation in the cluster. | Persistence | Low | | **Screenshot taken on host (Preview)**<br>(K8S.NODE_KnownLinuxScreenshotTool) | Analysis of host/device data detected the use of a screen capture tool. Attackers may use these tools to access private data. | Collection | Low | | **Script extension mismatch detected (Preview)**<br>(K8S.NODE_MismatchedScriptFeatures) | Analysis of processes running within a container detected a mismatch between the script interpreter and the extension of the script file provided as input. This has frequently been associated with attacker script executions. | DefenseEvasion | Medium | | **Security-related process termination detected (Preview)**<br>(K8S.NODE_SuspectProcessTermination) | Analysis of processes running within a container detected attempt to terminate processes related to security monitoring on the container. Attackers will often try to terminate such processes using predefined scripts post-compromise. | Persistence | Low |
-| **Shellcode detected (Preview)**<br>(K8S.NODE_ShellcodeOnCommandLine) | Analysis of processes running within a container detected shellcode being generated from a command line. This command line could be legitimate activity, or an indication of a compromise. | Execution, Exploitation | Medium |
| **SSH server is running inside a container (Preview) (Preview)**<br>(K8S.NODE_ContainerSSH) | Analysis of processes running within a container detected an SSH server running inside the container. | Execution | Medium | | **Suspicious compilation detected (Preview)**<br>(K8S.NODE_SuspectCompilation) | Analysis of processes running within a container detected suspicious compilation. Attackers will often compile exploits to escalate privileges. | PrivilegeEscalation, Exploitation | Medium | | **Suspicious file timestamp modification (Preview)**<br>(K8S.NODE_TimestampTampering) | Analysis of host/device data detected a suspicious timestamp modification. Attackers will often copy timestamps from existing legitimate files to new tools to avoid detection of these newly dropped files. | Persistence, DefenseEvasion | Low |
-| **Suspicious kernel module detected (Preview)**<br>(K8S.NODE_SharedObjectLoadedAsKM) | Analysis of host/device data detected a shared object file being loaded as a kernel module. This could be legitimate activity, or an indication that one of your machines has been compromised. | Persistence, DefenseEvasion | Medium |
-| **Suspicious password access (Preview)**<br>(K8S.NODE_SuspectPasswordFileAccess) | Analysis of processes running within a container detected suspicious access to encrypted user passwords. | Persistence | Informational |
| **Suspicious request to Kubernetes API (Preview)**<br>(K8S.NODE_KubernetesAPI) | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes API. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | Execution | Medium | | **Suspicious request to the Kubernetes Dashboard (Preview)**<br>(K8S.NODE_KubernetesDashboard) | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes Dashboard. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | Execution | Medium |
-| **Suspicious shell script detected (Preview)**<br>(K8S.NODE_ScriptGeneratedFromCommandLine) | Analysis of processes running within a container detected a shell script being generated from the command line. This process could be legitimate activity, or an indication of a compromise. | Persistence | Medium |
-| **Suspicious use of DNS over HTTPS (Preview)**<br>(K8S.NODE_SuspiciousDNSOverHttps) | Analysis of processes running within a container indicates the use of a DNS call over HTTPS in an uncommon fashion. This technique is used by attackers to hide calls out to suspect or malicious sites. | DefenseEvasion, Exfiltration | Medium |
-| **Unusual access to bash history file (Preview)**<br>(K8S.NODE_ReadingHistoryFile) | Analysis of processes running within a container indicates an abnormal access to commands history file by the specified user account. | CredentialAccess | Informational |
-| **Unusual access to Bash profile file (Preview)**<br>(K8S.NODE_BashProfileAccess) | Analysis of host data indicates that a Bash Profile file was accessed by the specified user account. | Persistence | Low |
| | | | |
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/integration-defender-for-endpoint.md
Title: Using Microsoft Defender for Endpoint in Microsoft Defender for Cloud to protect native, on-premises, and AWS machines. description: Learn about deploying Microsoft Defender for Endpoint from Microsoft Defender for Cloud to protect Azure, hybrid, and multi-cloud machines. Previously updated : 12/06/2021 Last updated : 01/10/2022 # Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint [!INCLUDE [Banner for top of topics](./includes/banner.md)]
-Microsoft Defender for Endpoint is a holistic, cloud delivered endpoint security solution. Its main features are:
+Microsoft Defender for Endpoint is a holistic, cloud-delivered endpoint security solution. Its main features are:
- Risk-based vulnerability management and assessment
- Attack surface reduction
Confirm that your machine meets the necessary requirements for Defender for Endp
:::image type="content" source="./media/integration-defender-for-endpoint/enable-integration-with-edr.png" alt-text="Enable the integration between Microsoft Defender for Cloud and Microsoft's EDR solution, Microsoft Defender for Endpoint":::
- Microsoft Defender for Cloud will automatically onboard your machines to Microsoft Defender for Endpoint. Onboarding might take up to 24 hours.
+ Microsoft Defender for Cloud will automatically onboard your machines to Microsoft Defender for Endpoint. Onboarding might take up to 12 hours. For new machines created after the integration has been enabled, onboarding takes up to an hour.
-### [**Linux**)](#tab/linux)
+### [**Linux**](#tab/linux)
You'll deploy Defender for Endpoint to your Linux machines in one of two ways - depending on whether you've already deployed it to your Windows machines:
If you've already enabled the integration with **Defender for Endpoint for Windo
- Ignore any machines that are running other fanotify-based solutions (see details of the `fanotify` kernel option required in [Linux system requirements](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux#system-requirements))
- Detect any previous installations of Defender for Endpoint and reconfigure them to integrate with Defender for Cloud
- Onboarding might take up to 1 hour.
+ Microsoft Defender for Cloud will automatically onboard your machines to Microsoft Defender for Endpoint. Onboarding might take up to 12 hours. For new machines created after the integration has been enabled, onboarding takes up to an hour.
> [!NOTE]
> The next time you return to this page of the Azure portal, the **Enable for Linux machines** button won't be shown. To disable the integration for Linux, you'll need to disable it for Windows too by clearing the checkbox for **Allow Microsoft Defender for Endpoint to access my data**, and selecting **Save**.
If you've never enabled the integration for Windows, the **Allow Microsoft Defen
1. Open the [Defender for Endpoint Security Center portal](https://securitycenter.windows.com/). Learn more about the portal's features and icons in [Defender for Endpoint Security Center portal overview](/windows/security/threat-protection/microsoft-defender-atp/portal-overview).

----

## Send a test alert

To generate a benign test alert from Defender for Endpoint, select the tab for the relevant operating system of your endpoint:
For endpoints running Linux:
+## Remove Defender for Endpoint from a machine
+
+To remove the Defender for Endpoint solution from your machines:
+
+1. Disable the integration:
+ 1. From Defender for Cloud's menu, select **Environment settings** and select the subscription with the relevant machines.
+ 1. Open **Integrations** and clear the checkbox for **Allow Microsoft Defender for Endpoint to access my data**.
+ 1. Select **Save**.
+
+1. Remove the MDE.Windows/MDE.Linux extension from the machine (for an Azure VM, see the CLI sketch after these steps).
+
+1. Follow the steps in [Offboard devices from the Microsoft Defender for Endpoint service](/microsoft-365/security/defender-endpoint/offboard-machines?view=o365-worldwide) from the Defender for Endpoint documentation.
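For an Azure VM, one way to remove the extension is through the Azure CLI. The following is a hedged sketch with hypothetical resource names; use `MDE.Linux` as the extension name on Linux machines:

```azurecli
# Hypothetical resource group and VM names; the extension is named MDE.Windows or MDE.Linux.
az vm extension delete --resource-group myresourcegroup --vm-name myvm --name MDE.Windows
```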
+ ## FAQ - Microsoft Defender for Cloud integration with Microsoft Defender for Endpoint - [What's this "MDE.Windows" / "MDE.Linux" extension running on my machine?](#whats-this-mdewindows--mdelinux-extension-running-on-my-machine)
In the past, Microsoft Defender for Endpoint was provisioned by the Log Analytic
Defender for Cloud automatically deploys the extension to machines running:

-- Windows Server 2019.
+- Windows Server 2019 & 2022.
- Windows 10 Virtual Desktop (WVD).
- Other versions of Windows Server if Defender for Cloud doesn't recognize the OS version (for example, when a custom VM image is used). In this case, Microsoft Defender for Endpoint is still provisioned by the Log Analytics agent.
- Linux.

> [!IMPORTANT]
-> If you delete the MDE.Windows extension, it will not remove Microsoft Defender for Endpoint. to 'offboard', see [Offboard Windows servers.](/microsoft-365/security/defender-endpoint/configure-server-endpoints).
+> If you delete the MDE.Windows/MDE.Linux extension, it won't remove Microsoft Defender for Endpoint. To offboard, see [Offboard Windows servers](/microsoft-365/security/defender-endpoint/configure-server-endpoints).
++
+### I've enabled the solution but the "MDE.Windows" / "MDE.Linux" extension isn't showing on my machine
+
+If you've enabled the integration, but still don't see the extension running on your machines, check the following:
+
+1. If fewer than 12 hours have passed since you enabled the solution, wait until the end of this period to be sure there's an issue to investigate.
+1. After 12 hours have passed, if you still don't see the extension running on your machines, check that you've met the [prerequisites](#prerequisites) for the integration.
+1. Ensure you've enabled the [Microsoft Defender for servers](defender-for-servers-introduction.md) plan for the subscriptions related to the machines you're investigating (one way to check from the CLI is shown after these steps).
+1. If you've moved your Azure subscription between Azure tenants, some manual preparatory steps are required before Defender for Cloud will deploy Defender for Endpoint. For full details, [contact Microsoft support](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
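As one way to check the plan and extension state from the command line, here's a hedged Azure CLI sketch; the resource group and VM names are hypothetical:

```azurecli
# Confirm the Defender for servers plan is enabled on the current subscription (expects "Standard").
az security pricing show --name VirtualMachines --query pricingTier

# Check whether the MDE extension is present on a specific VM (hypothetical names).
az vm extension list --resource-group myresourcegroup --vm-name myvm --query "[?contains(name, 'MDE')].{name:name, state:provisioningState}" --output table
```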
### What are the licensing requirements for Microsoft Defender for Endpoint?

Defender for Endpoint is included at no extra cost with **Microsoft Defender for servers**. Alternatively, it can be purchased separately for 50 machines or more.
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/release-notes.md
Title: What's new in Microsoft Defender for IoT for device builders description: Learn about the latest updates for Defender for IoT device builders. Previously updated : 12/28/2021 Last updated : 01/10/2022 # What's new
This article lists new features and feature enhancements in Microsoft Defender f
Noted features are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-If you would like to be added to the Microsoft Defender for IoT device builders email distribution list, to get updates on new features, and release notes, send an email to: [defender_micro_agent@microsoft.com](mailto:defender_micro_agent@microsoft.com)
## Versioning and support

Listed below are the support and breaking-change policies for Defender for IoT, and the versions of Defender for IoT that are currently available.
+## November 2021
+
+**Version 3.13.1**:
+
+- DNS network activity on managed devices is now supported. Microsoft threat intelligence security graph can now detect suspicious activity based on DNS traffic.
+
+- [Leaf device proxying](../../iot-edge/how-to-connect-downstream-iot-edge-device.md#integrate-microsoft-defender-for-iot-with-iot-edge-gateway): There is now an enhanced integration with IoT Edge. This integration improves the connectivity between the agent and the cloud by using leaf device proxying.
+
+## October 2021
+
+**Version 3.12.2**:
+
+- More CIS benchmark checks are now supported for Debian 9. These extra checks help you make sure your network is compliant with the CIS best practices used to protect against pervasive cyber threats.
+
+- **[Twin configuration](concept-micro-agent-configuration.md)**: The micro agent's behavior is configured by a set of module twin properties. You can configure the micro agent to best suit your needs.
## September 2021

**Version 3.11**:
digital-twins How To Authenticate Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-authenticate-client.md
# Mandatory fields. Title: Write app authentication code
-description: See how to write authentication code in a client application
+description: Learn how to write authentication code in a client application
Previously updated : 9/1/2021 Last updated : 1/3/2022
This article describes how to obtain credentials using the `Azure.Identity` clie
## Prerequisites
-First, complete the setup steps in [Set up an instance and authentication](how-to-set-up-instance-portal.md). This setup will ensure that you have an Azure Digital Twins instance and that your user has access permissions. After that setup, you're ready to write client app code.
+First, complete the setup steps in [Set up an instance and authentication](how-to-set-up-instance-portal.md). This setup will ensure you have an Azure Digital Twins instance and your user has access permissions. After that setup, you're ready to write client app code.
To continue, you'll need a client app project in which you write your code. If you don't already have a client app project set up, create a basic project in your language of choice to use with this tutorial.
The rest of this article shows how to use these methods with the [.NET (C#) SDK]
To set up your .NET project to authenticate with `Azure.Identity`, complete the following steps:
-1. Include the SDK package `Azure.DigitalTwins.Core` and the `Azure.Identity` package in your project. Depending on your tools of choice, you can include the packages using the Visual Studio package manager or the `dotnet` command line tool.
+1. Include the SDK package `Azure.DigitalTwins.Core` and the `Azure.Identity` package in your project. Depending on your tools of choice, you can include the packages using the Visual Studio package manager or the `dotnet` command-line tool.
1. Add the following using statements to your project code:
digital-twins How To Create App Registration Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-app-registration-cli.md
# Mandatory fields. Title: Create an app registration (CLI)
-description: See how to create an Azure AD app registration, as an authentication option for client apps, using the CLI.
+description: Learn how to create an Azure AD app registration, as an authentication option for client apps, using the CLI.
Previously updated : 9/8/2021 Last updated : 1/5/2022
[!INCLUDE [digital-twins-create-app-registration-selector.md](../../includes/digital-twins-create-app-registration-selector.md)]
+This article describes how to use the Azure CLI to create an app registration for use with Azure Digital Twins. It includes instructions for creating a manifest file containing service information, creating the app registration, verifying success, collecting important values, and other possible steps that your organization may require.
+ When working with an Azure Digital Twins instance, it's common to interact with that instance through client applications, such as a custom client app or a sample like [Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md). Those applications need to authenticate with Azure Digital Twins to interact with it, and some of the [authentication mechanisms](how-to-authenticate-client.md) that apps can use involve an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) **app registration**. The app registration isn't required for all authentication scenarios. However, if you're using an authentication strategy or code sample that does require an app registration, this article shows you how to set one up using the [Azure CLI](/cli/azure/what-is-azure-cli). It also covers how to [collect important values](#collect-important-values) that you'll need to use the app registration to authenticate.
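For orientation, the central creation step looks roughly like the sketch below. The display name is a hypothetical placeholder, and `manifest.json` stands for the requirement manifest file whose creation the full article walks through:

```azurecli
# Hypothetical display name; manifest.json is the requirement manifest created earlier in the article.
az ad app create --display-name my-azure-digital-twins-app --required-resource-accesses "@manifest.json"
```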
For more information about app registration and its different setup options, see
In this article, you set up an Azure AD app registration that can be used to authenticate client applications with the Azure Digital Twins APIs.
-Next, read about authentication mechanisms, including one that uses app registrations and others that do not:
+Next, read about authentication mechanisms, including one that uses app registrations and others that don't:
* [Write app authentication code](how-to-authenticate-client.md)
digital-twins How To Create App Registration Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-app-registration-portal.md
# Mandatory fields. Title: Create an app registration (portal)
-description: See how to create an Azure AD app registration, as an authentication option for client apps, using the Azure portal.
+description: Learn how to create an Azure AD app registration, as an authentication option for client apps, using the Azure portal.
Previously updated : 9/8/2021 Last updated : 1/5/2022
[!INCLUDE [digital-twins-create-app-registration-selector.md](../../includes/digital-twins-create-app-registration-selector.md)]
+This article describes how to use the Azure portal to create an app registration for use with Azure Digital Twins. It includes instructions for creating the app registration, collecting important values, providing Azure Digital Twins API permission, verifying success, and other possible steps that your organization may require.
+ When working with an Azure Digital Twins instance, it's common to interact with that instance through client applications, such as the custom client app built in [Code a client app](tutorial-code.md). Those applications need to authenticate with Azure Digital Twins to interact with it, and some of the [authentication mechanisms](how-to-authenticate-client.md) that apps can use involve an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) **app registration**. The app registration isn't required for all authentication scenarios. However, if you're using an authentication strategy or code sample that does require an app registration, this article shows you how to set one up using the [Azure portal](https://portal.azure.com). It also covers how to [collect important values](#collect-important-values) that you'll need to use the app registration to authenticate.
digital-twins How To Enable Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-enable-private-link.md
# Mandatory fields. Title: Enable private access with Private Link (preview)
-description: See how to enable private access for Azure Digital Twins solutions with Private Link.
+description: Learn how to enable private access for Azure Digital Twins solutions with Private Link.
Previously updated : 7/12/2021 Last updated : 1/3/2022
# Enable private access with Private Link (preview)
-This article describes the different ways to [enable Private Link with a private endpoint for an Azure Digital Twins instance](concepts-security.md#private-network-access-with-azure-private-link-preview) (currently in preview). Configuring a private endpoint for your Azure Digital Twins instance enables you to secure your Azure Digital Twins instance and eliminate public exposure, as well as avoid data exfiltration from your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md).
+This article describes the different ways to [enable Private Link with a private endpoint for an Azure Digital Twins instance](concepts-security.md#private-network-access-with-azure-private-link-preview) (currently in preview). Configuring a private endpoint for your Azure Digital Twins instance enables you to secure your Azure Digital Twins instance and eliminate public exposure. Additionally, it helps avoid data exfiltration from your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md).
Here are the steps that are covered in this article:

1. Turn on Private Link and configure a private endpoint for an Azure Digital Twins instance.
Before you can set up a private endpoint, you'll need an [Azure Virtual Network
You can use either the [Azure portal](https://portal.azure.com) or the [Azure CLI](/cli/azure/what-is-azure-cli) to turn on Private Link with a private endpoint for an Azure Digital Twins instance.
-If you want to set up Private Link as part of the instance's initial setup, you'll need to use the Azure portal. If you want to enable Private Link on an instance after it's been created, you can use either the Azure portal or the Azure CLI. Any of these creation methods will give the same configuration options and the same end result for your instance.
+If you want to set up Private Link as part of the instance's initial setup, you'll need to use the Azure portal. Otherwise, if you want to enable Private Link on an instance after it's been created, you can use either the Azure portal or the Azure CLI. Any of these creation methods will give the same configuration options and the same end result for your instance.
Use the tabs in the sections below to select instructions for your preferred experience.

>[!TIP]
> You can also set up a Private Link endpoint through the Private Link service, instead of through your Azure Digital Twins instance. This also gives the same configuration options and the same end result.
>
-> For more details about setting up Private Link resources, see Private Link documentation for the [Azure portal](../private-link/create-private-endpoint-portal.md), [Azure CLI](../private-link/create-private-endpoint-cli.md), [Azure Resource Manager](../private-link/create-private-endpoint-template.md), or [PowerShell](../private-link/create-private-endpoint-powershell.md).
+> For more information on setting up Private Link resources, see Private Link documentation for the [Azure portal](../private-link/create-private-endpoint-portal.md), [Azure CLI](../private-link/create-private-endpoint-cli.md), [Azure Resource Manager](../private-link/create-private-endpoint-template.md), or [PowerShell](../private-link/create-private-endpoint-powershell.md).
### Add a private endpoint during instance creation
The Private Link options are located in the **Networking** tab of instance setup
1. Begin setting up an Azure Digital Twins instance in the Azure portal. For instructions, see [Set up an instance and authentication](how-to-set-up-instance-portal.md).

1. When you reach the **Networking** tab of instance setup, you can enable private endpoints by selecting the **Private endpoint** option for the **Connectivity method**.
- This will add a section called **Private endpoint connections** where you can configure the details of your private endpoint. Select the **+ Add** button to continue.
+ Doing so will add a section called **Private endpoint connections** where you can configure the details of your private endpoint. Select the **+ Add** button to continue.
:::image type="content" source="media/how-to-enable-private-link/create-instance-networking-1.png" alt-text="Screenshot of the Azure portal showing the Networking tab of a new Azure Digital Twins instance, highlighting how to create a private endpoint. The 'Add' button is highlighted." lightbox="media/how-to-enable-private-link/create-instance-networking-1.png":::
The Private Link options are located in the **Networking** tab of instance setup
1. After filling out the configuration options, select **OK** to finish.
-1. This will return you to the **Networking** tab of the Azure Digital Twins instance setup. Verify that your new endpoint is visible under **Private endpoint connections**.
+1. Once you finish this process, the portal will return you to the **Networking** tab of the Azure Digital Twins instance setup. Verify that your new endpoint is visible under **Private endpoint connections**.
:::image type="content" source="media/how-to-enable-private-link/create-instance-networking-2.png" alt-text="Screenshot of the Azure portal showing the Networking tab of an Azure Digital Twins with a newly created private endpoint." lightbox="media/how-to-enable-private-link/create-instance-networking-2.png":::
The Private Link options are located in the **Networking** tab of instance setup
# [CLI](#tab/cli)
-You cannot add a Private Link endpoint during instance creation using the Azure CLI.
+You can't add a Private Link endpoint during instance creation using the Azure CLI.
You can switch to the Azure portal to add the endpoint during instance creation, or continue to the next section to use the CLI to add a private endpoint after the instance has been created.
In this section, you'll enable Private Link with a private endpoint for an Azure
:::image type="content" source="media/how-to-enable-private-link/add-endpoint-digital-twins.png" alt-text="Screenshot of the Azure portal showing the Networking page for an existing Azure Digital Twins instance, highlighting how to create private endpoints." lightbox="media/how-to-enable-private-link/add-endpoint-digital-twins.png":::
-1. In the **Basics** tab, enter or select the **Subscription** and **Resource group** of your project, and a **Name** and **Region** for your endpoint. The region needs to be the same as the region for the VNet you're using.
+1. In the **Basics** tab, enter or select the **Subscription** and **Resource group** of your project, and a **Name** and **Region** for your endpoint. The region needs to be the same as the region for the VNet you're using.
:::image type="content" source="media/how-to-enable-private-link/create-private-endpoint-1.png" alt-text="Screenshot of the Azure portal showing the first (Basics) tab of the Create a private endpoint dialog. It contains the fields described above.":::

When you're finished, select the **Next : Resource >** button to go to the next tab.
-1. In the **Resource** tab, enter or select this information:
+1. In the **Resource** tab, enter or select this information:
* **Connection method**: Select **Connect to an Azure resource in my directory** to search for your Azure Digital Twins instance.
* **Subscription**: Enter your subscription.
* **Resource type**: Select **Microsoft.DigitalTwins/digitalTwinsInstances**
In this section, you'll enable Private Link with a private endpoint for an Azure
When you're finished, select the **Next : Configuration >** button to go to the next tab.
-1. In theΓÇ»**Configuration** tab, enter or select this information:
+1. In the **Configuration** tab, enter or select this information:
* **Virtual network**: Select your virtual network.
* **Subnet**: Choose a subnet from your virtual network.
* **Integrate with private DNS zone**: Select whether to **Integrate with private DNS zone**. You can use the default of **Yes** or, for help with this option, you can follow the link in the portal to [learn more about private DNS integration](../private-link/private-endpoint-overview.md#dns-configuration).
In this section, you'll enable Private Link with a private endpoint for an Azure
When you're finished, you can select the **Review + create** button to finish setup.
-1. In the **Review + create** tab, review your selections and select theΓÇ»**Create** button.
+1. In the **Review + create** tab, review your selections and select the **Create** button.
When the endpoint is finished deploying, it should show up in the private endpoint connections for your Azure Digital Twins instance.
When the endpoint is finished deploying, it should show up in the private endpoi
To create a private endpoint and link it to an Azure Digital Twins instance using the Azure CLI, use the [az network private-endpoint create](/cli/azure/network/private-endpoint#az_network_private_endpoint_create) command. Identify the Azure Digital Twins instance by using its fully qualified ID in the `--private-connection-resource-id` parameter.
-Here is an example that uses the command to create a private endpoint, with only the required parameters.
+Here's an example that uses the command to create a private endpoint, with only the required parameters.
```azurecli-interactive
az network private-endpoint create --connection-name <private-link-service-connection> --name <name-for-private-endpoint> --resource-group <resource-group> --subnet <subnet-ID> --private-connection-resource-id "/subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.DigitalTwins/digitalTwinsInstances/<Azure-Digital-Twins-instance-name>"
```
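For illustration only, a filled-in call might look like the following; every name and ID is a hypothetical placeholder:

```azurecli
# Every name and ID below is a hypothetical placeholder.
az network private-endpoint create \
    --connection-name my-private-link-connection \
    --name my-private-endpoint \
    --resource-group my-resource-group \
    --subnet "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/default" \
    --private-connection-resource-id "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.DigitalTwins/digitalTwinsInstances/my-digital-twins-instance"
```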
Once a private endpoint has been created for your Azure Digital Twins instance,
For more information and examples, see the [az dt network private-endpoint reference documentation](/cli/azure/dt/network/private-endpoint).
-### Get additional Private Link information
+### Get more Private Link information
-You can get additional information about the Private Link status of your instance with the [az dt network private-link](/cli/azure/dt/network/private-link) commands. Operations include:
+You can get more information about the Private Link status of your instance with the [az dt network private-link](/cli/azure/dt/network/private-link) commands. Operations include:
* List private links associated with an Azure Digital Twins instance
* Show a private link associated with the instance
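For example, a sketch with hypothetical names (the `az dt` commands come from the azure-iot CLI extension):

```azurecli
# List the private links for an instance (hypothetical instance and resource group names).
az dt network private-link list --dt-name my-digital-twins-instance --resource-group my-resource-group

# Show one private link by name; for Azure Digital Twins the link name is typically "API".
az dt network private-link show --dt-name my-digital-twins-instance --resource-group my-resource-group --link-name API
```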
For more information and examples, see the [az dt network private-link reference
You can configure your Azure Digital Twins instance to deny all public connections and allow only connections through private endpoints to enhance the network security. This action is done with a **public network access flag**.
-This policy allows you to restrict API access to Private Link connections only. When the public network access flag is set to *disabled*, all REST API calls to the Azure Digital Twins instance data plane from the public cloud will return `403, Unauthorized`. Alternatively, when the policy is set to *disabled* and a request is made through a private endpoint, the API call will succeed.
+This policy allows you to restrict API access to Private Link connections only. When the public network access flag is set to *disabled*, all REST API calls to the Azure Digital Twins instance data plane from the public cloud will return `403, Unauthorized`. However, when the policy is set to *disabled* and a request is made through a private endpoint, the API call will succeed.
-You can update the value of the network flag using the [Azure portal](https://portal.azure.com), [Azure CLI](/cli/azure/) or [ARMClient command tool](https://github.com/projectkudu/ARMClient).
+You can update the value of the network flag using the [Azure portal](https://portal.azure.com), [Azure CLI](/cli/azure/), or [ARMClient command tool](https://github.com/projectkudu/ARMClient).
# [Portal](#tab/portal-2)
az dt create --dt-name <name-of-existing-instance> --resource-group <resource-gr
With the [ARMClient command tool](https://github.com/projectkudu/ARMClient), public network access is enabled or disabled using the commands below. To **disable** public network access:
-ΓÇ»
+
```cmd/sh
-armclient loginΓÇ»
+armclient login
-armclient PATCH /subscriptions/<your-Azure-subscription-ID>/resourceGroups/<your-resource-group>/providers/Microsoft.DigitalTwins/digitalTwinsInstances/<your-Azure-Digital-Twins-instance>?api-version=2020-12-01 "{ 'properties': { 'publicNetworkAccess': 'disabled' } }"ΓÇ»
+armclient PATCH /subscriptions/<your-Azure-subscription-ID>/resourceGroups/<your-resource-group>/providers/Microsoft.DigitalTwins/digitalTwinsInstances/<your-Azure-Digital-Twins-instance>?api-version=2020-12-01 "{ 'properties': { 'publicNetworkAccess': 'disabled' } }"
```
-To **enable** public network access:ΓÇ»
-ΓÇ»
+To **enable** public network access:
+
```cmd/sh
-armclient loginΓÇ»
+armclient login
-armclient PATCH /subscriptions/<your-Azure-subscription-ID>/resourceGroups/<your-resource-group>/providers/Microsoft.DigitalTwins/digitalTwinsInstances/<your-Azure-Digital-Twins-instance>?api-version=2020-12-01 "{ 'properties': { 'publicNetworkAccess': 'enabled' } }" 
+armclient PATCH /subscriptions/<your-Azure-subscription-ID>/resourceGroups/<your-resource-group>/providers/Microsoft.DigitalTwins/digitalTwinsInstances/<your-Azure-Digital-Twins-instance>?api-version=2020-12-01 "{ 'properties': { 'publicNetworkAccess': 'enabled' } }"
```
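If you prefer the Azure CLI to ARMClient, the same flag can be set by re-running `az dt create` against the existing instance, as in this hedged sketch with hypothetical names:

```azurecli
# Hypothetical instance and resource group names; use "Enabled" to re-enable public access.
az dt create --dt-name my-digital-twins-instance --resource-group my-resource-group --public-network-access Disabled
```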
digital-twins How To Integrate Azure Signalr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-azure-signalr.md
# Mandatory fields. Title: Integrate with Azure SignalR Service
-description: See how to stream Azure Digital Twins telemetry to clients using Azure SignalR
+description: Learn how to stream Azure Digital Twins telemetry to clients using Azure SignalR
Previously updated : 02/12/2021 Last updated : 1/3/2022
Here are the prerequisites you should complete before proceeding:
* Before integrating your solution with Azure SignalR Service in this article, you should complete the Azure Digital Twins [Connect an end-to-end solution](tutorial-end-to-end.md), because this how-to article builds on top of it. The tutorial walks you through setting up an Azure Digital Twins instance that works with a virtual IoT device to trigger digital twin updates. This how-to article will connect those updates to a sample web app using Azure SignalR Service. * You'll need the following values from the tutorial:
- - Event grid topic
+ - Event Grid topic
- Resource group - App service/function app name
Be sure to sign in to the [Azure portal](https://portal.azure.com/) with your Az
You'll be attaching Azure SignalR Service to Azure Digital Twins through the path below. Sections A, B, and C in the diagram are taken from the architecture diagram of the [end-to-end tutorial prerequisite](tutorial-end-to-end.md). In this how-to article, you'll build section D on the existing architecture, which includes two new Azure functions that communicate with SignalR and client apps. ## Download the sample applications
Next, configure the function to communicate with your Azure SignalR instance. Yo
## Connect the function to Event Grid
-Next, subscribe the *broadcast* Azure function to the **event grid topic** you created during the [tutorial prerequisite](how-to-integrate-azure-signalr.md#prerequisites). This action will allow telemetry data to flow from the thermostat67 twin through the event grid topic and to the function. From here, the function can broadcast the data to all the clients.
+Next, subscribe the *broadcast* Azure function to the **Event Grid topic** you created during the [tutorial prerequisite](how-to-integrate-azure-signalr.md#prerequisites). This action will allow telemetry data to flow from the thermostat67 twin through the Event Grid topic and to the function. From here, the function can broadcast the data to all the clients.
-To broadcast the data, you'll create an **Event subscription** from your event grid topic to your *broadcast* Azure function as an endpoint.
+To broadcast the data, you'll create an **Event subscription** from your Event Grid topic to your *broadcast* Azure function as an endpoint.
-In the [Azure portal](https://portal.azure.com/), navigate to your event grid topic by searching for its name in the top search bar. Select *+ Event Subscription*.
+In the [Azure portal](https://portal.azure.com/), navigate to your Event Grid topic by searching for its name in the top search bar. Select *+ Event Subscription*.
:::image type="content" source="media/how-to-integrate-azure-signalr/event-subscription-1b.png" alt-text="Screenshot of how to create an event subscription in the Azure portal.":::
On the *Create Event Subscription* page, fill in the fields as follows (fields f
* *EVENT SUBSCRIPTION DETAILS* > **Name**: Give a name to your event subscription. * *ENDPOINT DETAILS* > **Endpoint Type**: Select *Azure Function* from the menu options. * *ENDPOINT DETAILS* > **Endpoint**: Select the *Select an endpoint* link, which will open a *Select Azure Function* window:
- - Fill in your **Subscription**, **Resource group**, **Function app**, and **Function** (*broadcast*). Some of these fields may auto-populate after selecting the subscription.
+ - Fill in your **Subscription**, **Resource group**, **Function app**, and **Function** (*broadcast*). Some of these fields may autopopulate after selecting the subscription.
- Select **Confirm Selection**. :::image type="content" source="media/how-to-integrate-azure-signalr/create-event-subscription.png" alt-text="Screenshot of the form for creating an event subscription in the Azure portal.":::
Back on the *Create Event Subscription* page, select **Create**.
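If you'd rather script this step than use the portal, an equivalent event subscription could be created with the Azure CLI along these lines. This is a hedged sketch: the resource IDs are placeholders, and the function name *broadcast* matches the function used in this article.

```azurecli-interactive
# Subscribe the broadcast function to the Event Grid topic (sketch; IDs are placeholders)
az eventgrid event-subscription create \
    --name <event-subscription-name> \
    --source-resource-id /subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/topics/<topic-name> \
    --endpoint-type azurefunction \
    --endpoint /subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<function-app-name>/functions/broadcast
```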
At this point, you should see two event subscriptions in the *Event Grid Topic* page. ## Configure and run the web app
Running this command will open a browser window running the sample app, which di
If you no longer need the resources created in this article, follow these steps to delete them.
-Using the Azure Cloud Shell or local Azure CLI, you can delete all Azure resources in a resource group with the [az group delete](/cli/azure/group#az_group_delete) command. Removing the resource group will also remove...
-* the Azure Digital Twins instance (from the end-to-end tutorial)
-* the IoT hub and the hub device registration (from the end-to-end tutorial)
-* the event grid topic and associated subscriptions
-* the Azure Functions app, including all three functions and associated resources like storage
-* the Azure SignalR instance
+Using the Azure Cloud Shell or local Azure CLI, you can delete all Azure resources in a resource group with the [az group delete](/cli/azure/group#az_group_delete) command. Removing the resource group will also remove:
+* The Azure Digital Twins instance (from the end-to-end tutorial)
+* The IoT hub and the hub device registration (from the end-to-end tutorial)
+* The Event Grid topic and associated subscriptions
+* The Azure Functions app, including all three functions and associated resources like storage
+* The Azure SignalR instance
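As a minimal sketch, the cleanup command itself looks like the following; the resource group name is a placeholder, and `--yes` skips the confirmation prompt.

```azurecli-interactive
# Delete the resource group and all resources it contains (irreversible)
az group delete --name <your-resource-group> --yes
```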
> [!IMPORTANT] > Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources.
digital-twins How To Integrate Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-logic-apps.md
# Mandatory fields. Title: Integrate with Logic Apps
-description: See how to connect Logic Apps to Azure Digital Twins, using a custom connector
+description: Learn how to connect Logic Apps to Azure Digital Twins, using a custom connector
Previously updated : 9/1/2021 Last updated : 1/3/2022
# Integrate with Logic Apps using a custom connector
+In this article, you'll use the [Azure portal](https://portal.azure.com) to **create a custom connector** that can be used to connect Logic Apps to an Azure Digital Twins instance. You'll then **create a logic app** that uses this connection for an example scenario, in which events triggered by a timer will automatically update a twin in your Azure Digital Twins instance.
+ [Azure Logic Apps](../logic-apps/logic-apps-overview.md) is a cloud service that helps you automate workflows across apps and services. By connecting Logic Apps to the Azure Digital Twins APIs, you can create such automated flows around Azure Digital Twins and their data. Azure Digital Twins doesn't currently have a certified (pre-built) connector for Logic Apps. Instead, the current process for using Logic Apps with Azure Digital Twins is to create a [custom Logic Apps connector](../logic-apps/custom-connector-overview.md), using a [custom Azure Digital Twins Swagger](/samples/azure-samples/digital-twins-custom-swaggers/azure-digital-twins-custom-swaggers/) definition file that has been modified to work with Logic Apps.
Azure Digital Twins doesn't currently have a certified (pre-built) connector for
> [!NOTE] > There are multiple versions of the Swagger definition file contained in the custom Swagger sample linked above. The latest version will be found in the subfolder with the most recent date, but earlier versions contained in the sample are also still supported.
-In this article, you'll use the [Azure portal](https://portal.azure.com) to **create a custom connector** that can be used to connect Logic Apps to an Azure Digital Twins instance. You'll then **create a logic app** that uses this connection for an example scenario, in which events triggered by a timer will automatically update a twin in your Azure Digital Twins instance.
- ## Prerequisites If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
You'll be taken to the deployment page for the connector. When it's finished dep
Next, you'll configure the connector you've created to reach Azure Digital Twins.
-First, download a custom Azure Digital Twins Swagger that has been modified to work with Logic Apps. Navigate to the sample at [Azure Digital Twins custom Swaggers (Logic Apps connector) sample](/samples/azure-samples/digital-twins-custom-swaggers/azure-digital-twins-custom-swaggers/) and select the **Browse code** button underneath the title to go to the GitHub repo for the sample. Get the sample on your machine by by selecting the selecting the **Code** button followed by **Download ZIP**.
+First, download a custom Azure Digital Twins Swagger that has been modified to work with Logic Apps. Navigate to the sample at [Azure Digital Twins custom Swaggers (Logic Apps connector) sample](/samples/azure-samples/digital-twins-custom-swaggers/azure-digital-twins-custom-swaggers/) and select the **Browse code** button underneath the title to go to the GitHub repo for the sample. Get the sample on your machine by selecting the **Code** button followed by **Download ZIP**.
:::image type="content" source="media/how-to-integrate-logic-apps/download-repo-zip.png" alt-text="Screenshot of the digital-twins-custom-swaggers repo on GitHub, highlighting the steps to download it as a zip." lightbox="media/how-to-integrate-logic-apps/download-repo-zip.png":::
digital-twins How To Integrate Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-maps.md
# Mandatory fields. Title: Integrate with Azure Maps
-description: See how to use Azure Functions to create a function that can use the twin graph and Azure Digital Twins notifications to update an Azure Maps indoor map.
+description: Learn how to use Azure Functions to create a function that can use the twin graph and Azure Digital Twins notifications to update an Azure Maps indoor map.
Previously updated : 8/27/2021 Last updated : 1/4/2022
This guide will cover:
### Prerequisites * Follow the Azure Digital Twins tutorial [Connect an end-to-end solution](./tutorial-end-to-end.md).
- * You'll be extending this twin with an additional endpoint and route. You'll also be adding another function to your function app from that tutorial.
+ * You'll be extending this twin with another endpoint and route. You'll also be adding another function to your function app from that tutorial.
* Follow the Azure Maps tutorial [Use Azure Maps Creator to create indoor maps](../azure-maps/tutorial-creator-indoor-maps.md) to create an Azure Maps indoor map with a *feature stateset*. * [Feature statesets](../azure-maps/creator-indoor-maps.md#feature-statesets) are collections of dynamic properties (states) assigned to dataset features such as rooms or equipment. In the Azure Maps tutorial above, the feature stateset stores room status that you'll be displaying on a map. * You'll need your feature *stateset ID* and Azure Maps *subscription key*.
The image below illustrates where the indoor maps integration elements in this t
## Create a function to update a map when twins update
-First, you'll create a route in Azure Digital Twins to forward all twin update events to an event grid topic. Then, you'll use a function to read those update messages and update a feature stateset in Azure Maps.
+First, you'll create a route in Azure Digital Twins to forward all twin update events to an Event Grid topic. Then, you'll use a function to read those update messages and update a feature stateset in Azure Maps.
## Create a route and filter to twin update notifications
Azure Digital Twins instances can emit twin update events whenever a twin's stat
This pattern reads from the room twin directly, rather than the IoT device, which gives you the flexibility to change the underlying data source for temperature without needing to update your mapping logic. For example, you can add multiple thermometers or set this room to share a thermometer with another room, all without needing to update your map logic.
-1. Create an event grid topic, which will receive events from your Azure Digital Twins instance.
+1. Create an Event Grid topic, which will receive events from your Azure Digital Twins instance.
```azurecli-interactive az eventgrid topic create --resource-group <your-resource-group-name> --name <your-topic-name> --location <region> ```
-2. Create an endpoint to link your event grid topic to Azure Digital Twins.
+2. Create an endpoint to link your Event Grid topic to Azure Digital Twins.
```azurecli-interactive az dt endpoint create eventgrid --endpoint-name <Event-Grid-endpoint-name> --eventgrid-resource-group <Event-Grid-resource-group-name> --eventgrid-topic <your-Event-Grid-topic-name> --dt-name <your-Azure-Digital-Twins-instance-name> ```
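To complete the flow described above, a route that forwards twin update events to this endpoint could be created along these lines. This is only a sketch: the route name is a placeholder, and the filter shown is an assumption based on the twin update event type.

```azurecli-interactive
# Forward twin update events to the Event Grid endpoint (sketch; filter assumed)
az dt route create --dt-name <your-Azure-Digital-Twins-instance-name> --endpoint-name <Event-Grid-endpoint-name> --route-name <route-name> --filter "type = 'Microsoft.DigitalTwins.Twin.Update'"
```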
digital-twins How To Integrate Time Series Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-time-series-insights.md
# Mandatory fields. Title: Integrate with Azure Time Series Insights
-description: See how to set up event routes from Azure Digital Twins to Azure Time Series Insights.
+description: Learn how to set up event routes from Azure Digital Twins to Azure Time Series Insights.
Previously updated : 4/7/2021 Last updated : 1/4/2022
You will be attaching Time Series Insights to Azure Digital Twins through the pa
## Create event hub namespace
-Before creating the event hubs, you'll first create an event hub namespace that will receive events from your Azure Digital Twins instance. You can either use the Azure CLI instructions below, or use the Azure portal by following [Create an event hub using Azure portal](../event-hubs/event-hubs-create.md). To see what regions support event hubs, visit [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=event-hubs).
+Before creating the event hubs, you'll first create an event hub namespace that will receive events from your Azure Digital Twins instance. You can either use the Azure CLI instructions below, or use the Azure portal by following [Create an event hub using Azure portal](../event-hubs/event-hubs-create.md). To see what regions support Event Hubs, visit [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=event-hubs).
```azurecli-interactive az eventhubs namespace create --name <name-for-your-Event-Hubs-namespace> --resource-group <your-resource-group> --location <region>
az eventhubs namespace create --name <name-for-your-Event-Hubs-namespace> --reso
> [!TIP] > If you get an error stating `BadRequest: The specified service namespace is invalid.`, make sure the name you've chosen for your namespace meets the naming requirements described in this reference document: [Create Namespace](/rest/api/servicebus/create-namespace).
-You'll be using this event hubs namespace to hold the two event hubs that are needed for this article:
+You'll be using this Event Hubs namespace to hold the two event hubs that are needed for this article:
1. **Twins hub** - Event hub to receive twin change events 2. **Time series hub** - Event hub to stream events to Time Series Insights
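Both hubs live inside the namespace created above. As a rough sketch (hub names are placeholders you'd choose yourself), they could be created like this:

```azurecli-interactive
# Create the two event hubs inside the namespace (sketch; names are placeholders)
az eventhubs eventhub create --name <twins-hub-name> --namespace-name <name-for-your-Event-Hubs-namespace> --resource-group <your-resource-group>
az eventhubs eventhub create --name <time-series-hub-name> --namespace-name <name-for-your-Event-Hubs-namespace> --resource-group <your-resource-group>
```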
Take note of the **primaryConnectionString** value from the result to configure
## Create time series hub
-The second event hub you'll create in this article is the **time series hub**. This is an event hub that will stream the Azure Digital Twins events to Time Series Insights.
+The second event hub you'll create in this article is the **time series hub**. This event hub is the one that will stream the Azure Digital Twins events to Time Series Insights.
To set up the time series hub, you'll complete these steps: 1. Create the time series hub
Also, take note of the following values to use them later to create a Time Serie
In this section, you'll create an Azure function that will convert twin update events from their original form as JSON Patch documents to JSON objects, containing only updated and added values from your twins.
-1. First, create a new function app project in Visual Studio. For instructions on how to do this, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#create-an-azure-functions-project).
+1. First, create a new function app project in Visual Studio. For instructions on how to do so, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#create-an-azure-functions-project).
2. Create a new Azure function called *ProcessDTUpdatetoTSI.cs* to update device telemetry events to the Time Series Insights. The function type will be **Event Hub trigger**.
In this section, you'll create an Azure function that will convert twin update e
Save your function code.
-5. Publish the project with the *ProcessDTUpdatetoTSI.cs* function to a function app in Azure. For instructions on how to do this, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
+5. Publish the project with the *ProcessDTUpdatetoTSI.cs* function to a function app in Azure. For instructions on how to do so, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
Save the function app name to use later to configure app settings for the two event hubs.
az functionapp config appsettings set --settings "EventHubAppSetting-TSI=<your-t
## Create and connect a Time Series Insights instance
-In this section, you'll set up Time Series Insights instance to receive data from your time series hub. For more details about this process, see [Set up an Azure Time Series Insights Gen2 PAYG environment](../time-series-insights/tutorial-set-up-environment.md). Follow the steps below to create a time series insights environment.
+In this section, you'll set up a Time Series Insights instance to receive data from your time series hub. For more information about this process, see [Set up an Azure Time Series Insights Gen2 PAYG environment](../time-series-insights/tutorial-set-up-environment.md). Follow the steps below to create a Time Series Insights environment.
1. In the [Azure portal](https://portal.azure.com), search for *Time Series Insights environments*, and select the **Create** button. Choose the following options to create the time series environment.
Now, data should be flowing into your Time Series Insights instance, ready to be
:::image type="content" source="media/how-to-integrate-time-series-insights/view-environment.png" alt-text="Screenshot of the Azure portal showing the Time Series Insights explorer URL in the overview tab of the Time Series Insights environment." lightbox="media/how-to-integrate-time-series-insights/view-environment.png":::
-2. In the explorer, you will see the twins in the Azure Digital Twins instance shown on the left. Select the twin you've edited properties for, choose the property you've changed, and select **Add**.
+2. In the explorer, you'll see the twins in the Azure Digital Twins instance shown on the left. Select the twin you've edited properties for, choose the property you've changed, and select **Add**.
:::image type="content" source="media/how-to-integrate-time-series-insights/add-data.png" alt-text="Screenshot of the Time Series Insights explorer with the steps to select thermostat67, select the property temperature, and select add highlighted." lightbox="media/how-to-integrate-time-series-insights/add-data.png":::
digital-twins How To Manage Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-graph.md
# Mandatory fields. Title: Manage the twin graph and relationships
-description: See how to manage a graph of digital twins by connecting them with relationships.
+description: Learn how to manage a graph of digital twins by connecting them with relationships.
Previously updated : 9/13/2021 Last updated : 1/5/2022
# Manage a graph of digital twins using relationships
-The heart of Azure Digital Twins is the [twin graph](concepts-twins-graph.md) representing your whole environment. The twin graph is made of individual digital twins connected via **relationships**.
+The heart of Azure Digital Twins is the [twin graph](concepts-twins-graph.md) representing your whole environment. The twin graph is made of individual digital twins connected via **relationships**. This article focuses on managing relationships and the graph as a whole; to work with individual digital twins, see [Manage digital twins](how-to-manage-twin.md).
Once you have a working [Azure Digital Twins instance](how-to-set-up-instance-portal.md) and have set up [authentication](how-to-authenticate-client.md) code in your client app, you can create, modify, and delete digital twins and their relationships in an Azure Digital Twins instance.
-This article focuses on managing relationships and the graph as a whole; to work with individual digital twins, see [Manage digital twins](how-to-manage-twin.md).
- ## Prerequisites [!INCLUDE [digital-twins-prereq-instance.md](../../includes/digital-twins-prereq-instance.md)]
This article focuses on managing relationships and the graph as a whole; to work
Relationships describe how different digital twins are connected to each other, which forms the basis of the twin graph.
-The types of relationships that can be created from one (source) twin to another (target) twin are defined as part of the source twin's [DTDL model](concepts-models.md#relationships). You can create an instance of a relationship by using the `CreateOrReplaceRelationshipAsync()` SDK call with twins and relationship details that comply with the DTDL definition.
+The types of relationships that can be created from one (source) twin to another (target) twin are defined as part of the source twin's [DTDL model](concepts-models.md#relationships). You can create an instance of a relationship by using the `CreateOrReplaceRelationshipAsync()` SDK call with twins and relationship details that follow the DTDL definition.
To create a relationship, you need to specify: * The source twin ID (`srcId` in the code sample below): The ID of the twin where the relationship originates.
Relationships can be classified as either:
There's no restriction on the number of relationships that you can have between two twins; you can have as many relationships between twins as you like.
-This means that you can express several different types of relationships between two twins at once. For example, Twin A can have both a *stored* relationship and *manufactured* relationship with Twin B.
+This fact means that you can express several different types of relationships between two twins at once. For example, Twin A can have both a *stored* relationship and *manufactured* relationship with Twin B.
You can even create multiple instances of the same type of relationship between the same two twins, if you want. In this example, Twin A could have two different *stored* relationships with Twin B, as long as the relationships have different relationship IDs.
You can now call this custom method to delete a relationship like this:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="UseDeleteRelationship":::
-## Create graph from a CSV file
+## Create graph from a CSV file
In practical use cases, twin hierarchies will often be created from data stored in a different database, or perhaps in a spreadsheet or a CSV file. This section illustrates how to read data from a CSV file and create a twin graph out of it.
-Consider the following data table, describing a set of digital twins and relationships.
+Consider the following data table, describing a set of digital twins and relationships.
-|  Model ID | Twin ID (must be unique) | Relationship name  | Target twin ID  | Twin init data |
+| Model ID | Twin ID (must be unique) | Relationship name | Target twin ID | Twin init data |
| | | | | |
| dtmi:example:Floor;1 | Floor1 | contains | Room1 | |
| dtmi:example:Floor;1 | Floor0 | contains | Room0 | |
-| dtmi:example:Room;1 | Room1 | | | {"Temperature": 80} |
-| dtmi:example:Room;1 | Room0 | | | {"Temperature": 70} |
+| dtmi:example:Room;1 | Room1 | | | {"Temperature": 80} |
+| dtmi:example:Room;1 | Room0 | | | {"Temperature": 70} |
One way to get this data into Azure Digital Twins is to convert the table to a CSV file. Once the table is converted, code can be written to interpret the file into commands to create twins and relationships. The following code sample illustrates reading the data from the CSV file and creating a twin graph in Azure Digital Twins.
digital-twins How To Manage Routes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes.md
# Mandatory fields. Title: Manage endpoints and routes
-description: See how to set up and manage endpoints and event routes for Azure Digital Twins data
+description: Learn how to set up and manage endpoints and event routes for Azure Digital Twins data
Previously updated : 7/30/2021 Last updated : 1/5/2022
# Manage endpoints and routes in Azure Digital Twins
-In Azure Digital Twins, you can route [event notifications](concepts-event-notifications.md) to downstream services or connected compute resources. This is done by first setting up **endpoints** that can receive the events. You can then create [event routes](concepts-route-events.md) that specify which events generated by Azure Digital Twins are delivered to which endpoints.
- This article walks you through the process of creating endpoints and routes using the [Azure portal](https://portal.azure.com), the [REST APIs](/rest/api/azure-digitaltwins/), the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true), and the [Azure Digital Twins CLI](/cli/azure/dt).
+In Azure Digital Twins, you can route [event notifications](concepts-event-notifications.md) to downstream services or connected compute resources. This process is done by first setting up **endpoints** that can receive the events. You can then create [event routes](concepts-route-events.md) that specify which events generated by Azure Digital Twins are delivered to which endpoints.
+ ## Prerequisites * You'll need an **Azure account**, which [can be set up for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
Follow the instructions below if you intend to use the Azure CLI while following
## Create an endpoint for Azure Digital Twins
-These are the supported types of endpoints that you can create for your instance:
+These services are the supported types of endpoints that you can create for your instance:
* [Event Grid](../event-grid/overview.md) topic * [Event Hubs](../event-hubs/event-hubs-about.md) hub * [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) topic
This section explains how to create an endpoint using the [Azure portal](https:/
### Prerequisite: Create endpoint resources
-To link an endpoint to Azure Digital Twins, the event grid topic, event hub, or Service Bus topic that you're using for the endpoint needs to exist already.
+To link an endpoint to Azure Digital Twins, the Event Grid topic, event hub, or Service Bus topic that you're using for the endpoint needs to exist already.
Use the following chart to see what resources should be set up before creating your endpoint.

| Endpoint type | Required resources (linked to creation instructions) |
| | |
-| Event Grid endpoint | [event grid topic](../event-grid/custom-event-quickstart-portal.md#create-a-custom-topic)<br/>*event schema must be Event Grid Schema or Cloud Event Schema v1.0 |
+| Event Grid endpoint | [Event Grid topic](../event-grid/custom-event-quickstart-portal.md#create-a-custom-topic)<br/>*event schema must be Event Grid Schema or Cloud Event Schema v1.0 |
| Event Hubs endpoint | [Event&nbsp;Hubs&nbsp;namespace](../event-hubs/event-hubs-create.md)<br/><br/>[event hub](../event-hubs/event-hubs-create.md)<br/><br/>(Optional) [authorization rule](../event-hubs/authorize-access-shared-access-signature.md) for key-based authentication |
| Service Bus endpoint | [Service Bus namespace](../service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal.md)<br/><br/>[Service Bus topic](../service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal.md)<br/><br/> (Optional) [authorization rule](../service-bus-messaging/service-bus-authentication-and-authorization.md#shared-access-signature) for key-based authentication|

### Create the endpoint
-Once you have created the endpoint resources, you can use them for an Azure Digital Twins endpoint.
+Once you've created the endpoint resources, you can use them for an Azure Digital Twins endpoint.
# [Portal](#tab/portal) To create a new endpoint, go to your instance's page in the [Azure portal](https://portal.azure.com) (you can find the instance by entering its name into the portal search bar).
-1. From the instance menu, select _Endpoints_. Then from the *Endpoints* page that follows, select *+ Create an endpoint*. This will open the *Create an endpoint* page, where you'll fill in the fields in the following steps.
+1. From the instance menu, select _Endpoints_. Then from the *Endpoints* page that follows, select *+ Create an endpoint*. Doing so will open the *Create an endpoint* page, where you'll fill in the fields in the following steps.
:::image type="content" source="media/how-to-manage-routes/create-endpoint-event-grid.png" alt-text="Screenshot of creating an endpoint of type Event Grid in the Azure portal." lightbox="media/how-to-manage-routes/create-endpoint-event-grid.png"::: 1. Enter a **Name** for your endpoint and choose the **Endpoint type**. 1. Complete the other details that are required for your endpoint type, including your subscription and the endpoint resources described [above](#prerequisite-create-endpoint-resources).
- 1. For Event Hub and Service Bus endpoints only, you must select an **Authentication type**. You can use key-based authentication with a pre-created authorization rule, or identity-based authentication if you'll be using the endpoint with a [managed identity](concepts-security.md#managed-identity-for-accessing-other-resources) for your Azure Digital Twins instance.
+ 1. For Event Hubs and Service Bus endpoints only, you must select an **Authentication type**. You can use key-based authentication with a pre-created authorization rule, or identity-based authentication if you'll be using the endpoint with a [managed identity](concepts-security.md#managed-identity-for-accessing-other-resources) for your Azure Digital Twins instance.
:::row::: :::column:::
- :::image type="content" source="media/how-to-manage-routes/create-endpoint-event-hub-authentication.png" alt-text="Screenshot of creating an endpoint of type Event Hub in the Azure portal." lightbox="media/how-to-manage-routes/create-endpoint-event-hub-authentication.png":::
+ :::image type="content" source="media/how-to-manage-routes/create-endpoint-event-hub-authentication.png" alt-text="Screenshot of creating an endpoint of type Event Hubs in the Azure portal." lightbox="media/how-to-manage-routes/create-endpoint-event-hub-authentication.png":::
:::column-end::: :::column::: :::column-end:::
If the endpoint creation fails, observe the error message and retry after a few
You can also view the endpoint that was created back on the *Endpoints* page for your Azure Digital Twins instance.
-Now the event grid, event hub, or Service Bus topic is available as an endpoint inside of Azure Digital Twins, under the name you chose for the endpoint. You'll typically use that name as the target of an **event route**, which you'll create [later in this article](#create-an-event-route).
+Now the Event Grid topic, event hub, or Service Bus topic is available as an endpoint in Azure Digital Twins, under the name you chose for the endpoint. You'll typically use that name as the target of an **event route**, which you'll create [later in this article](#create-an-event-route).
# [CLI](#tab/cli)
To create a Service Bus topic endpoint (key-based authentication):
az dt endpoint create servicebus --endpoint-name <Service-Bus-endpoint-name> --servicebus-resource-group <Service-Bus-resource-group-name> --servicebus-namespace <Service-Bus-namespace> --servicebus-topic <Service-Bus-topic-name> --servicebus-policy <Service-Bus-topic-policy> --dt-name <your-Azure-Digital-Twins-instance-name> ```
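For comparison, the Event Grid variant follows the same parameter pattern; this is a sketch with placeholder names.

```azurecli-interactive
# Create an Event Grid endpoint (sketch; placeholder names)
az dt endpoint create eventgrid --endpoint-name <Event-Grid-endpoint-name> --eventgrid-resource-group <Event-Grid-resource-group-name> --eventgrid-topic <your-Event-Grid-topic-name> --dt-name <your-Azure-Digital-Twins-instance-name>
```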
-After successfully running these commands, the event grid, event hub, or Service Bus topic will be available as an endpoint inside of Azure Digital Twins, under the name you supplied with the `--endpoint-name` argument. You'll typically use that name as the target of an **event route**, which you'll create [later in this article](#create-an-event-route).
+After successfully running these commands, the Event Grid topic, event hub, or Service Bus topic will be available as an endpoint in Azure Digital Twins, under the name you supplied with the `--endpoint-name` argument. You'll typically use that name as the target of an **event route**, which you'll create [later in this article](#create-an-event-route).
#### Create an endpoint with identity-based authentication You can also create an endpoint that has identity-based authentication, to use the endpoint with a [managed identity](concepts-security.md#managed-identity-for-accessing-other-resources). This option is only available for Event Hubs and Service Bus-type endpoints (it's not supported for Event Grid). The CLI command to create this type of endpoint is below. You'll need the following values to plug into the placeholders in the command:
-* the Azure resource ID of your Azure Digital Twins instance
-* an endpoint name
-* an endpoint type
-* the endpoint resource's namespace
-* the name of the event hub or Service Bus topic
-* the location of your Azure Digital Twins instance
+* The Azure resource ID of your Azure Digital Twins instance
+* An endpoint name
+* An endpoint type
+* The endpoint resource's namespace
+* The name of the event hub or Service Bus topic
+* The location of your Azure Digital Twins instance
```azurecli-interactive az resource create --id <Azure-Digital-Twins-instance-Azure-resource-ID>/endpoints/<endpoint-name> --properties '{\"properties\": { \"endpointType\": \"<endpoint-type>\", \"authenticationType\": \"IdentityBased\", \"endpointUri\": \"sb://<endpoint-namespace>.servicebus.windows.net\", \"entityPath\": \"<name-of-event-hub-or-Service-Bus-topic>\"}, \"location\":\"<instance-location>\" }' --is-full-object
Next, create a **SAS token** for your storage account that the endpoint can use
:::image type="content" source="./media/how-to-manage-routes/generate-sas-token-2.png" alt-text="Screenshot of the storage account page in the Azure portal showing all the setting selection to generate a SAS token." lightbox="./media/how-to-manage-routes/generate-sas-token-2.png":::
-1. This will generate several SAS and connection string values at the bottom of the same page, underneath the setting selections. Scroll down to view the values, and use the *Copy to clipboard* icon to copy the **SAS token** value. Save it to use later.
+1. Doing so will generate several SAS and connection string values at the bottom of the same page, underneath the setting selections. Scroll down to view the values, and use the *Copy to clipboard* icon to copy the **SAS token** value. Save it to use later.
:::image type="content" source="./media/how-to-manage-routes/copy-sas-token.png" alt-text="Screenshot of the storage account page in the Azure portal highlighting how to copy the SAS token to use in the dead-letter secret." lightbox="./media/how-to-manage-routes/copy-sas-token.png":::
Next, create a **SAS token** for your storage account that the endpoint can use
# [Portal](#tab/portal)
-In order to create an endpoint with dead-lettering enabled, you must use the [CLI commands](/cli/azure/dt) or [control plane APIs](/rest/api/digital-twins/controlplane/endpoints/digitaltwinsendpoint_createorupdate) to create your endpoint, rather than the Azure portal.
+To create an endpoint with dead-lettering enabled, you must use the [CLI commands](/cli/azure/dt) or [control plane APIs](/rest/api/digital-twins/controlplane/endpoints/digitaltwinsendpoint_createorupdate) to create your endpoint, rather than the Azure portal.
-For instructions on how to do this with the Azure CLI, switch to the CLI tab for this section.
+For instructions on how to create this type of endpoint with the Azure CLI, switch to the CLI tab for this section.
# [CLI](#tab/cli)
The value for the parameter is the **dead letter SAS URI** made up of the storag
--deadletter-sas-uri https://<storage-account-name>.blob.core.windows.net/<container-name>?<SAS-token> ```
-Add this parameter to the end of the endpoint creation commands from the [Create the endpoint](#create-the-endpoint) section earlier to create an endpoint of your desired type that has dead-lettering enabled.
+Add this parameter to the end of the endpoint creation commands from the [Create the endpoint](#create-the-endpoint) section earlier to create an endpoint of your chosen type that has dead-lettering enabled.
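Put together, a dead-lettering endpoint of the Event Grid type might look like the following sketch; every value is a placeholder, and the SAS token is the one generated in the previous section.

```azurecli-interactive
# Event Grid endpoint with dead-lettering enabled (sketch; placeholders throughout)
az dt endpoint create eventgrid --endpoint-name <Event-Grid-endpoint-name> --eventgrid-resource-group <Event-Grid-resource-group-name> --eventgrid-topic <your-Event-Grid-topic-name> --dt-name <your-Azure-Digital-Twins-instance-name> --deadletter-sas-uri "https://<storage-account-name>.blob.core.windows.net/<container-name>?<SAS-token>"
```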
-Alternatively, you can create dead letter endpoints using the [Azure Digital Twins control plane APIs](concepts-apis-sdks.md#overview-control-plane-apis) instead of the CLI. To do this, view the [DigitalTwinsEndpoint documentation](/rest/api/digital-twins/controlplane/endpoints/digitaltwinsendpoint_createorupdate) to see how to structure the request and add the dead letter parameters.
+You can also create dead letter endpoints using the [Azure Digital Twins control plane APIs](concepts-apis-sdks.md#overview-control-plane-apis) instead of the CLI. To do so, view the [DigitalTwinsEndpoint documentation](/rest/api/digital-twins/controlplane/endpoints/digitaltwinsendpoint_createorupdate) to see how to structure the request and add the dead letter parameters.
#### Create a dead-letter endpoint with identity-based authentication
You can also create a dead-lettering endpoint that has identity-based authentica
To create this type of endpoint, use the same CLI command from earlier to [create an endpoint with identity-based authentication](#create-an-endpoint-with-identity-based-authentication), with an extra field in the JSON payload for a `deadLetterUri`. Here are the values you'll need to plug into the placeholders in the command:
-* the Azure resource ID of your Azure Digital Twins instance
-* an endpoint name
-* an endpoint type
-* the endpoint resource's namespace
-* the name of the event hub or Service Bus topic
-* **dead letter SAS URI** details: storage account name, container name
-* the location of your Azure Digital Twins instance
+* The Azure resource ID of your Azure Digital Twins instance
+* An endpoint name
+* An endpoint type
+* The endpoint resource's namespace
+* The name of the event hub or Service Bus topic
+* **Dead letter SAS URI** details: storage account name, container name
+* The location of your Azure Digital Twins instance
```azurecli-interactive az resource create --id <Azure-Digital-Twins-instance-Azure-resource-ID>/endpoints/<endpoint-name> --properties '{\"properties\": { \"endpointType\": \"<endpoint-type>\", \"authenticationType\": \"IdentityBased\", \"endpointUri\": \"sb://<endpoint-namespace>.servicebus.windows.net\", \"entityPath\": \"<name-of-event-hub-or-Service-Bus-topic>\", \"deadLetterUri\": \"https://<storage-account-name>.blob.core.windows.net/<container-name>\"}, \"location\":\"<instance-location>\" }' --is-full-object
Once the endpoint with dead-lettering is set up, dead-lettered messages will be
Dead-lettered messages will match the schema of the original event that was intended to be delivered to your original endpoint.
-Here is an example of a dead-letter message for a [twin create notification](concepts-event-notifications.md#digital-twin-lifecycle-notifications):
+Here's an example of a dead-letter message for a [twin create notification](concepts-event-notifications.md#digital-twin-lifecycle-notifications):
```json {
Here is an example of a dead-letter message for a [twin create notification](con
To actually send data from Azure Digital Twins to an endpoint, you'll need to define an **event route**. These routes let developers wire up event flow, throughout the system and to downstream services. A single route can allow multiple notifications and event types to be selected. Read more about event routes in [Endpoints and event routes](concepts-route-events.md).
-**Prerequisite**: You need to create endpoints as described earlier in this article before you can move on to creating a route. You can proceed to creating an event route once your endpoints are finished setting up.
+**Prerequisite**: Create endpoints as described earlier in this article before you move on to creating a route. You can continue to create an event route once your endpoints are finished setting up.
>[!NOTE] >If you have recently deployed your endpoints, validate that they're finished deploying **before** attempting to use them for a new event route. If route deployment fails because the endpoints aren't ready, wait a few minutes and try again.
A route definition can contain these elements:
- To enable a route that has no specific filtering, use a filter value of `true` - For details on any other type of filter, see the [Filter events](#filter-events) section below
-If there is no route name, no messages are routed outside of Azure Digital Twins.
-If there is a route name and the filter is `true`, all messages are routed to the endpoint.
-If there is a route name and a different filter is added, messages will be filtered based on the filter.
+If there's no route name, no messages are routed outside of Azure Digital Twins.
+If there's a route name and the filter is `true`, all messages are routed to the endpoint.
+If there's a route name and a different filter is added, messages will be filtered based on the filter.
Event routes can be created with the [Azure portal](https://portal.azure.com), [EventRoutes data plane APIs](/rest/api/digital-twins/dataplane/eventroutes), or [az dt route CLI commands](/cli/azure/dt/route). The rest of this section walks through the creation process.
On the *Create an event route* page that opens up, choose at minimum:
* A name for your route in the _Name_ field * The _Endpoint_ you want to use to create the route
-For the route to be enabled, you must also **Add an event route filter** of at least `true`. (Leaving the default value of `false` will create the route, but no events will be sent to it.) To do this, toggle the switch for the _Advanced editor_ to enable it, and write `true` in the *Filter* box.
+For the route to be enabled, you must also **Add an event route filter** of at least `true`. (Leaving the default value of `false` will create the route, but no events will be sent to it.) To do so, toggle the switch for the _Advanced editor_ to enable it, and write `true` in the *Filter* box.
:::image type="content" source="media/how-to-manage-routes/create-event-route-no-filter.png" alt-text="Screenshot of creating an event route for your instance in the Azure portal." lightbox="media/how-to-manage-routes/create-event-route-no-filter.png":::
For more information about using the CLI and what commands are available, see [A
This section shows how to create an event route using the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
-`CreateOrReplaceEventRouteAsync` is the SDK call that is used to add an event route. Here is an example of its usage:
+`CreateOrReplaceEventRouteAsync` is the SDK call that is used to add an event route. Here's an example of its usage:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/eventRoute_operations.cs" id="CreateEventRoute":::
The following sample method shows how to create, list, and delete an event route
As described above, routes have a **filter** field. If the filter value on your route is `false`, no events will be sent to your endpoint.
-After enabling the minimal filter of `true`, endpoints will receive a variety of events from Azure Digital Twins:
+After enabling the minimal filter of `true`, endpoints will receive different kinds of events from Azure Digital Twins:
* Telemetry fired by [digital twins](concepts-twins-graph.md) using the Azure Digital Twins service API * Twin property change notifications, fired on property changes for any twin in the Azure Digital Twins instance * Life-cycle events, fired when twins or relationships are created or deleted
You can restrict the types of events being sent by defining a more-specific filt
# [Portal](#tab/portal3)
-To add an event filter while you are creating an event route, use the "Add an event route filter" section of the *Create an event route* page.
+To add an event filter while you're creating an event route, use the "Add an event route filter" section of the *Create an event route* page.
You can either select from some basic common filter options, or use the advanced filter options to write your own custom filters.
To use the basic filters, expand the _Event types_ option and select the checkbo
:::column-end::: :::row-end:::
-This will auto-populate the filter text box with the text of the filter you've selected:
+Doing so will autopopulate the filter text box with the text of the filter you've selected:
:::row::: :::column:::
- :::image type="content" source="media/how-to-manage-routes/create-event-route-filter-basic-2.png" alt-text="Screenshot of creating an event route with a basic filter in the Azure portal, highlighting the auto-populated filter text after selecting the events.":::
+ :::image type="content" source="media/how-to-manage-routes/create-event-route-filter-basic-2.png" alt-text="Screenshot of creating an event route with a basic filter in the Azure portal, highlighting the autopopulated filter text after selecting the events.":::
:::column-end::: :::column::: :::column-end:::
This will auto-populate the filter text box with the text of the filter you've s
### Use the advanced filters
-Alternatively, you can use the advanced filter option to write your own custom filters.
+You can also use the advanced filter option to write your own custom filters.
To create an event route with advanced filter options, toggle the switch for the _Advanced editor_ to enable it. You can then write your own event filters in the *Filter* box:
Here are the supported route filters.
| True / False | Allows creating a route with no filtering, or disabling a route so no events are sent | `<true/false>` | `true` = route is enabled with no filtering <br> `false` = route is disabled |
| Type | The [type of event](concepts-route-events.md#types-of-event-messages) flowing through your digital twin instance | `type = '<event-type>'` | Here are the possible event type values: <br>`Microsoft.DigitalTwins.Twin.Create` <br> `Microsoft.DigitalTwins.Twin.Delete` <br> `Microsoft.DigitalTwins.Twin.Update`<br>`Microsoft.DigitalTwins.Relationship.Create`<br>`Microsoft.DigitalTwins.Relationship.Update`<br> `Microsoft.DigitalTwins.Relationship.Delete` <br> `microsoft.iot.telemetry` |
| Source | Name of Azure Digital Twins instance | `source = '<host-name>'`| Here are the possible host name values: <br> **For notifications**: `<your-Digital-Twins-instance>.api.<your-region>.digitaltwins.azure.net` <br> **For telemetry**: `<your-Digital-Twins-instance>.api.<your-region>.digitaltwins.azure.net/<twin-ID>`|
-| Subject | A description of the event in the context of the event source above | `subject = '<subject>'` | Here are the possible subject values: <br>**For notifications**: The subject is `<twin-ID>` <br> or a URI format for subjects, which are uniquely identified by multiple parts or IDs:<br>`<twin-ID>/relationships/<relationship-ID>`<br> **For telemetry**: The subject is the component path (if the telemetry is emitted from a twin component), such as `comp1.comp2`. If the telemetry is not emitted from a component, then its subject field is empty. |
+| Subject | A description of the event in the context of the event source above | `subject = '<subject>'` | Here are the possible subject values: <br>**For notifications**: The subject is `<twin-ID>` <br> or a URI format for subjects, which are uniquely identified by multiple parts or IDs:<br>`<twin-ID>/relationships/<relationship-ID>`<br> **For telemetry**: The subject is the component path (if the telemetry is emitted from a twin component), such as `comp1.comp2`. If the telemetry isn't emitted from a component, then its subject field is empty. |
| Data schema | DTDL model ID | `dataschema = '<model-dtmi-ID>'` | **For telemetry**: The data schema is the model ID of the twin or the component that emits the telemetry. For example, `dtmi:example:com:floor4;2` <br>**For notifications (create/delete)**: Data schema can be accessed in the notification body at `$body.$metadata.$model`. <br>**For notifications (update)**: Data schema can be accessed in the notification body at `$body.modelId`|
| Content type | Content type of data value | `datacontenttype = '<content-type>'` | The content type is `application/json` |
-| Spec version | The version of the event schema you are using | `specversion = '<version>'` | The version must be `1.0`. This indicates the CloudEvents schema version 1.0 |
+| Spec version | The version of the event schema you're using | `specversion = '<version>'` | The version must be `1.0`. This value indicates the CloudEvents schema version 1.0 |
| Notification body | Reference any property in the `data` field of a notification | `$body.<property>` | See [Event notifications](concepts-event-notifications.md) for examples of notifications. Any property in the `data` field can be referenced using `$body` |

>[!NOTE]
digital-twins How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-move-regions.md
# Mandatory fields. Title: Move instance to a different Azure region
-description: See how to move an Azure Digital Twins instance from one Azure region to another.
+description: Learn how to move an Azure Digital Twins instance from one Azure region to another.
Previously updated : 12/15/2021 Last updated : 1/5/2022
# Move an Azure Digital Twins instance to a different Azure region
-If you need to move your Azure Digital Twins instance from one region to another, the current process is to recreate your resources in the new region. Once the resources have been recreated in the new region, the original resources are deleted. At the end of this process, you'll be working with a new Azure Digital Twins instance that's identical to the first, except for the updated location.
+This article provides guidance on how to do a complete move of an Azure Digital Twins instance to a different Azure region and copy over everything you'll need to make the new instance match the original.
-This article provides guidance on how to do a complete move and copy over everything you'll need to make the new instance match the original.
+If you need to move your Azure Digital Twins instance from one region to another, the current process is to recreate your resources in the new region. Once the resources have been recreated in the new region, the original resources are deleted. At the end of this process, you'll be working with a new Azure Digital Twins instance that's identical to the first, except for the updated location.
## Prerequisites
You should see your graph with all its twins and relationships displayed in the
:::image type="content" source="media/how-to-move-regions/post-upload.png" alt-text="Screenshot of the Azure Digital Twins Explorer showing two models highlighted in the Models box and a graph highlighted in the Twin Graph box." lightbox="media/how-to-move-regions/post-upload.png":::
-These views confirm that your models, twins, and graph were re-uploaded to the new instance in the target region.
+These views confirm that your models, twins, and graph were reuploaded to the new instance in the target region.
#### Recreate endpoints and routes
digital-twins How To Parse Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-parse-models.md
description: Learn how to use the parser library to parse DTDL models. Previously updated : 9/2/2021 Last updated : 1/6/2022
# Parse and validate models with the DTDL parser library
+This article describes how to parse and validate Azure Digital Twins models using the DTDL validator sample or the .NET parser library.
[Models](concepts-models.md) in Azure Digital Twins are defined using the JSON-LD-based Digital Twins Definition Language (DTDL). **It is recommended to validate your models offline before uploading them to your Azure Digital Twins instance.** To help you validate your models, a .NET client-side DTDL parsing library is provided on NuGet: [Microsoft.Azure.DigitalTwins.Parser](https://nuget.org/packages/Microsoft.Azure.DigitalTwins.Parser/).
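As a quick sketch, the parser package can be added to a .NET project from the command line:

```cmd/sh
dotnet add package Microsoft.Azure.DigitalTwins.Parser
```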
digital-twins How To Provision Using Device Provisioning Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-provision-using-device-provisioning-service.md
# Mandatory fields. Title: Auto-manage devices using Device Provisioning Service
+ Title: Automanage devices using Device Provisioning Service
-description: See how to set up an automated process to provision and retire IoT devices in Azure Digital Twins using Device Provisioning Service (DPS).
+description: Learn how to set up an automated process to provision and retire IoT devices in Azure Digital Twins using Device Provisioning Service (DPS).
Previously updated : 3/21/2021 Last updated : 1/6/2022
#
-# Auto-manage devices in Azure Digital Twins using Device Provisioning Service (DPS)
+# Automanage devices in Azure Digital Twins using Device Provisioning Service (DPS)
In this article, you'll learn how to integrate Azure Digital Twins with [Device Provisioning Service (DPS)](../iot-dps/about-iot-dps.md).
For more information about the _provision_ and _retire_ stages, and to better un
## Prerequisites Before you can set up the provisioning, you'll need to set up the following resources:
-* an **Azure Digital Twins instance**. Follow the instructions in [Set up an instance and authentication](how-to-set-up-instance-portal.md) to create an Azure digital twins instance. Gather the instance's **_host name_** in the Azure portal ([instructions](how-to-set-up-instance-portal.md#verify-success-and-collect-important-values)).
-* an **IoT hub**. For instructions, see the "Create an IoT Hub" section of [the IoT Hub quickstart](../iot-hub/quickstart-send-telemetry-cli.md).
-* an [Azure function](../azure-functions/functions-overview.md) that updates digital twin information based on IoT Hub data. Follow the instructions in [Ingest IoT hub data](how-to-ingest-iot-hub-data.md) to create this Azure function. Gather the function **_name_** to use it in this article.
+* An **Azure Digital Twins instance**. Follow the instructions in [Set up an instance and authentication](how-to-set-up-instance-portal.md) to create an Azure Digital Twins instance. Gather the instance's **_host name_** in the Azure portal ([instructions](how-to-set-up-instance-portal.md#verify-success-and-collect-important-values)); a CLI alternative is sketched after this list.
+* An **IoT hub**. For instructions, see the "Create an IoT Hub" section of [the IoT Hub quickstart](../iot-hub/quickstart-send-telemetry-cli.md).
+* An [Azure function](../azure-functions/functions-overview.md) that updates digital twin information based on IoT Hub data. Follow the instructions in [Ingest IoT hub data](how-to-ingest-iot-hub-data.md) to create this Azure function. Gather the function **_name_** to use it in this article.
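As a CLI alternative to gathering the host name in the portal, a sketch like the following could work; it assumes the Azure IoT CLI extension that provides `az dt` is installed and that `hostName` is exposed in the `az dt show` output.

```azurecli-interactive
# Look up the Azure Digital Twins instance host name (sketch; property name assumed)
az dt show --dt-name <your-Azure-Digital-Twins-instance> --query "hostName"
```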
This sample also uses a **device simulator** that includes provisioning using the Device Provisioning Service. The device simulator is located here: [Azure Digital Twins and IoT Hub Integration Sample](/samples/azure-samples/digital-twins-iothub-integration/adt-iothub-provision-sample/). Get the sample project on your machine by navigating to the sample link and selecting the **Browse code** button underneath the title. This button will take you to the GitHub repo for the sample, which you can download as a .zip file by selecting the **Code** button and **Download ZIP**.
The image below illustrates this architecture.
:::image type="content" source="media/how-to-provision-using-device-provisioning-service/flows.png" alt-text="Diagram of device and several Azure services in an end-to-end scenario showing the data flow." lightbox="media/how-to-provision-using-device-provisioning-service/flows.png"::: This article is divided into two sections, each focused on a portion of this full architecture:
-* [Auto-provision device using Device Provisioning Service](#auto-provision-device-using-device-provisioning-service)
-* [Auto-retire device using IoT Hub lifecycle events](#auto-retire-device-using-iot-hub-lifecycle-events)
+* [Autoprovision device using Device Provisioning Service](#autoprovision-device-using-device-provisioning-service)
+* [Autoretire device using IoT Hub lifecycle events](#autoretire-device-using-iot-hub-lifecycle-events)
-## Auto-provision device using Device Provisioning Service
+## Autoprovision device using Device Provisioning Service
-In this section, you'll be attaching Device Provisioning Service to Azure Digital Twins to auto-provision devices through the path below. This diagram is an excerpt from the full architecture shown [earlier](#solution-architecture).
+In this section, you'll be attaching Device Provisioning Service to Azure Digital Twins to autoprovision devices through the path below. This diagram is an excerpt from the full architecture shown [earlier](#solution-architecture).
:::image type="content" source="media/how-to-provision-using-device-provisioning-service/provision.png" alt-text="Diagram of Provision flowΓÇöan excerpt of the solution architecture diagram following data from a thermostat into Azure Digital Twins." lightbox="media/how-to-provision-using-device-provisioning-service/provision.png":::
Here's a description of the process flow:
1. Device contacts the DPS endpoint, passing identifying information to prove its identity. 2. DPS validates the device identity by checking the registration ID and key against the enrollment list, and calls an [Azure function](../azure-functions/functions-overview.md) to do the allocation. 3. The Azure function creates a new [twin](concepts-twins-graph.md) in Azure Digital Twins for the device. The twin will have the same name as the device's **registration ID**.
-4. DPS registers the device with an IoT hub, and populates the device's desired twin state.
+4. DPS registers the device with an IoT hub, and populates the device's chosen twin state.
5. The IoT hub returns device ID information and the IoT hub connection information to the device. The device can now connect to the IoT hub.
-The following sections walk through the steps to set up this auto-provision device flow.
+The following sections walk through the steps to set up this autoprovision device flow.
### Create a Device Provisioning Service
While going through that flow, make sure you select the following options to lin
Next, choose the *Select a new function* button to link your function app to the enrollment group. Then, fill in the following values:
-* **Subscription**: Your Azure subscription is auto-populated. Make sure it's the right subscription.
+* **Subscription**: Your Azure subscription is autopopulated. Make sure it's the right subscription.
* **Function App**: Choose your function app name. * **Function**: Choose DpsAdtAllocationFunc.
az dt twin show --dt-name <Digital-Twins-instance-name> --twin-id "<Device-Regis
You should see that the twin of the device is found in the Azure Digital Twins instance. :::image type="content" source="media/how-to-provision-using-device-provisioning-service/show-provisioned-twin.png" alt-text="Screenshot of the Command window showing newly created twin." lightbox="media/how-to-provision-using-device-provisioning-service/show-provisioned-twin.png":::
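For reference, here's a sketch of the full verification command, assuming the device registration ID you chose earlier is used as the twin ID:

```azurecli-interactive
# Look up the twin that was created for the device during provisioning.
# Both values are placeholders for your instance name and chosen registration ID.
az dt twin show --dt-name <Digital-Twins-instance-name> --twin-id "<Device-Registration-ID>"
```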
-## Auto-retire device using IoT Hub lifecycle events
+## Autoretire device using IoT Hub lifecycle events
-In this section, you'll be attaching IoT Hub lifecycle events to Azure Digital Twins to auto-retire devices through the path below. This diagram is an excerpt from the full architecture shown [earlier](#solution-architecture).
+In this section, you'll be attaching IoT Hub lifecycle events to Azure Digital Twins to autoretire devices through the path below. This diagram is an excerpt from the full architecture shown [earlier](#solution-architecture).
:::image type="content" source="media/how-to-provision-using-device-provisioning-service/retire.png" alt-text="Diagram of the Retire device flowΓÇöan excerpt of the solution architecture diagram, following data from a device deletion into Azure Digital Twins." lightbox="media/how-to-provision-using-device-provisioning-service/retire.png":::
Here's a description of the process flow:
2. IoT Hub deletes the device and generates a [device lifecycle](../iot-hub/iot-hub-device-management-overview.md#device-lifecycle) event that will be routed to an [event hub](../event-hubs/event-hubs-about.md). 3. An Azure function deletes the twin of the device in Azure Digital Twins.
-The following sections walk through the steps to set up this auto-retire device flow.
+The following sections walk through the steps to set up this autoretire device flow.
### Create an event hub
Follow these steps to create an event hub endpoint:
1. In the [Azure portal](https://portal.azure.com/), navigate to the IoT hub you created in the [Prerequisites section](#prerequisites) and select **Message routing** in the menu options on the left. 2. Select the **Custom endpoints** tab.
-3. Select **+ Add** and choose **Event hubs** to add an event hubs type endpoint.
+3. Select **+ Add** and choose **Event hubs** to add an Event Hubs type endpoint.
:::image type="content" source="media/how-to-provision-using-device-provisioning-service/event-hub-custom-endpoint.png" alt-text="Screenshot of the Visual Studio window showing how to add an event hub custom endpoint." lightbox="media/how-to-provision-using-device-provisioning-service/event-hub-custom-endpoint.png":::
Next, you'll add a route that connects to the endpoint you created in the above
2. In the *Add a route* page that opens, choose the following values: * **Name**: Choose a name for your route.
- * **Endpoint**: Choose the event hubs endpoint you created earlier from the dropdown.
+ * **Endpoint**: Choose the Event Hubs endpoint you created earlier from the dropdown.
* **Data source**: Choose *Device Lifecycle Events*. * **Routing query**: Enter `opType='deleteDeviceIdentity'`. This query limits the device lifecycle events to only send the delete events.
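If you prefer to script this step, a similar route can be created with `az iot hub route create`. The names below are placeholders, and the command is a sketch rather than the article's exact steps:

```azurecli-interactive
# Route only device delete lifecycle events to the custom Event Hubs endpoint.
az iot hub route create --hub-name <iot-hub-name> --route-name <route-name> \
    --source-type devicelifecycleevents --endpoint-name <event-hub-endpoint-name> \
    --condition "opType='deleteDeviceIdentity'" --enabled true
```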
You can manually delete the device from IoT Hub with an [Azure CLI command](/cli
Follow the steps below to delete the device in the Azure portal: 1. Navigate to your IoT hub, and choose **IoT devices** in the menu options on the left.
-2. You'll see a device with the device registration ID you chose in the [first half of this article](#auto-provision-device-using-device-provisioning-service). You can also choose any other device to delete, as long as it has a twin in Azure Digital Twins so you can verify that the twin is automatically deleted after the device is deleted.
+2. You'll see a device with the device registration ID you chose in the [first half of this article](#autoprovision-device-using-device-provisioning-service). You can also choose any other device to delete, as long as it has a twin in Azure Digital Twins so you can verify that the twin is automatically deleted after the device is deleted.
3. Select the device and choose **Delete**. :::image type="content" source="media/how-to-provision-using-device-provisioning-service/delete-device-twin.png" alt-text="Screenshot of the Azure portal showing how to delete device twin from the IoT devices." lightbox="media/how-to-provision-using-device-provisioning-service/delete-device-twin.png":::
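Alternatively, here's a sketch of deleting the device with the Azure CLI, assuming the device ID matches the registration ID used earlier:

```azurecli-interactive
# Delete the device identity; the routed lifecycle event then triggers the
# function that removes the corresponding twin from Azure Digital Twins.
az iot hub device-identity delete --hub-name <iot-hub-name> --device-id "<Device-Registration-ID>"
```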
You should see that the twin of the device cannot be found in the Azure Digital
If you no longer need the resources created in this article, follow these steps to delete them.
-Using the Azure Cloud Shell or local Azure CLI, you can delete all Azure resources in a resource group with the [az group delete](/cli/azure/group#az_group_delete) command. This command removes the resource group; the Azure Digital Twins instance; the IoT hub and the hub device registration; the event grid topic and associated subscriptions; the event hubs namespace and both Azure Functions apps, including associated resources like storage.
+Using the Azure Cloud Shell or local Azure CLI, you can delete all Azure resources in a resource group with the [az group delete](/cli/azure/group#az_group_delete) command. This command removes the resource group; the Azure Digital Twins instance; the IoT hub and the hub device registration; the Event Grid topic and associated subscriptions; the Event Hubs namespace and both Azure Functions apps, including associated resources like storage.
> [!IMPORTANT] > Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources.
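For reference, here's a minimal sketch of that cleanup command, assuming all of the resources live in a single resource group:

```azurecli-interactive
# Permanently delete the resource group and every resource inside it.
az group delete --name <resource-group-name>
```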
digital-twins How To Send Twin To Twin Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-send-twin-to-twin-events.md
# Mandatory fields. Title: Set up twin-to-twin event handling
-description: See how to create a function in Azure for propagating events through the twin graph.
+description: Learn how to create a function in Azure for propagating events through the twin graph.
Last updated 1/07/2022
# Set up twin-to-twin event handling
-This article shows how to **send events from twin to twin**, so that when one digital twin in the graph is updated, related twins in the graph that are affected by this information can update accordingly. This will help you create a fully-connected Azure Digital Twins graph, where data that arrives into Azure Digital Twins from external sources like IoT Hub is propagated through the entire graph.
+This article shows how to **send events from twin to twin**, so that when one digital twin in the graph is updated, related twins in the graph that are affected by this information can also update. This event handling will help you create a fully connected Azure Digital Twins graph, where data that arrives into Azure Digital Twins from external sources like IoT Hub is propagated through the entire graph.
-To set up this twin-to-twin event handling, you'll create an [Azure function](../azure-functions/functions-overview.md) that watches for twin life cycle events. The function recognizes which events should affect other twins in the graph, and uses the event data to update the affected twins accordingly.
+To set up this twin-to-twin event handling, you'll create an [Azure function](../azure-functions/functions-overview.md) that watches for twin life-cycle events. The function recognizes which events should affect other twins in the graph, and uses the event data to update the affected twins accordingly.
## Prerequisites
This article uses **Visual Studio**. You can download the latest version from [V
To set up twin-to-twin handling, you'll need an **Azure Digital Twins instance** to work with. For instructions on how to create an instance, see [Set up an Azure Digital Twins instance and authentication](./how-to-set-up-instance-portal.md). The instance should contain at least **two twins** that you want to send data between.
-Optionally, you may want to set up [automatic telemetry ingestion through IoT Hub](how-to-ingest-iot-hub-data.md) for your twins as well. This is not required in order to send data from twin to twin, but it's an important piece of a complete solution where the twin graph is driven by live telemetry.
+Optionally, you may want to set up [automatic telemetry ingestion through IoT Hub](how-to-ingest-iot-hub-data.md) for your twins as well. This process isn't required to send data from twin to twin, but it's an important piece of a complete solution where the twin graph is driven by live telemetry.
## Send twin events to an endpoint
To set up twin-to-twin event handling, start by creating an **endpoint** in Azur
Next, create an Azure function that will listen on the endpoint and receive twin events that are sent there via the route. The logic of the function should use the information in the events to determine what other twins need to be updated and then perform the updates.
-1. First, create an Azure Functions project in Visual Studio on your machine. For instructions on how to do this, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#create-an-azure-functions-project).
+1. First, create an Azure Functions project in Visual Studio on your machine. For instructions on how to do so, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#create-an-azure-functions-project).
2. Add the following packages to your project (you can use the Visual Studio NuGet package manager or `dotnet` commands in a command-line tool).
Before your function can access Azure Digital Twins, it needs some information a
## Connect the function to the endpoint
-Next, subscribe your Azure function to the Event Grid endpoint you created earlier. This will ensure that data can flow from an updated twin through the Event Grid topic to the function, which can use the event information to update other twins as needed.
+Next, subscribe your Azure function to the Event Grid endpoint you created earlier. Doing so will ensure that data can flow from an updated twin through the Event Grid topic to the function, which can use the event information to update other twins as needed.
-To do this, you'll create an **Event Grid subscription** that sends data from the Event Grid topic that you created earlier to your Azure function.
+To subscribe your Azure function, you'll create an **Event Grid subscription** that sends data from the Event Grid topic that you created earlier to your Azure function.
Use the following CLI command, filling in placeholders for your subscription ID, resource group, function app, and function name.
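A sketch of what such a subscription can look like with `az eventgrid event-subscription create`; the resource IDs and names below are placeholders and assumptions, not values from this article:

```azurecli-interactive
# Subscribe the Azure function to the Event Grid topic that receives twin events.
az eventgrid event-subscription create --name <event-subscription-name> \
    --source-resource-id /subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/topics/<event-grid-topic> \
    --endpoint-type azurefunction \
    --endpoint /subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<function-app>/functions/<function-name>
```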
digital-twins How To Use Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-azure-digital-twins-explorer.md
# Mandatory fields. Title: Use Azure Digital Twins Explorer
-description: Understand how to use the features of Azure Digital Twins Explorer
+description: Learn how to use the features of Azure Digital Twins Explorer
Previously updated : 10/19/2021 Last updated : 1/5/2022
Once in the application, you're also able to change which instance is currently
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/instance-url-1.png" alt-text="Screenshot of Azure Digital Twins Explorer. The instance name in the top toolbar is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/instance-url-1.png":::
-This will bring up the **Azure Digital Twins URL modal**, where you can enter the host name of another Azure Digital Twins instance after the *https://* to connect to that instance.
+Doing so will bring up the **Azure Digital Twins URL modal**, where you can enter the host name of another Azure Digital Twins instance after the *https://* to connect to that instance.
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/instance-url-2.png" alt-text="Screenshot of Azure Digital Twins Explorer. The Azure Digital Twins URL modal displays an editable box containing https:// and a host name." lightbox="media/how-to-use-azure-digital-twins-explorer/instance-url-2.png":::
This will bring up the **Azure Digital Twins URL modal**, where you can enter th
## Query your digital twin graph
-You can use the **Query Explorer** panel to perform [queries](concepts-query-language.md) on your graph.
+You can use the **Query Explorer** panel to run [queries](concepts-query-language.md) on your graph.
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/query-explorer-panel.png" alt-text="Screenshot of Azure Digital Twins Explorer. The Query Explorer panel is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/query-explorer-panel.png":::
-Enter the query you want to run and select the **Run Query** button. This will load the query results in the **Twin Graph** panel.
+Enter the query you want to run and select the **Run Query** button. Doing so will load the query results in the **Twin Graph** panel.
>[!NOTE] > Query results containing relationships can only be rendered in the **Twin Graph** panel if the results include at least one twin as well. While queries that return only relationships are possible in Azure Digital Twins, you can only view them in Azure Digital Twins Explorer by using the [Output panel](#accessibility-and-advanced-settings).
You can check the **Overlay results** box before running your query if you'd lik
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/query-explorer-panel-overlay-results.png" alt-text="Screenshot of Azure Digital Twins Explorer Query Explorer panel. The Overlay results box is checked, and two twins are highlighted in the larger graph that is shown in the Twin Graph panel." lightbox="media/how-to-use-azure-digital-twins-explorer/query-explorer-panel-overlay-results.png":::
-If the query result includes something that is not currently being shown in the **Twin Graph** panel, the element will be added onto the existing view.
+If the query result includes something that isn't currently being shown in the **Twin Graph** panel, the element will be added onto the existing view.
### Save and rerun queries
To save a query, enter it into the query box and select the **Save** icon at the
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/query-explorer-panel-save.png" alt-text="Screenshot of Azure Digital Twins Explorer Query Explorer panel. The Save icon is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/query-explorer-panel-save.png":::
-Once the query has been saved, it is available to select from the **Saved Queries** dropdown menu to easily run it again.
+Once the query has been saved, it's available to select from the **Saved Queries** dropdown menu to easily run it again.
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/query-explorer-panel-saved-queries.png" alt-text="Screenshot of Azure Digital Twins Explorer Query Explorer panel. The Saved Queries dropdown menu is highlighted and shows two sample queries." lightbox="media/how-to-use-azure-digital-twins-explorer/query-explorer-panel-saved-queries.png":::
Run a query using the [Query Explorer](#query-your-digital-twin-graph) to see th
#### View twin and relationship properties
-To view the property values of a twin or a relationship, select the twin or relationship in the **Twin Graph** and use the **Toggle property inspector** button to expand the **Twin Properties** or **Relationship Properties** panel, respectively. This panel will display all the properties associated with the element, along with their values. It also includes default values for properties that have not yet been set.
+To view the property values of a twin or a relationship, select the twin or relationship in the **Twin Graph** and use the **Toggle property inspector** button to expand the **Twin Properties** or **Relationship Properties** panel, respectively. This panel will display all the properties associated with the element, along with their values. It also includes default values for properties that haven't yet been set.
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel-highlight-graph-properties.png" alt-text="Screenshot of Azure Digital Twins Explorer Twin Graph panel. The FactoryA twin is selected, and the Twin Properties panel is expanded, showing the properties of the twin." lightbox="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel-highlight-graph-properties.png":::
The table below shows the possible data types and their corresponding icons. The
The Twin Properties panel will display error messages if the twin or some of its properties no longer match its model. There are two possible error scenarios that each give their own error message:
-* **One or many models used by the twin are missing**. As a result, all the properties associated with that model will be flagged as "missing" in the Twin Properties panel. This can happen if the model has been deleted since the twin was created.
-* **Some properties on the twin are not part of the twin's model**. Only these properties will be flagged as "missing" in the Twin Properties panel. This can happen if the model for the twin has been replaced or changed since the properties were set, and the properties no longer exist in the most recent version of the model.
+* **One or many models used by the twin are missing**. As a result, all the properties associated with that model will be flagged as "missing" in the Twin Properties panel. This error can happen if the model has been deleted since the twin was created.
+* **Some properties on the twin are not part of the twin's model**. Only these properties will be flagged as "missing" in the Twin Properties panel. This error can happen if the model for the twin has been replaced or changed since the properties were set, and the properties no longer exist in the most recent version of the model.
Both of these error messages are shown in the screenshot below:
Both of these error messages are shown in the screenshot below:
You can also quickly view the code of all relationships that involve a certain twin (including incoming and outgoing relationships).
-To do this, right-click a twin in the graph, and choose **Get relationships**. This brings up a **Relationship Information** modal displaying the [JSON representation](concepts-twins-graph.md#relationship-json-format) of all incoming and outgoing relationships.
+Right-click a twin in the graph, and choose **Get relationships**. Doing so brings up a **Relationship Information** modal displaying the [JSON representation](concepts-twins-graph.md#relationship-json-format) of all incoming and outgoing relationships.
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel-get-relationships.png" alt-text="Screenshot of Azure Digital Twins Explorer Twin Graph panel. The center of the screen displays a Relationship Information modal showing Incoming and Outgoing relationships." lightbox="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel-get-relationships.png":::
To change the display property, use the **Select Twin Display Name Property** dr
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/twin-graph-display-name-property.png" alt-text="Screenshot of Azure Digital Twins Explorer Twin Graph panel. The Select Twin Display Name Property button is highlighted, showing a menu that lists different properties of the twins in the graph." lightbox="media/how-to-use-azure-digital-twins-explorer/twin-graph-display-name-property.png":::
-If you choose a property that isn't present on every twin, any twins in the graph that do not have that property will display with an asterisk (*) followed by their `$dtId` value.
+If you choose a property that isn't present on every twin, any twins in the graph that don't have that property will display with an asterisk (*) followed by their `$dtId` value.
### Edit twin graph layout
To set the number of layers to expand, use the **Expansion Level** option. This
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel-expansion-level.png" alt-text="Screenshot of Azure Digital Twins Explorer Twin Graph panel. The Expansion Level button is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel-expansion-level.png":::
-To indicate which types of relationships to follow when expanding, use the **Expansion Direction** button. This allows you to select from just incoming, just outgoing, or both incoming and outgoing relationships.
+To indicate which types of relationships to follow when expanding, use the **Expansion Direction** button. Doing so allows you to select from incoming, outgoing, or *both* incoming and outgoing relationships.
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel-expansion-direction.png" alt-text="Screenshot of Azure Digital Twins Explorer Twin Graph panel. The Expansion Direction button is highlighted, showing a menu with the options In, Out, and In/Out." lightbox="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel-expansion-direction.png":::
To indicate which types of relationships to follow when expanding, use the **Exp
You can toggle the option to hide twins or relationships from the graph view.
-To hide a twin or relationship, right-click it in the **Twin Graph** window. This will bring up a menu with options to hide the element or other related elements.
+To hide a twin or relationship, right-click it in the **Twin Graph** window. Doing so will bring up a menu with options to hide the element or other related elements.
You can also hide multiple twins or relationships at once by using the CTRL/CMD or SHIFT keys to multi-select several elements of the same type in the graph. From here, follow the same right-click process to see the hide options.
To add property values to your twin, see [Edit twin and relationship properties]
To create a relationship between two twins, start by selecting the source twin for the relationship in the **Twin Graph** window. Next, hold down a CTRL/CMD or SHIFT key while you select a second twin to be the target of the relationship.
-Once the two twins are simultaneously selected, right-click the target twin. This will bring up a menu with an option to **Add relationships** between them.
+Once the two twins are simultaneously selected, right-click the target twin to bring up a menu with an option to **Add relationships** between them.
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel-add-relationship.png" alt-text="Screenshot of Azure Digital Twins Explorer Twin Graph panel. The FactoryA and Consumer twins are selected, and a menu shows the option to Add relationships." lightbox="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel-add-relationship.png":::
-This will bring up the **Create Relationship** dialog, which shows the source twin and target twin of the relationship, followed by a **Relationship** dropdown menu that contains the types of relationship that the source twin can have (defined in its DTDL model). Select an option for the relationship type, and **Save** the new relationship.
+Doing so will bring up the **Create Relationship** dialog, which shows the source twin and target twin of the relationship, followed by a **Relationship** dropdown menu that contains the types of relationship that the source twin can have (defined in its DTDL model). Select an option for the relationship type, and **Save** the new relationship.
### Edit twin and relationship properties
You can use this panel to directly edit writable properties. Update their values
### Delete twins and relationships
-To delete a twin or a relationship, right-click it in the **Twin Graph** window. This will bring up a menu with an option to delete the element.
+To delete a twin or a relationship, right-click it in the **Twin Graph** window. Doing so will bring up a menu with an option to delete the element.
You can delete multiple twins or multiple relationships at once, by using the CTRL/CMD or SHIFT keys to multi-select several elements of the same type in the graph. From here, follow the same right-click process to delete the elements.
You can use the **Model Graph** panel to view a graphical representation of the
#### View model definition
-To see the full definition of a model, find that model in the **Models** pane and select the menu dots next to the model name. Then, select **View Model**. This will display a **Model Information** modal showing the raw DTDL definition of the model.
+To see the full definition of a model, find that model in the **Models** pane and select the menu dots next to the model name. Then, select **View Model**. Doing so will display a **Model Information** modal showing the raw DTDL definition of the model.
:::row::: :::column:::
To upload an image for a single model, find that model in the **Models** panel a
You can also upload model images in bulk.
-First, use the following instructions to set the image file names before uploading. This enables Azure Digital Twins Explorer to automatically assign the images to the correct models after upload.
+First, use the following instructions to set the image file names before uploading. Doing so enables Azure Digital Twins Explorer to automatically assign the images to the correct models after upload.
1. Start with the model ID value (for example, `dtmi:example:Floor;1`) 1. Replace instances of ":" with "_" (the example becomes `dtmi_example_Floor;1`) 1. Replace instances of ";" with "-" (the example becomes `dtmi_example_Floor-1`)
To delete all of the models in your instance at once, choose the **Delete All Mo
When you open Azure Digital Twins Explorer, the Models panel should automatically show all available models in your environment.
-However, you can manually refresh the panel at any time to reload the list of all models in your Azure Digital Twins instance. To do this, select the **Refresh models** icon.
+However, you can manually refresh the panel at any time to reload the list of all models in your Azure Digital Twins instance. To do so, select the **Refresh models** icon.
:::row::: :::column:::
From the **Twin Graph** panel, you have the options to [import](#import-graph) a
### Import graph
-You can use the import feature to add twins, relationships, and models to your instance. This can be useful for creating many twins, relationships, and/or models at once.
+You can use the import feature to add twins, relationships, and models to your instance. This feature can be useful for creating many twins, relationships, and/or models at once.
#### Create import file The first step in importing a graph is creating a file representing the twins and relationships you want to add. The import file can be in either of these two formats:
-* The **custom Excel-based format** described in the remainder of this section. This allows you to upload twins and relationships.
-* The **JSON-based format** generated on [graph export](#export-graph-and-models). This can contain twins, relationships, and/or models.
+* The **custom Excel-based format** described in the rest of this section. This format allows you to upload twins and relationships.
+* The **JSON-based format** generated on [graph export](#export-graph-and-models). This format can contain twins, relationships, and/or models.
To create a custom graph in Excel, use the following format.
Use the following columns to structure the twin or relationship data. The column
| ModelID | ID | Relationship (source) | Relationship Name | Init Data |
| --- | --- | --- | --- | --- |
-| *Optional*<br>The DTMI model ID for a twin that should be created.<br><br>You can leave this column blank for a row if you want that row to create only a relationship (no twins). | *Required*<br>The unique ID for a twin.<br><br>If a new twin is being created in this row, this will be the ID of the new twin.<br>If there is relationship information in the row, this ID will be used as the **target** of the relationship. | *Optional*<br>The ID of a twin that should be the **source** twin for a new relationship.<br><br>You can leave this column blank for a row if you want that row to create only a twin (no relationships). | *Optional*<br>The name for the new relationship to create. The relationship direction will be **from** the twin in column C **to** the twin in column B. | *Optional*<br>A JSON string containing property settings for the twin to be created. The properties must match those defined in the model from column A. |
+| *Optional*<br>The DTMI model ID for a twin that should be created.<br><br>You can leave this column blank for a row if you want that row to create only a relationship (no twins). | *Required*<br>The unique ID for a twin.<br><br>If a new twin is being created in this row, this value will be the ID of the new twin.<br>If there's relationship information in the row, this ID will be used as the **target** of the relationship. | *Optional*<br>The ID of a twin that should be the **source** twin for a new relationship.<br><br>You can leave this column blank for a row if you want that row to create only a twin (no relationships). | *Optional*<br>The name for the new relationship to create. The relationship direction will be **from** the twin in column C **to** the twin in column B. | *Optional*<br>A JSON string containing property settings for the twin to be created. The properties must match the ones defined in the model from column A. |
-Here is an example .xlsx file creating a small graph of two floors and two rooms.
+Here's an example .xlsx file creating a small graph of two floors and two rooms.
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/import-example.png" alt-text="Screenshot of graph data in Excel. The column headers correspond to the fields above, in order, and the rows contain corresponding data values." lightbox="media/how-to-use-azure-digital-twins-explorer/import-example.png":::
-You can view this file and additional .xlsx graph examples in the [Azure Digital Twins Explorer repository](https://github.com/Azure-Samples/digital-twins-explorer/tree/main/client/examples) on GitHub.
+You can view this file and other .xlsx graph examples in the [Azure Digital Twins Explorer repository](https://github.com/Azure-Samples/digital-twins-explorer/tree/main/client/examples) on GitHub.
>[!NOTE] >The properties and relationships described in the .xlsx must match what's defined in the model definitions for the related twins.
If import is successful, a modal window will display the number of models, twins
You can use the export feature to export partial or complete graphs, including models, twins, and relationships. Export serializes the twins and relationships from the most recent query results, as well as all models in the instance, to a JSON-based format that you can download to your machine.
-To begin, use the [Query Explorer](#query-your-digital-twin-graph) panel to run a query that selects the twins and relationships that you want to download. This will populate them in the Twin Graph panel.
+To begin, use the [Query Explorer](#query-your-digital-twin-graph) panel to run a query that selects the twins and relationships that you want to download. Doing so will populate them in the Twin Graph panel.
>[!TIP] >The query to display all twins and relationships is `SELECT * FROM digitaltwins`.
To share your environment, you can send a link to the recipient that will open a
>[!NOTE] > The value for the host name placeholder is **not** preceded by *https://* here.
-Here is an example of a URL with the placeholder values filled in:
+Here's an example of a URL with the placeholder values filled in:
`https://explorer.digitaltwins.azure.net/?tid=00a000aa-00a0-00aa-0a0aa000aa00&eid=ADT-instance.api.wcus.digitaltwins.azure.net`
For the recipient to view the instance in the resulting Azure Digital Twins Expl
### Link with a query
-You may want to share an environment and specify a query to execute upon landing, to highlight a subgraph or custom view for a teammate. To do this, start with the URL for the environment and add the query text to the URL as a querystring parameter:
+You may want to share an environment and specify a query to execute upon landing, to highlight a subgraph or custom view for a teammate. To do so, start with the URL for the environment and add the query text to the URL as a querystring parameter:
`https://explorer.digitaltwins.azure.net/?tid=<tenant-ID>&eid=<Azure-Digital-Twins-host-name>&query=<query-text>`
digital-twins How To Use Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-tags.md
# Mandatory fields. Title: Add tags to digital twins
-description: See how to implement tags on digital twins
+description: Learn how to implement marker and value tags on models and digital twins
Previously updated : 8/19/2021 Last updated : 1/5/2022
# Add tags to digital twins
+This article describes how to add different types of tags to models and digital twins, and how to query using the tags.
+ You can use the concept of tags to further identify and categorize your digital twins. In particular, users may want to replicate tags from existing systems, such as [Haystack Tags](https://project-haystack.org/doc/appendix/tags), within their Azure Digital Twins instances. This document describes patterns that can be used to implement tags on digital twins.
-Tags are first added as properties within the [model](concepts-models.md) that describes a digital twin. That property is then set on the twin when it is created based on the model. After that, the tags can be used in [queries](concepts-query-language.md) to identify and filter your twins.
+Tags are first added as properties within the [model](concepts-models.md) that describes a digital twin. That property is then set on the twin when it's created based on the model. After that, the tags can be used in [queries](concepts-query-language.md) to identify and filter your twins.
## Marker tags
A **marker tag** is a simple string that is used to mark or categorize a digital
Marker tags are modeled as a [DTDL](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) Map from `string` to `boolean`. The boolean `mapValue` is ignored, as the presence of the tag is all that's important.
-Here is an excerpt from a twin model implementing a marker tag as a property:
+Here's an excerpt from a twin model implementing a marker tag as a property:
:::code language="json" source="~/digital-twins-docs-samples/models/tags.json" range="2-16":::
Here is an excerpt from a twin model implementing a marker tag as a property:
Once the `tags` property is part of a digital twin's model, you can set the marker tag in the digital twin by setting the value of this property.
-Here is a code example on how to set marker `tags` for a twin using the [.NET SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true):
+Here's a code example on how to set marker `tags` for a twin using the [.NET SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true):
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_other.cs" id="TagPropertiesCsharp":::
After creating the twin with tag properties according to the example above, the
Once tags have been added to digital twins, the tags can be used to filter the twins in queries.
-Here is a query to get all twins that have been tagged as "red":
+Here's a query to get all twins that have been tagged as "red":
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="QueryMarkerTags1":::
-You can also combine tags for more complex queries. Here is a query to get all twins that are round, and not red:
+You can also combine tags for more complex queries. Here's a query to get all twins that are round, and not red:
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="QueryMarkerTags2":::
A **value tag** is a key-value pair that is used to give each tag a value, such
Value tags are modeled as a [DTDL](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) Map from `string` to `string`. Both the `mapKey` and the `mapValue` are significant.
-Here is an excerpt from a twin model implementing a value tag as a property:
+Here's an excerpt from a twin model implementing a value tag as a property:
:::code language="json" source="~/digital-twins-docs-samples/models/tags.json" range="17-31":::
From the example above, `red` is being used as a marker tag. Remember that this
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="QueryMarkerTags1":::
-Here is a query to get all entities that are small (value tag), and not red:
+Here's a query to get all entities that are small (value tag), and not red:
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="QueryMarkerValueTags":::
digital-twins Resources Customer Data Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/resources-customer-data-requests.md
Title: Customer data request features for Azure Digital Twins
-description: This article shows processes for exporting and deleting personal data in Azure Digital Twins.
+description: Learn about the processes for exporting and deleting personal data in Azure Digital Twins.
Previously updated : 9/14/2021 Last updated : 12/30/2021
# Azure Digital Twins customer data request features
-Azure Digital Twins is a developer platform for creating secure digital representations of a business environment. Representations are driven by live state data from data sources selected by users.
+Azure Digital Twins is a developer platform for creating secure digital representations of a business environment. Representations are driven by live state data from data sources selected by users. This article shows processes for exporting and deleting personal data in Azure Digital Twins.
[!INCLUDE [gdpr-intro-sentence](../../includes/gdpr-intro-sentence.md)] The digital representations called *digital twins* in Azure Digital Twins represent entities in real-world environments, and are associated with identifiers. Microsoft maintains no information and has no access to data that would allow identifiers to be correlated to users.
-Many of the digital twins in Azure Digital Twins do not directly represent personal entities; typical objects represented might be an office meeting room, or a factory floor. However, users may consider some entities to be personally identifiable, and at their discretion may maintain their own asset or inventory tracking methods that tie digital twins to individuals. Azure Digital Twins manages and stores all data associated with digital twins as if it were personal data.
+Many of the digital twins in Azure Digital Twins don't directly represent personal entities; typical objects represented might be an office meeting room, or a factory floor. However, users may consider some entities to be personally identifiable, and at their discretion may maintain their own asset or inventory tracking methods that tie digital twins to individuals. Azure Digital Twins manages and stores all data associated with digital twins as if it were personal data.
To view, export, and delete personal data that may be referenced in a data subject request, an Azure Digital Twins administrator can use the [Azure portal](https://portal.azure.com/) for users and roles, or the [Azure Digital Twins REST APIs](/rest/api/azure-digitaltwins/) for digital twins. The Azure portal and REST APIs provide different methods for users to service such data subject requests.
To view, export, and delete personal data that may be referenced in a data subje
Azure Digital Twins considers *personal data* to be data associated with its administrators and users.
-Azure Digital Twins stores the [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) **object ID** of users with access to the environment. Azure Digital Twins in the Azure portal displays user email addresses, but these email addresses are not stored within Azure Digital Twins. They are dynamically looked up in Azure Active Directory, using the Azure Active Directory object ID.
+Azure Digital Twins stores the [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) **object ID** of users with access to the environment. Azure Digital Twins in the Azure portal displays user email addresses, but these email addresses aren't stored within Azure Digital Twins. They're dynamically looked up in Azure Active Directory, using the Azure Active Directory object ID.
## Deleting customer data
-Azure Digital Twins administrators can use the Azure portal to delete data related to users. It is also possible to perform delete operations on individual digital twins using the Azure Digital Twins REST APIs. For more information about the APIs available, see [Azure Digital Twins REST APIs documentation](/rest/api/azure-digitaltwins/).
+Azure Digital Twins administrators can use the Azure portal to delete data related to users. It's also possible to perform delete operations on individual digital twins using the Azure Digital Twins REST APIs. For more information about the APIs available, see [Azure Digital Twins REST APIs documentation](/rest/api/azure-digitaltwins/).
## Exporting customer data
Azure Digital Twins stores data related to digital twins. Users can retrieve and
Customer data, including user roles and role assignments, can be selected, copied, and pasted from the Azure portal.
-## Links to additional documentation
+## Links to more documentation
For a full list of the Azure Digital Twins service APIs, see the [Azure Digital Twins REST APIs documentation](/rest/api/azure-digitaltwins/).
digital-twins Tutorial Command Line Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-command-line-cli.md
# Mandatory fields. Title: 'Tutorial: Create a graph in Azure Digital Twins (CLI)'
-description: Tutorial to build an Azure Digital Twins scenario using the Azure CLI
+description: Tutorial that shows how to build an Azure Digital Twins scenario using the Azure CLI
Previously updated : 6/1/2021 Last updated : 12/29/2021
To work with Azure Digital Twins in this article, you first need to **set up an
Otherwise, follow the instructions in [Set up an instance and authentication](how-to-set-up-instance-cli.md#create-the-azure-digital-twins-instance). The instructions also contain steps to verify that you've completed each step successfully and are ready to move on to using your new instance. After you set up your Azure Digital Twins instance, make a note of the following values that you'll need to connect to the instance later:
-* the instance's **_host name_**
-* the **Azure subscription** that you used to create the instance.
+* The instance's **_host name_**
+* The **Azure subscription** that you used to create the instance.
You can get both of these values for your instance in the output of the following Azure CLI command:
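For example, here's a sketch using `az dt show`, which returns the instance details including the host name:

```azurecli-interactive
# Show the instance details; the host name appears in the output as hostName.
az dt show --dt-name <Azure-Digital-Twins-instance-name>
```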
Now that the CLI and Azure Digital Twins instance are set up, you can begin buil
The first step in creating an Azure Digital Twins solution is defining twin [models](concepts-models.md) for your environment.
-Models are similar to classes in object-oriented programming languages; they provide user-defined templates for [digital twins](concepts-twins-graph.md) to follow and instantiate later. They are written in a JSON-like language called **Digital Twins Definition Language (DTDL)**, and can define a twin's *properties*, *telemetry*, *relationships*, and *components*.
+Models are similar to classes in object-oriented programming languages; they provide user-defined templates for [digital twins](concepts-twins-graph.md) to follow and instantiate later. They're written in a JSON-like language called **Digital Twins Definition Language (DTDL)**, and can define a twin's *properties*, *telemetry*, *relationships*, and *components*.
> [!NOTE] > DTDL also allows for the definition of *commands* on digital twins. However, commands are not currently supported in the Azure Digital Twins service.
Navigate on your machine to the *Room.json* file that you created in the [Prereq
### Upload models to Azure Digital Twins
-After designing models, you need to upload them to your Azure Digital Twins instance. This configures your Azure Digital Twins service instance with your own custom domain vocabulary. Once you have uploaded the models, you can create twin instances that use them.
+After designing models, you need to upload them to your Azure Digital Twins instance. Doing so configures your Azure Digital Twins service instance with your own custom domain vocabulary. Once you've uploaded the models, you can create twin instances that use them.
-1. To add models using Cloud Shell, you'll need to upload your model files to Cloud Shell's storage so the files will be available when you run the Cloud Shell command that uses them. To do this, select the "Upload/Download files" icon and choose "Upload".
+1. To add models using Cloud Shell, you'll need to upload your model files to Cloud Shell's storage so the files will be available when you run the Cloud Shell command that uses them. To do so, select the "Upload/Download files" icon and choose "Upload".
:::image type="content" source="media/how-to-set-up-instance/cloud-shell/cloud-shell-upload.png" alt-text="Screenshot of Cloud Shell browser window showing selection of the Upload icon.":::
After designing models, you need to upload them to your Azure Digital Twins inst
>[!TIP] >You can also upload all models within a directory at the same time, by using the `--from-directory` option for the model create command. For more information, see [Optional parameters for az dt model create](/cli/azure/dt/model#az_dt_model_create-optional-parameters).
-1. Verify the models were created with the [az dt model list](/cli/azure/dt/model#az_dt_model_list) command as shown below. This will print a list of all models that have been uploaded to the Azure Digital Twins instance with their full information.
+1. Verify the models were created with the [az dt model list](/cli/azure/dt/model#az_dt_model_list) command as shown below. Doing so will print a list of all models that have been uploaded to the Azure Digital Twins instance with their full information.
```azurecli-interactive az dt model list --dt-name <Azure-Digital-Twins-instance-name> --definition
After designing models, you need to upload them to your Azure Digital Twins inst
The CLI also handles errors from the service.
-Re-run the `az dt model create` command to try re-uploading one of the same models you just uploaded, for a second time:
+Rerun the `az dt model create` command to try uploading one of the same models a second time:
```azurecli-interactive az dt model create --dt-name <Azure-Digital-Twins-instance-name> --models Room.json ```
-As models cannot be overwritten, this will now return an error code of `ModelIdAlreadyExists`.
+As models cannot be overwritten, running this command on the same model will now return an error code of `ModelIdAlreadyExists`.
## Create digital twins Now that some models have been uploaded to your Azure Digital Twins instance, you can create [digital twins](concepts-twins-graph.md) based on the model definitions. Digital twins represent the entities within your business environment: things like sensors on a farm, rooms in a building, or lights in a car.
-To create a digital twin, you use the [az dt twin create](/cli/azure/dt/twin#az_dt_twin_create) command. You must reference the model that the twin is based on, and can optionally define initial values for any properties in the model. You do not have to pass any relationship information at this stage.
+To create a digital twin, you use the [az dt twin create](/cli/azure/dt/twin#az_dt_twin_create) command. You must reference the model that the twin is based on, and can optionally define initial values for any properties in the model. You don't have to pass any relationship information at this stage.
-1. Run this code in the Cloud Shell to create several twins, based on the Room model you updated earlier and another model, Floor. Recall that Room has three properties, so you can provide arguments with the initial values for these. (Initializing property values is optional in general, but they're needed for this tutorial.)
+1. Run this code in the Cloud Shell to create several twins, based on the Room model you updated earlier and another model, Floor. Recall that Room has three properties, so you can provide arguments with the initial values for these properties. (Initializing property values is optional in general, but they're needed for this tutorial.)
```azurecli-interactive az dt twin create --dt-name <Azure-Digital-Twins-instance-name> --dtmi "dtmi:example:Room;2" --twin-id room0 --properties '{"RoomName":"Room0", "Temperature":70, "HumidityLevel":30}'
To create a digital twin, you use the [az dt twin create](/cli/azure/dt/twin#az_
az dt twin query --dt-name <Azure-Digital-Twins-instance-name> --query-command "SELECT * FROM DIGITALTWINS" ```
- Look for the room0, room1, floor0, and floor1 twins in the results. Here is an excerpt showing part of the result of this query.
+ Look for the room0, room1, floor0, and floor1 twins in the results. Here's an excerpt showing part of the result of this query.
:::image type="content" source="media/tutorial-command-line/cli/output-query-all.png" alt-text="Screenshot of Cloud Shell showing partial result of twin query, including room0 and room1." lightbox="media/tutorial-command-line/cli/output-query-all.png":::
You can also modify the properties of a twin you've created.
Next, you can create some **relationships** between these twins, to connect them into a [twin graph](concepts-twins-graph.md). Twin graphs are used to represent an entire environment.
-The types of relationships that you can create from one twin to another are defined within the [models](#model-a-physical-environment-with-dtdl) that you uploaded earlier. The [model definition for Floor](https://github.com/azure-Samples/digital-twins-samples/blob/master/AdtSampleApp/SampleClientApp/Models/Floor.json) specifies that floors can have a type of relationship called *contains*. This makes it possible to create a *contains*-type relationship from each Floor twin to the corresponding room that it contains.
+The types of relationships that you can create from one twin to another are defined within the [models](#model-a-physical-environment-with-dtdl) that you uploaded earlier. The [model definition for Floor](https://github.com/azure-Samples/digital-twins-samples/blob/master/AdtSampleApp/SampleClientApp/Models/Floor.json) specifies that floors can have a type of relationship called *contains*. Since the model definition specifies this relationship, it's possible to create a *contains*-type relationship from each Floor twin to the corresponding room that it contains.
To add a relationship, use the [az dt twin relationship create](/cli/azure/dt/twin/relationship#az_dt_twin_relationship_create) command. Specify the twin that the relationship is coming from, the type of relationship, and the twin that the relationship is connecting to. Lastly, give the relationship a unique ID. If a relationship was defined to have properties, you can initialize the relationship properties in this command as well.
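Here's a sketch of what that command can look like, using the floor0 and room0 twins created earlier and an arbitrary relationship ID:

```azurecli-interactive
# Create a contains-type relationship from floor0 to room0.
az dt twin relationship create --dt-name <Azure-Digital-Twins-instance-name> \
    --relationship-id relationship0 --relationship contains \
    --twin-id floor0 --target room0
```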
The twins and relationships you have set up in this tutorial form the following
## Query the twin graph to answer environment questions
-A main feature of Azure Digital Twins is the ability to [query](concepts-query-language.md) your twin graph easily and efficiently to answer questions about your environment. In the Azure CLI, this is done with the [az dt twin query](/cli/azure/dt/twin#az_dt_twin_query) command.
+A main feature of Azure Digital Twins is the ability to [query](concepts-query-language.md) your twin graph easily and efficiently to answer questions about your environment. In the Azure CLI, querying is done with the [az dt twin query](/cli/azure/dt/twin#az_dt_twin_query) command.
[!INCLUDE [digital-twins-query-latency-note.md](../../includes/digital-twins-query-latency-note.md)]
Run the following queries in the Cloud Shell to answer some questions about the
az dt twin query --dt-name <Azure-Digital-Twins-instance-name> --query-command "SELECT * FROM DIGITALTWINS" ```
- This allows you to take stock of your environment at a glance, and make sure everything is represented as you want it to be within Azure Digital Twins. The result of this is an output containing each digital twin with its details. Here is an excerpt:
+ This query allows you to take stock of your environment at a glance, and make sure everything is represented as you want it to be within Azure Digital Twins. The result of this query is an output containing each digital twin with its details. Here's an excerpt:
:::image type="content" source="media/tutorial-command-line/cli/output-query-all.png" alt-text="Screenshot of Cloud Shell showing partial result of twin query, including room0 and room1." lightbox="media/tutorial-command-line/cli/output-query-all.png":::
Run the following queries in the Cloud Shell to answer some questions about the
az dt twin query --dt-name <Azure-Digital-Twins-instance-name> --query-command "SELECT * FROM DIGITALTWINS T WHERE IS_OF_MODEL(T, 'dtmi:example:Room;2')" ```
- You can restrict your query to twins of a certain type, to get more specific information about what's represented. The result of this shows room0 and room1, but does **not** show floor0 or floor1 (since they are floors, not rooms).
+ You can restrict your query to twins of a certain type, to get more specific information about what's represented. The result of this shows room0 and room1, but does **not** show floor0 or floor1 (since they're floors, not rooms).
:::image type="content" source="media/tutorial-command-line/cli/output-query-model.png" alt-text="Screenshot of Cloud Shell showing result of model query, which includes only room0 and room1." lightbox="media/tutorial-command-line/cli/output-query-model.png":::
Run the following queries in the Cloud Shell to answer some questions about the
az dt twin query --dt-name <Azure-Digital-Twins-instance-name> --query-command "SELECT * FROM DigitalTwins T WHERE T.Temperature > 75" ```
- You can query the graph based on properties to answer a variety of questions, including finding outliers in your environment that might need attention. Other comparison operators (*<*,*>*, *=*, or *!=*) are also supported. room1 shows up in the results here, because it has a temperature of 80.
+ You can query the graph based on properties to answer different kinds of questions, including finding outliers in your environment that might need attention. Other comparison operators (*<*,*>*, *=*, or *!=*) are also supported. room1 shows up in the results here, because it has a temperature of 80.
:::image type="content" source="media/tutorial-command-line/cli/output-query-property.png" alt-text="Screenshot of Cloud Shell showing result of property query, which includes only room1." lightbox="media/tutorial-command-line/cli/output-query-property.png":::
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
To complete this tutorial, you need to:
> [!IMPORTANT] > - If your database backup files are provided in an SMB network share, [create an Azure storage account](../storage/common/storage-account-create.md) that allows the DMS service to upload the database backup files. Make sure to create the Azure storage account in the same region where your Azure Database Migration Service instance is created (see the example after this note).+
+ > - An Azure Storage account with a private endpoint is not supported by Azure Database Migration Service.
+ > - Azure Database Migration Service does not initiate any backups. Instead, it uses existing backups, which you may already have as part of your disaster recovery plan, for the migration. > - You should take [backups using the `WITH CHECKSUM` option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server). > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (for example, full and t-log) into a single backup medium is not supported.
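If you need to create that storage account with the Azure CLI, a minimal sketch looks like this (placeholder names; use the region where your Azure Database Migration Service instance is created and adjust the SKU to your needs):

```azurecli
az storage account create --name <storage-account-name> --resource-group <resource-group-name> --location <region-of-your-DMS-instance> --sku Standard_LRS
```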
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-managed-instance-online.md
To complete this tutorial, you need to:
> Regarding the storage account used as part of the migration, you must either: > * Choose to allow all networks to access the storage account. > * Turn on [subnet delegation](../virtual-network/manage-subnet-delegation.md) on the MI subnet and update the storage account firewall rules to allow this subnet.
+
+ >- An Azure Storage account with a private endpoint is not supported by Azure Database Migration Service.
* Ensure that your virtual network Network Security Group rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage and AzureMonitor. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md). * Configure your [Windows Firewall for source database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-managed-instance.md
To complete this tutorial, you need to:
> [!NOTE] > Azure Database Migration Service does not support using an account level SAS token when configuring the Storage Account settings during the [Configure Migration Settings](#configure-migration-settings) step.+
+
+ >- An Azure Storage account with a private endpoint is not supported by Azure Database Migration Service.
[!INCLUDE [resource-provider-register](../../includes/database-migration-service-resource-provider-register.md)]
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| | | | | | | | **Abu Dhabi** | Etisalat KDC | 3 | n/a | 10G | | | **Amsterdam** | [Equinix AM5](https://www.equinix.com/locations/europe-colocation/netherlands-colocation/amsterdam-data-centers/am5/) | 1 | West Europe | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, Colt, Equinix, euNetworks, GÉANT, InterCloud, Interxion, KPN, IX Reach, Level 3 Communications, Megaport, NTT Communications, Orange, Tata Communications, Telefonica, Telenor, Telia Carrier, Verizon, Zayo |
-| **Amsterdam2** | [Interxion AMS8](https://www.interxion.com/Locations/amsterdam/schiphol/) | 1 | West Europe | 10G, 100G | BICS, British Telecom, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GÉANT, Interxion, NOS, NTT Global DataCenters EMEA, Orange, Vodafone |
+| **Amsterdam2** | [Interxion AMS8](https://www.interxion.com/Locations/amsterdam/schiphol/) | 1 | West Europe | 10G, 100G | BICS, British Telecom, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GÉANT, Interxion, NL-IX, NOS, NTT Global DataCenters EMEA, Orange, Vodafone |
| **Atlanta** | [Equinix AT2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/atlanta-data-centers/at2/) | 1 | n/a | 10G, 100G | Equinix, Megaport | | **Auckland** | [Vocus Group NZ Albany](https://www.vocus.co.nz/business/cloud-data-centres) | 2 | n/a | 10G | Devoli, Kordia, Megaport, REANNZ, Spark NZ, Vocus Group NZ | | **Bangkok** | [AIS](https://business.ais.co.th/solution/en/azure-expressroute.html) | 2 | n/a | 10G | AIS, National Telecom UIH |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[Neutrona Networks](https://www.neutrona.com/index.php/azure-expressroute/)** |Supported |Supported |Dallas, Los Angeles, Miami, Sao Paulo, Washington DC | | **[Next Generation Data](https://vantage-dc-cardiff.co.uk/)** |Supported |Supported |Newport(Wales) | | **[NEXTDC](https://www.nextdc.com/services/axon-ethernet/microsoft-expressroute)** |Supported |Supported |Melbourne, Perth, Sydney, Sydney2 |
+| **NL-IX** |Supported |Supported |Amsterdam2 |
| **[NOS](https://www.nos.pt/empresas/corporate/cloud/cloud/Pages/nos-cloud-connect.aspx)** |Supported |Supported |Amsterdam2 | | **[NTT Communications](https://www.ntt.com/en/services/network/virtual-private-network.html)** |Supported |Supported |Amsterdam, Hong Kong SAR, Jakarta, London, Los Angeles, Osaka, Singapore, Sydney, Tokyo, Washington DC | | **[NTT EAST](https://business.ntt-east.co.jp/service/crossconnect/)** |Supported |Supported |Tokyo |
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/overview.md
Resources are evaluated at specific times during the resource lifecycle, the pol
lifecycle, and for regular ongoing compliance evaluation. The following are the times or events that cause a resource to be evaluated: -- A resource is created, updated, or deleted in a scope with a policy assignment.
+- A resource is created or updated in a scope with a policy assignment.
- A policy or initiative is newly assigned to a scope. - A policy or initiative already assigned to a scope is updated. - During the standard compliance evaluation cycle, which occurs once every 24 hours.
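Beyond these automatic triggers, an evaluation can also be started on demand. For example, the following Azure CLI sketch scans a single resource group for compliance (omit `--resource-group` to scan the whole subscription):

```azurecli
az policy state trigger-scan --resource-group <resource-group-name>
```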
iot-central Concepts Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-best-practices.md
To learn more about the CLI command, see [az iot central device manual-failover]
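As a quick reference, a manual failover can be triggered with a command along these lines (a sketch; fill in your own IoT Central application ID and device ID):

```azurecli
az iot central device manual-failover --app-id <iot-central-app-id> --device-id <device-id>
```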
You can now check to see that telemetry from the device is still reaching your IoT Central application.
-To see sample device code that handles failovers in various programing languages, see [IoT high availability clients](https://github.com/iot-for-all/iot-central-high-availability-clients).
+> [!TIP]
+> To see sample device code that handles failovers in various programming languages, see [IoT Central high availability clients](/samples/azure-samples/iot-central-high-availability-clients/iotc-high-availability-clients/).
## Next steps
iot-central Howto Manage Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-dashboards.md
# Create and manage dashboards
-The default *organization dashboard* is the page that loads when you first go to your application. As an administrator, you can create additional organization dashboards that are associated with a specific organization. An organization dashboard is only visible to users who have access to the organization the dashboard is associated with. Only users in a role that has [organization dashboard permissions](howto-manage-users-roles.md#customizing-the-app) can create, edit, and delete organization dashboards.
+The default *organization dashboard* is the page that loads when you first go to your application. As an administrator, you can create more organization dashboards that are associated with a specific organization. An organization dashboard is only visible to users who have access to the organization the dashboard is associated with. Only users in a role that has [organization dashboard permissions](howto-manage-users-roles.md#customizing-the-app) can create, edit, and delete organization dashboards.
> [!TIP] > You can see which organization a dashboard is associated with in the dashboard settings.
-All users can create their own *personal dashboards*. Users can switch between organization dashboards and personal dashboards.
+All users can create *personal dashboards*, visible only to themselves. Users can switch between organization and personal dashboards.
## Create a dashboard
-The following screenshot shows the dashboard in an application created from the **Custom Application** template. If you're in a role with the appropriate permissions, you can customize the default dashboard. To create a new dashboard, select **+ New dashboard** in the upper-left corner of the page:
+The following screenshot shows the dashboard in an application created from the **Custom Application** template. If you're in a role with the appropriate permissions, you can customize the default dashboard. To create a new dashboard from scratch, select **+ New dashboard** in the upper-left corner of the page. To create a new dashboard by copying the current dashboard, select **Copy**:
:::image type="content" source="media/howto-manage-dashboards/dashboard-custom-app.png" alt-text="Screenshot that shows the New dashboard button.":::
-In the **Create dashboard** panel, give your dashboard a name and select either **Organization** or **Personal** as the dashboard type. If you're creating an organization dashboard, choose the [organization](howto-create-organizations.md) the dashboard is associated with. An organization dashboard and its tiles only show the devices that are visible to the organization and any of its sub-organizations.
+In the **Create dashboard** or **Duplicate dashboard** panel, give your dashboard a name and select either **Organization** or **Personal** as the dashboard type. If you're creating an organization dashboard, choose the [organization](howto-create-organizations.md) the dashboard is associated with. An organization dashboard and its tiles only show the devices that are visible to the organization and any of its suborganizations.
After you create the dashboard, choose items from the library to add to the dashboard. The library contains the tiles and dashboard primitives you use to customize the dashboard: :::image type="content" source="media/howto-manage-dashboards/dashboard-library.png" alt-text="Screenshot that shows the dashboard library.":::
-If you're an administrator, you can create a personal dashboard or an organization dashboard. Users see the organization dashboards associated with the organization they are assigned to. All users can create personal dashboards, that only they can see.
-
-Enter a title and select the type of dashboard you want to create. [Add tiles](#add-tiles) to customize your dashboard.
+If you're an administrator, you can create a personal dashboard or an organization dashboard. Users see the organization dashboards associated with the organization they're assigned to. All users can create personal dashboards, visible only to themselves.
> [!TIP] > You need to have at least one device template in your application to be able to add tiles that show device information.
You can have several personal dashboards and switch between them or choose from
You can edit your personal dashboards and delete dashboards you don't need. If you have the correct [permissions](howto-manage-users-roles.md#customizing-the-app), you can edit or delete organization dashboards as well. To rename a dashboard or see the organization it's assigned to, select **Dashboard settings**:
To rename a dashboard or see the organization it's assigned to, select **Dashboa
## Add tiles
-The following screenshot shows the dashboard in an application created from the **Custom application** template. To customize the current dashboard, select **Edit**. To add a personal or organization dashboard, select **New dashboard**:
+The following screenshot shows the dashboard in an application created from the **Custom application** template. To customize the current dashboard, select **Edit**:
:::image type="content" source="media/howto-manage-dashboards/dashboard-sample-contoso.png" alt-text="Screenshot that shows a dashboard for applications that are based on the Custom Application template.":::
-After you select **Edit** or **New dashboard**, the dashboard is in *edit* mode. You can use the tools in the **Edit dashboard** panel to add tiles to the dashboard. You can customize and remove tiles on the dashboard itself. For example, to add a line chart tile to track telemetry values reported by one or more devices over time:
+After you select **Edit**, **New dashboard**, or **Copy**, the dashboard is in *edit* mode. You can use the tools in the **Edit dashboard** panel to add tiles to the dashboard. You can customize and remove tiles on the dashboard itself. For example, to add a line chart tile to track telemetry values reported by one or more devices over time:
1. Select **Start with a Visual**, **Line chart**, and then **Add tile**, or just drag the tile onto the canvas.
-
-1. To configure the tile, select its **gear** button. Enter a **Title** and select a **Device Group**. In the **Devices** list, select the devices to show on the tile.
- :::image type="content" source="media/howto-manage-dashboards/device-details.png" alt-text="Screenshot that shows adding a tile to a dashboard.":::
+1. To edit the tile, select its **pencil** button. Enter a **Title** and select a **Device Group**. In the **Devices** list, select the devices to show on the tile.
1. After you select all the devices to show on the tile, select **Update**.
-1. After you finish adding and customizing tiles on the dashboard, select **Save**. Doing so takes you out of edit mode.
+1. After you finish adding and customizing tiles on the dashboard, select **Save**.
## Customize tiles To edit a tile, you need to be in edit mode. The different [tile types](#tile-types) have different options for customization:
-* The **ruler** button on a tile lets you change the visualization. Visualizations include line chart, bar chart, pie chart, last known value (LKV), key performance indicator (KPI), heat map, and map.
- * The **square** button lets you resize the tile.
-* The **gear** button lets you configure the visualization. For example, for a line chart you can choose to show the legend and axes and choose the time range to plot.
+* The **pencil** button lets you edit the visualization. For example, for a line chart you can choose to show the legend and axes and choose the time range to plot.
+
+* The **copy** button lets you create a duplicate of the tile.
## Tile types
This table describes the types of tiles you can add to a dashboard:
| Tile | Description | | - | -- |
-| Markdown | Markdown tiles are clickable tiles that display a heading and description text formatted in Markdown. The URL can be a relative link to another page in the application or an absolute link to an external site.|
-| Image | Image tiles display a custom image and can be clickable. The URL can be a relative link to another page in the application or an absolute link to an external site.|
-| Label | Label tiles display custom text on a dashboard. You can choose the size of the text. Use a label tile to add relevant information to the dashboard, like descriptions, contact details, or Help.|
-| Count | Count tiles display the number of devices in a device group.|
-| Map (telemetry) | Map tiles display the location of one or more devices on a map. You can also display up to 100 points of a device's location history. For example, you can display a sampled route of where a device has been in the past week.|
-| Map (property) | Map tiles display the location of one or more devices on a map.|
-| KPI | KPI tiles display aggregate telemetry values for one or more devices over a time period. For example, you can use them to show the maximum temperature and pressure reached for one or more devices during the past hour.|
-| Line chart | Line chart tiles plot one or more aggregate telemetry values for one or more devices over a time period. For example, you can display a line chart to plot the average temperature and pressure of one or more devices during the past hour.|
-| Bar chart | Bar chart tiles plot one or more aggregate telemetry values for one or more devices over a time period. For example, you can display a bar chart to show the average temperature and pressure of one or more devices during the past hour.|
-| Pie chart | Pie chart tiles display one or more aggregate telemetry values for one or more devices over a time period.|
-| Heat map | Heat map tiles display information, represented in colors, about one or more devices.|
-| Last known value | Last known value tiles display the latest telemetry values for one or more devices. For example, you can use this tile to display the most recent temperature, pressure, and humidity values for one or more devices. |
-| Event history | Event history tiles display the events for a device over a time period. For example, you can use them to show all the valve open and valve close events for one or more devices during the past hour.|
-| Property | Property tiles display the current values for properties and cloud properties for one or more devices. For example, you can use this tile to display device properties like the manufacturer or firmware version. |
-| State chart | State chart tiles plot changes for one or more devices over a time period. For example, you can use this tile to display properties like the temperature changes for a device. |
-| Event chart | Event chart tiles display telemetry events for one or more devices over a time period. For example, you can use this tile to display properties like the temperature changes for a device. |
-| State history | State history tiles list and display status changes for state telemetry.|
-| External content | External content tiles allow you to load content from an external source. |
+| KPI | Display aggregate telemetry values for one or more devices over a time period. For example, you can use them to show the maximum temperature and pressure reached for one or more devices during the past hour.|
+| Last known value | Display the latest telemetry values for one or more devices. For example, you can use this tile to display the most recent temperature, pressure, and humidity values for one or more devices. |
+| Line chart | Plot one or more aggregate telemetry values for one or more devices over a time period. For example, you can display a line chart to plot the average temperature and pressure of one or more devices during the past hour.|
+| Bar chart | Plot one or more aggregate telemetry values for one or more devices over a time period. For example, you can display a bar chart to show the average temperature and pressure of one or more devices during the past hour.|
+| Pie chart | Display one or more aggregate telemetry values for one or more devices over a time period.|
+| Heat map | Display information, represented in colors, about one or more devices.|
+| Event history | Display the events for a device over a time period. For example, you can use them to show all the valve open and valve close events for one or more devices during the past hour.|
+| State history | List and display status changes for state telemetry.|
+| Event chart | Display telemetry events for one or more devices over a time period. For example, you can use this tile to display properties like the temperature changes for a device. |
+| State chart | Plot changes for one or more devices over a time period. For example, you can use this tile to display properties like the temperature changes for a device. |
+| Property | Display the current values for properties and cloud properties for one or more devices. For example, you can use this tile to display device properties like the manufacturer or firmware version. |
+| Map (property) | Display the location of one or more devices on a map.|
+| Map (telemetry) | Display the location of one or more devices on a map. You can also display up to 100 points of a device's location history. For example, you can display a sampled route of where a device has been in the past week.|
+| Image | Display a custom image; the tile can be clickable. The URL can be a relative link to another page in the application or an absolute link to an external site.|
+| Label | Display custom text on a dashboard. You can choose the size of the text. Use a label tile to add relevant information to the dashboard, like descriptions, contact details, or Help.|
+| Markdown | Clickable tiles that display a heading and description text formatted in Markdown. The URL can be a relative link to another page in the application or an absolute link to an external site.|
+| External content | Load content from an external source. |
+| Number of devices | Display the number of devices in a device group.|
Currently, you can add up to 10 devices to tiles that support multiple devices. ### Customize visualizations
-By default, line charts show data over a range of time. The selected time range is split into 50 equally sized partitions. The device data is then aggregated per partition to give 50 data points over the selected time range. If you want to view raw data, you can change your selection to view the last 100 values. To change the time range or to select raw data visualization, use the **Display range** dropdown list in the **Configure chart** panel:
+By default, line charts show data over a range of time. The selected time range is split into 50 equally sized partitions. The device data is then aggregated per partition to give 50 data points over the selected time range. If you want to view raw data, you can change your selection to view the last 100 values. To change the time range or to select raw data visualization, use the **Display range** dropdown in the **Configure chart** panel.
+For tiles that display aggregate values, select the **gear** button next to the telemetry type in the **Configure chart** panel to choose the aggregation. You can choose average, sum, maximum, minimum, or count:
-For tiles that display aggregate values, select the **gear** button next to the telemetry type in the **Configure chart** panel to choose the aggregation. You can choose average, sum, maximum, minimum, or count.
For line charts, bar charts, and pie charts, you can customize the colors of the various telemetry values. Select the **palette** button next to the telemetry you want to customize:
For line charts, bar charts, and pie charts, you can customize the colors of the
For tiles that show string properties or telemetry values, you can choose how to display the text. For example, if the device stores a URL in a string property, you can display it as a clickable link. If the URL references an image, you can render the image in a last known value or property tile. To change how a string displays, select the **gear** button next to the telemetry type or property in the tile configuration. -
-For numeric KPI, LKV, and property tiles, you can use conditional formatting to customize the color of the tile based on its value. To add conditional formatting, select **Configure** on the tile and then select the **Conditional formatting** button next to the value you want to customize:
-
+For numeric KPI, LKV, and property tiles, you can use conditional formatting to customize the color of the tile based on its value. To add conditional formatting, select **Configure** on the tile and then select the **Conditional formatting** button next to the value you want to customize.
Next, add your conditional formatting rules:
The following screenshot shows the effect of those conditional formatting rules:
### Tile formatting
-This feature is available on the KPI, LKV, and property tiles. It lets you adjust font size, choose decimal precision, abbreviate numeric values (for example, format 1,700 as 1.7K), or wrap string values on their tiles.
+This feature is available on the KPI, LKV, and property tiles. It lets you adjust font size, choose decimal precision, abbreviate numeric values (for example, format 1,700 as 1.7 K), or wrap string values on their tiles.
:::image type="content" source="media/howto-manage-dashboards/tile-format.png" alt-text="Screenshot that shows the dialog box for tile formatting.":::
iot-edge Reference Iot Edge For Linux On Windows Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/reference-iot-edge-for-linux-on-windows-functions.md
The **Provision-EflowVm** command adds the provisioning information for your IoT
| registrationId | The registration ID of an existing IoT Edge device | Registration ID for provisioning an IoT Edge device (**DpsSymmetricKey**). | | identityCertPath | Directory path | Absolute destination path of the identity certificate on your Windows host machine (**ManualX509**, **DpsX509**). | | identityPrivKeyPath | Directory path | Absolute source path of the identity private key on your Windows host machine (**ManualX509**, **DpsX509**). |
+| globalEndpoint | Device Endpoint URL | URL for Global Endpoint to be used for DPS provisioning. |
For more information, use the command `Get-Help Provision-EflowVm -full`.
iot-hub Iot Hub Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-public-network-access.md
If you have trouble accessing your IoT hub, your network configuration could be
Unable to retrieve devices. Please ensure that your network connection is online and network settings allow connections from your IP address. ```
+When trying to access your IoT hub with other tools, such as the Azure CLI, the error message may include `{"errorCode": 401002, "message": "Unauthorized"}` if the request isn't routed correctly to your IoT hub.
+ To get access to the IoT hub, request permission from your IT administrator to add your IP address in the IP address range or to enable public network access to all networks. If that fails to resolve the issue, check your local network settings or contact your local network administrator to fix connectivity to the IoT Hub. For example, sometimes a proxy in the local network can interfere with access to IoT Hub. If the preceding commands do not work or you cannot turn on all network ranges, contact Microsoft support.
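If your administrator agrees to re-enable public access and you have permission to change the IoT hub, one option is a generic update through the Azure CLI (a sketch, not the only approach):

```azurecli
az iot hub update --name <iot-hub-name> --resource-group <resource-group-name> --set properties.publicNetworkAccess=Enabled
```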
key-vault How To Configure Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/how-to-configure-key-rotation.md
Last updated 11/24/2021
# Configure key auto-rotation in Azure Key Vault (preview)
+> [!IMPORTANT]
+> This feature is currently disabled due to an issue with the service.
## Overview
key-vault Overview Storage Keys Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/secrets/overview-storage-keys-powershell.md
The commands in this section complete the following actions:
- Create an account shared access signature token for Blob, File, Table, and Queue services. The token is created for resource types Service, Container, and Object. The token is created with all permissions, over https, and with the specified start and end dates. - Set a Key Vault managed storage shared access signature definition in the vault. The definition has the template URI of the shared access signature token that was created. The definition has the shared access signature type `account` and is valid for N days. - Verify that the shared access signature was saved in your key vault as a secret.--+ ### Set variables First, set the variables to be used by the PowerShell cmdlets in the following steps. Be sure to update the \<YourStorageAccountName\> and \<YourKeyVaultName\> placeholders.
lab-services Class Type Ethical Hacking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/class-type-ethical-hacking.md
Kali is a Linux distribution that includes tools for penetration testing and sec
1. On the **Specify Generation** page, accept the defaults, and select **Next**. 1. On the **Assign Memory** page, enter **2048 MB** for the **startup memory**, and select **Next**. 1. On the **Configure Networking** page, leave the connection as **Not Connected**. You'll set up the network adapter later.
- 1. On the **Connect Virtual Hard Disk** page, select **Use an existing virtual hard disk**. Browse to the location for the **Kali-Linux-{version}-vmware-amd64.vmdk** file created in the previous step, and select **Next**.
+ 1. On the **Connect Virtual Hard Disk** page, select **Use an existing virtual hard disk**. Browse to the location for the **Kali-Linux-{version}-vmware-amd64.vhdx** file created in the previous step, and select **Next**.
1. On the **Completing the New Virtual Machine Wizard** page, and select **Finish**. 1. Once the virtual machine is created, select it in the Hyper-V Manager. Don't turn on the machine yet. 1. Choose **Action** -> **Settings**.
Kali is a Linux distribution that includes tools for penetration testing and sec
1. On the **Legacy Network Adapter** page, select **LabServicesSwitch** for the **Virtual Switch** setting, and select **OK**. LabServicesSwitch was created when preparing the template machine for Hyper-V in the **Prepare Template for Nested Virtualization** section. 1. The Kali-Linux image is now ready for use. From **Hyper-V Manager**, choose **Action** -> **Start**, then choose **Action** -> **Connect** to connect to the virtual machine. The default username is **kali** and the password is **kali**.
-## Set up a nested VM with Metasploitable Image
+### Set up a nested VM with Metasploitable Image
The Rapid7 Metasploitable image is an image purposely configured with security vulnerabilities. You'll use this image to test and find issues. The following instructions show you how to use a pre-created Metasploitable image. However, if a newer version of the Metasploitable image is needed, see [https://github.com/rapid7/metasploitable3](https://github.com/rapid7/metasploitable3).
lab-services How To Enable Nested Virtualization Template Vm Using Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/how-to-enable-nested-virtualization-template-vm-using-script.md
The steps in this article focus on setting up nested virtualization for Windows
1. When the **Trusted sites** dialog appears, add `https://github.com` to the trusted websites list, and select **Close**. ![Trusted sites](./media/how-to-enable-nested-virtualization-template-vm-using-script/trusted-sites-dialog.png)+ 1. Download the Git repository files as outlined in the following steps.
- 1. Go to [https://github.com/Azure/azure-devtestlab/](https://github.com/Azure/azure-devtestlab/).
+ 1. Go to https://github.com/Azure/azure-devtestlab/archive/refs/heads/master.zip or [https://github.com/Azure/azure-devtestlab/](https://github.com/Azure/azure-devtestlab/).
1. Click the **Clone or Download** button. 1. Click **Download ZIP**. 1. Extract the ZIP file
Next steps are common to setting up any lab.
- [Add users](tutorial-setup-classroom-lab.md#add-users-to-the-lab) - [Set quota](how-to-configure-student-usage.md#set-quotas-for-users) - [Set a schedule](tutorial-setup-classroom-lab.md#set-a-schedule-for-the-lab)-- [Email registration links to students](how-to-configure-student-usage.md#send-invitations-to-users)
+- [Email registration links to students](how-to-configure-student-usage.md#send-invitations-to-users)
machine-learning Concept Ml Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-ml-pipelines.md
Previously updated : 10/21/2021 Last updated : 01/15/2022
In this article, you learn how a machine learning pipeline helps you build, opti
<a name="compare"></a> ## Which Azure pipeline technology should I use?
-The Azure cloud provides several types of pipeline, each with a different purpose. The following table lists the different pipelines and what they are used for:
+The Azure cloud provides several types of pipeline, each with a different purpose. The following table lists the different pipelines and what they're used for:
| Scenario | Primary persona | Azure offering | OSS offering | Canonical pipe | Strengths | | -- | | -- | | -- | |
The Azure cloud provides several types of pipeline, each with a different purpos
An Azure Machine Learning pipeline is an independently executable workflow of a complete machine learning task. Subtasks are encapsulated as a series of steps within the pipeline. An Azure Machine Learning pipeline can be as simple as one that calls a Python script, so _may_ do just about anything. Pipelines _should_ focus on machine learning tasks such as:
-+ Data preparation including importing, validating and cleaning, munging and transformation, normalization, and staging
-+ Training configuration including parameterizing arguments, filepaths, and logging / reporting configurations
-+ Training and validating efficiently and repeatedly. Efficiency might come from specifying specific data subsets, different hardware compute resources, distributed processing, and progress monitoring
-+ Deployment, including versioning, scaling, provisioning, and access control
++ Data preparation
++ Training configuration
++ Efficient training and validation
++ Repeatable deployments
-Independent steps allow multiple data scientists to work on the same pipeline at the same time without over-taxing compute resources. Separate steps also make it easy to use different compute types/sizes for each step.
+Time-consuming steps need to rerun only when their inputs change. For example, an updated training script can be rerun without repeating the data loading and preparation steps. Each step can use a different compute type or size. Independent steps allow multiple data scientists to work on the same pipeline at the same time without over-taxing compute resources.
-After the pipeline is designed, there is often more fine-tuning around the training loop of the pipeline. When you rerun a pipeline, the run jumps to the steps that need to be rerun, such as an updated training script. Steps that do not need to be rerun are skipped.
-
-With pipelines, you may choose to use different hardware for different tasks. Azure coordinates the various [compute targets](concept-azure-machine-learning-architecture.md) you use, so your intermediate data seamlessly flows to downstream compute targets.
+## Key advantages
-You can [track the metrics for your pipeline experiments](./how-to-log-view-metrics.md) directly in Azure portal or your [workspace landing page (preview)](https://ml.azure.com). After a pipeline has been published, you can configure a REST endpoint, which allows you to rerun the pipeline from any platform or stack.
+The key advantages of using pipelines for your machine learning workflows are:
-In short, all of the complex tasks of the machine learning lifecycle can be helped with pipelines. Other Azure pipeline technologies have their own strengths. [Azure Data Factory pipelines](../data-factory/concepts-pipelines-activities.md) excels at working with data and [Azure Pipelines](https://azure.microsoft.com/services/devops/pipelines/) is the right tool for continuous integration and deployment. But if your focus is machine learning, Azure Machine Learning pipelines are likely to be the best choice for your workflow needs.
+|Key advantage|Description|
+|:-:|--|
+|**Unattended&nbsp;runs**|Schedule steps to run in parallel or in sequence in a reliable and unattended manner. Data preparation and modeling can last days or weeks, and pipelines allow you to focus on other tasks while the process is running. |
+|**Heterogenous compute**|Use multiple pipelines that are reliably coordinated across heterogeneous and scalable compute resources and storage locations. Make efficient use of available compute resources by running individual pipeline steps on different compute targets, such as HDInsight, GPU Data Science VMs, and Databricks.|
+|**Reusability**|Create pipeline templates for specific scenarios, such as retraining and batch-scoring. Trigger published pipelines from external systems via simple REST calls.|
+|**Tracking and versioning**|Instead of manually tracking data and result paths as you iterate, use the pipelines SDK to explicitly name and version your data sources, inputs, and outputs. You can also manage scripts and data separately for increased productivity.|
+| **Modularity** | Separating areas of concerns and isolating changes allows software to evolve at a faster rate with higher quality. |
+|**Collaboration**|Pipelines allow data scientists to collaborate across all areas of the machine learning design process, while being able to concurrently work on pipeline steps.|
### Analyzing dependencies
-Many programming ecosystems have tools that orchestrate resource, library, or compilation dependencies. Generally, these tools use file timestamps to calculate dependencies. When a file is changed, only it and its dependents are updated (downloaded, recompiled, or packaged). Azure Machine Learning pipelines extend this concept. Like traditional build tools, pipelines calculate dependencies between steps and only perform the necessary recalculations.
-
-The dependency analysis in Azure Machine Learning pipelines is more sophisticated than simple timestamps though. Every step may run in a different hardware and software environment. Data preparation might be a time-consuming process but not need to run on hardware with powerful GPUs, certain steps might require OS-specific software, you might want to use [distributed training](how-to-train-distributed-gpu.md), and so forth.
-
-Azure Machine Learning automatically orchestrates all of the dependencies between pipeline steps. This orchestration might include spinning up and down Docker images, attaching and detaching compute resources, and moving data between the steps in a consistent and automatic manner.
+The dependency analysis in Azure Machine Learning pipelines is more sophisticated than simple timestamps. Every step may run in a different hardware and software environment. Azure Machine Learning automatically orchestrates all of the dependencies between pipeline steps. This orchestration might include spinning up and down Docker images, attaching and detaching compute resources, and moving data between the steps in a consistent and automatic manner.
### Coordinating the steps involved
When you create and run a `Pipeline` object, the following high-level steps occu
![Pipeline steps](./media/concept-ml-pipelines/run_an_experiment_as_a_pipeline.png)
-## Building pipelines with the Python SDK
-
-In the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install), a pipeline is a Python object defined in the `azureml.pipeline.core` module. A [Pipeline](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline%28class%29) object contains an ordered sequence of one or more [PipelineStep](/python/api/azureml-pipeline-core/azureml.pipeline.core.builder.pipelinestep) objects. The `PipelineStep` class is abstract and the actual steps will be of subclasses such as [EstimatorStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.estimatorstep), [PythonScriptStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.pythonscriptstep), or [DataTransferStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.datatransferstep). The [ModuleStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.modulestep) class holds a reusable sequence of steps that can be shared among pipelines. A `Pipeline` runs as part of an `Experiment`.
-
-An Azure machine learning pipeline is associated with an Azure Machine Learning workspace and a pipeline step is associated with a compute target available within that workspace. For more information, see [Create and manage Azure Machine Learning workspaces in the Azure portal](./how-to-manage-workspace.md) or [What are compute targets in Azure Machine Learning?](./concept-compute-target.md).
-
-### A simple Python Pipeline
-
-This snippet shows the objects and calls needed to create and run a `Pipeline`:
-
-```python
-ws = Workspace.from_config()
-blob_store = Datastore(ws, "workspaceblobstore")
-compute_target = ws.compute_targets["STANDARD_NC6"]
-experiment = Experiment(ws, 'MyExperiment')
-
-input_data = Dataset.File.from_files(
- DataPath(datastore, '20newsgroups/20news.pkl'))
-prepped_data_path = OutputFileDatasetConfig(name="output_path")
-
-dataprep_step = PythonScriptStep(
- name="prep_data",
- script_name="dataprep.py",
- source_directory="prep_src",
- compute_target=compute_target,
- arguments=["--prepped_data_path", prepped_data_path],
- inputs=[input_dataset.as_named_input('raw_data').as_mount() ]
- )
-
-prepped_data = prepped_data_path.read_delimited_files()
-
-train_step = PythonScriptStep(
- name="train",
- script_name="train.py",
- compute_target=compute_target,
- arguments=["--prepped_data", prepped_data],
- source_directory="train_src"
-)
-steps = [ dataprep_step, train_step ]
-
-pipeline = Pipeline(workspace=ws, steps=steps)
-
-pipeline_run = experiment.submit(pipeline)
-pipeline_run.wait_for_completion()
-```
-
-The snippet starts with common Azure Machine Learning objects, a `Workspace`, a `Datastore`, a [ComputeTarget](/python/api/azureml-core/azureml.core.computetarget), and an `Experiment`. Then, the code creates the objects to hold `input_data` and `prepped_data_path`. The `input_data` is an instance of [FileDataset](/python/api/azureml-core/azureml.data.filedataset) and the `prepped_data_path` is an instance of [OutputFileDatasetConfig](/python/api/azureml-core/azureml.data.output_dataset_config.outputfiledatasetconfig). For `OutputFileDatasetConfig` the default behavior is to copy the output to the `workspaceblobstore` datastore under the path `/dataset/{run-id}/{output-name}`, where `run-id` is the Run's ID and `output-name` is an autogenerated value if not specified by the developer.
-
-The data preparation code (not shown), writes delimited files to `prepped_data_path`. These outputs from the data preparation step are passed as `prepped_data` to the training step.
-
-The array `steps` holds the two `PythonScriptStep`s, `dataprep_step` and `train_step`. Azure Machine Learning will analyze the data dependency of `prepped_data` and run `dataprep_step` before `train_step`.
-
-Then, the code instantiates the `Pipeline` object itself, passing in the workspace and steps array. The call to `experiment.submit(pipeline)` begins the Azure ML pipeline run. The call to `wait_for_completion()` blocks until the pipeline is finished.
-
-To learn more about connecting your pipeline to your data, see the articles [Data access in Azure Machine Learning](concept-data.md) and [Moving data into and between ML pipeline steps (Python)](how-to-move-data-in-out-of-pipelines.md).
-
-## Building pipelines with the designer
-
-Developers who prefer a visual design surface can use the Azure Machine Learning designer to create pipelines. You can access this tool from the **Designer** selection on the homepage of your workspace. The designer allows you to drag and drop steps onto the design surface.
-
-When you visually design pipelines, the inputs and outputs of a step are displayed visibly. You can drag and drop data connections, allowing you to quickly understand and modify the dataflow of your pipeline.
-
-![Azure Machine Learning designer example](./media/concept-designer/designer-drag-and-drop.gif)
-
-## Key advantages
-
-The key advantages of using pipelines for your machine learning workflows are:
-
-|Key advantage|Description|
-|:-:|--|
-|**Unattended&nbsp;runs**|Schedule steps to run in parallel or in sequence in a reliable and unattended manner. Data preparation and modeling can last days or weeks, and pipelines allow you to focus on other tasks while the process is running. |
-|**Heterogenous compute**|Use multiple pipelines that are reliably coordinated across heterogeneous and scalable compute resources and storage locations. Make efficient use of available compute resources by running individual pipeline steps on different compute targets, such as HDInsight, GPU Data Science VMs, and Databricks.|
-|**Reusability**|Create pipeline templates for specific scenarios, such as retraining and batch-scoring. Trigger published pipelines from external systems via simple REST calls.|
-|**Tracking and versioning**|Instead of manually tracking data and result paths as you iterate, use the pipelines SDK to explicitly name and version your data sources, inputs, and outputs. You can also manage scripts and data separately for increased productivity.|
-| **Modularity** | Separating areas of concerns and isolating changes allows software to evolve at a faster rate with higher quality. |
-|**Collaboration**|Pipelines allow data scientists to collaborate across all areas of the machine learning design process, while being able to concurrently work on pipeline steps.|
- ## Next steps
-Azure Machine Learning pipelines are a powerful facility that begins delivering value in the early development stages. The value increases as the team and project grows. This article has explained how pipelines are specified with the Azure Machine Learning Python SDK and orchestrated on Azure. You've seen some simple source code and been introduced to a few of the `PipelineStep` classes that are available. You should have a sense of when to use Azure Machine Learning pipelines and how Azure runs them.
-
-+ Learn how to [create your first pipeline](./how-to-create-machine-learning-pipelines.md).
-
-+ Learn how to [run batch predictions on large data](tutorial-pipeline-batch-scoring-classification.md ).
+Azure Machine Learning pipelines are a powerful facility that begins delivering value in the early development stages.
++ [Define pipelines with the Azure CLI](./how-to-train-cli.md#hello-pipelines)
++ [Define pipelines with the Azure SDK](./how-to-create-machine-learning-pipelines.md)
++ [Define pipelines with Designer](./tutorial-designer-automobile-price-train-score.md)
+ See the SDK reference docs for [pipeline core](/python/api/azureml-pipeline-core/) and [pipeline steps](/python/api/azureml-pipeline-steps/).
-
+ Try out example Jupyter notebooks showcasing [Azure Machine Learning pipelines](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines). Learn how to [run notebooks to explore this service](samples-notebooks.md).
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-private-link.md
Previously updated : 01/05/2022 Last updated : 01/10/2022 # Configure a private endpoint for an Azure Machine Learning workspace
Azure Private Link enables you to connect to your workspace using a private endp
## Prerequisites - * You must have an existing virtual network to create the private endpoint in. * [Disable network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md) before adding the private endpoint.
ws = Workspace.create(name='myworkspace',
show_output=True) ```
-# [Azure CLI](#tab/azure-cli)
+# [Azure CLI extension 2.0 preview](#tab/azurecliextensionv2)
+
+When using the Azure CLI [extension 2.0 CLI preview for machine learning](how-to-configure-cli.md), a YAML document is used to configure the workspace. The following is an example of creating a new workspace using a YAML configuration:
+
+> [!TIP]
+> When using private link, your workspace cannot use Azure Container Registry tasks compute for image building. The `image_build_compute` property in this configuration specifies a CPU compute cluster name to use for Docker image environment building. You can also specify whether the private link workspace should be accessible over the internet using the `public_network_access` property.
+>
+> In this example, the compute referenced by `image_build_compute` will need to be created before building images.
++
+```azurecli-interactive
+az ml workspace create \
+ -g <resource-group-name> \
+ --file privatelink.yml
+```
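Before any environment images are built in the new workspace, the `cpu-compute` cluster referenced by `image_build_compute` must exist. The following sketch creates it with the 2.0 CLI extension; the VM size and scale settings are illustrative, so adjust them to your needs:

```azurecli-interactive
az ml compute create --name cpu-compute --type AmlCompute --size Standard_DS3_v2 --min-instances 0 --max-instances 2 --resource-group <resource-group-name> --workspace-name <workspace-name>
```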
+
+After creating the workspace, use the [Azure networking CLI commands](/cli/azure/network/private-endpoint#az_network_private_endpoint_create) to create a private link endpoint for the workspace.
+
+```azurecli-interactive
+az network private-endpoint create \
+ --name <private-endpoint-name> \
+ --vnet-name <vnet-name> \
+ --subnet <subnet-name> \
+ --private-connection-resource-id "/subscriptions/<subscription>/resourceGroups/<resource-group-name>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>" \
+ --group-id amlworkspace \
+ --connection-name workspace -l <location>
+```
+
+To create the private DNS zone entries for the workspace, use the following commands:
+
+```azurecli-interactive
+# Add privatelink.api.azureml.ms
+az network private-dns zone create \
+ -g <resource-group-name> \
+ --name 'privatelink.api.azureml.ms'
+
+az network private-dns link vnet create \
+ -g <resource-group-name> \
+ --zone-name 'privatelink.api.azureml.ms' \
+ --name <link-name> \
+ --virtual-network <vnet-name> \
+ --registration-enabled false
+
+az network private-endpoint dns-zone-group create \
+ -g <resource-group-name> \
+ --endpoint-name <private-endpoint-name> \
+ --name myzonegroup \
+ --private-dns-zone 'privatelink.api.azureml.ms' \
+ --zone-name 'privatelink.api.azureml.ms'
+
+# Add privatelink.notebooks.azure.net
+az network private-dns zone create \
+ -g <resource-group-name> \
+ --name 'privatelink.notebooks.azure.net'
+
+az network private-dns link vnet create \
+ -g <resource-group-name> \
+ --zone-name 'privatelink.notebooks.azure.net' \
+ --name <link-name> \
+ --virtual-network <vnet-name> \
+ --registration-enabled false
+
+az network private-endpoint dns-zone-group add \
+ -g <resource-group-name> \
+ --endpoint-name <private-endpoint-name> \
+ --name myzonegroup \
+ --private-dns-zone 'privatelink.notebooks.azure.net' \
+ --zone-name 'privatelink.notebooks.azure.net'
+```
+
+# [Azure CLI extension 1.0](#tab/azurecliextensionv1)
-The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace create](/cli/azure/ml/workspace#az_ml_workspace_create) command. The following parameters for this command can be used to create a workspace with a private network, but it requires an existing virtual network:
+If you are using the Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learning-cli.md), use the [az ml workspace create](/cli/azure/ml/workspace#az_ml_workspace_create) command. The following parameters for this command can be used to create a workspace with a private network, but it requires an existing virtual network:
* `--pe-name`: The name of the private endpoint that is created. * `--pe-auto-approval`: Whether private endpoint connections to the workspace should be automatically approved.
ws.add_private_endpoint(private_endpoint_config=pe, private_endpoint_auto_approv
For more information on the classes and methods used in this example, see [PrivateEndpointConfig](/python/api/azureml-core/azureml.core.privateendpointconfig) and [Workspace.add_private_endpoint](/python/api/azureml-core/azureml.core.workspace(class)#add-private-endpoint-private-endpoint-config--private-endpoint-auto-approval-true--location-none--show-output-true--tags-none-).
-# [Azure CLI](#tab/azure-cli)
+# [Azure CLI extension 2.0 preview](#tab/azurecliextensionv2)
+
+When using the Azure CLI [extension 2.0 CLI preview for machine learning](how-to-configure-cli.md), use the [Azure networking CLI commands](/cli/azure/network/private-endpoint#az_network_private_endpoint_create) to create a private link endpoint for the workspace.
+
+```azurecli-interactive
+az network private-endpoint create \
+ --name <private-endpoint-name> \
+ --vnet-name <vnet-name> \
+ --subnet <subnet-name> \
+ --private-connection-resource-id "/subscriptions/<subscription>/resourceGroups/<resource-group-name>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>" \
+ --group-id amlworkspace \
+ --connection-name workspace -l <location>
+```
+
+To create the private DNS zone entries for the workspace, use the following commands:
+
+```azurecli-interactive
+# Add privatelink.api.azureml.ms
+az network private-dns zone create \
+ -g <resource-group-name> \
+ --name 'privatelink.api.azureml.ms'
+
+az network private-dns link vnet create \
+ -g <resource-group-name> \
+ --zone-name 'privatelink.api.azureml.ms' \
+ --name <link-name> \
+ --virtual-network <vnet-name> \
+ --registration-enabled false
+
+az network private-endpoint dns-zone-group create \
+ -g <resource-group-name> \
+ --endpoint-name <private-endpoint-name> \
+ --name myzonegroup \
+ --private-dns-zone 'privatelink.api.azureml.ms' \
+ --zone-name 'privatelink.api.azureml.ms'
+
+# Add privatelink.notebooks.azure.net
+az network private-dns zone create \
+ -g <resource-group-name> \
+ --name 'privatelink.notebooks.azure.net'
+
+az network private-dns link vnet create \
+ -g <resource-group-name> \
+ --zone-name 'privatelink.notebooks.azure.net' \
+ --name <link-name> \
+ --virtual-network <vnet-name> \
+ --registration-enabled false
+
+az network private-endpoint dns-zone-group add \
+ -g <resource-group-name> \
+ --endpoint-name <private-endpoint-name> \
+ --name myzonegroup \
+ --private-dns-zone 'privatelink.notebooks.azure.net' \
+ --zone-name 'privatelink.notebooks.azure.net'
+```
+
+# [Azure CLI extension 1.0](#tab/azurecliextensionv1)
The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace private-endpoint add](/cli/azure/ml(v1)/workspace/private-endpoint#az_ml_workspace_private_endpoint_add) command.
ws = Workspace.from_config()
_, _, connection_name = ws.get_details()['privateEndpointConnections'][0]['id'].rpartition('/') ws.delete_private_endpoint_connection(private_endpoint_connection_name=connection_name) ```
+# [Azure CLI extension 2.0 preview](#tab/azurecliextensionv2)
+
+When using the Azure CLI [extension 2.0 CLI preview for machine learning](how-to-configure-cli.md), use the following command to remove the private endpoint:
+
+```azurecli
+az network private-endpoint delete \
+ --name <private-endpoint-name> \
+  --resource-group <resource-group-name>
+```
-# [Azure CLI](#tab/azure-cli)
+# [Azure CLI extension 1.0](#tab/azurecliextensionv1)
The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace private-endpoint delete](/cli/azure/ml(v1)/workspace/private-endpoint#az_ml_workspace_private_endpoint_delete) command.
ws = Workspace.from_config()
ws.update(allow_public_access_when_behind_vnet=True)
```
-# [Azure CLI](#tab/azure-cli)
+# [Azure CLI extension 2.0 preview](#tab/azurecliextensionv2)
+
+When using the Azure CLI [extension 2.0 preview for machine learning](how-to-configure-cli.md), create a YAML document that sets the `public_network_access` property to `Enabled`. Then use the `az ml workspace update` command to update the workspace:
+
+```yml
+$schema: https://azuremlschemas.azureedge.net/latest/workspace.schema.json
+name: mlw-privatelink-prod
+location: eastus
+display_name: Private Link endpoint workspace-example
+description: When using private link, you must set the image_build_compute property to a cluster name to use for Docker image environment building. You can also specify whether the workspace should be accessible over the internet.
+image_build_compute: cpu-compute
+public_network_access: Enabled
+tags:
+ purpose: demonstration
+```
+
+```azurecli
+az ml workspace update \
+ -n <workspace-name> \
+  -f workspace.yml \
+ -g <resource-group-name>
+```
+
+# [Azure CLI extension 1.0](#tab/azurecliextensionv1)
The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml/workspace#az_ml_workspace_update) command. To enable public access to the workspace, add the parameter `--allow-public-access true`.
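For instance, a minimal sketch of that update (only `--allow-public-access` comes from the sentence above; the `-w` and `-g` workspace and resource group parameters are assumptions based on the usual extension 1.0 conventions):

```azurecli
# Hypothetical sketch: allow public access to a Private Link-enabled workspace (CLI extension 1.0).
# Only --allow-public-access is taken from the text above; -w and -g are assumptions.
az ml workspace update \
    -w <workspace-name> \
    -g <resource-group-name> \
    --allow-public-access true
```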
machine-learning Migrate Rebuild Experiment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/migrate-rebuild-experiment.md
In this article, you learn how to rebuild an ML Studio (classic) experiment in A
Studio (classic) **experiments** are similar to **pipelines** in Azure Machine Learning. However, in Azure Machine Learning, pipelines are built on the same back end that powers the SDK. This means that you have two options for machine learning development: the drag-and-drop designer or the code-first SDKs.
-For more information on building pipelines with the SDK, see [What are Azure Machine Learning pipelines](concept-ml-pipelines.md#building-pipelines-with-the-python-sdk).
+For more information on building pipelines with the SDK, see [What are Azure Machine Learning pipelines](concept-ml-pipelines.md).
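For orientation, here is a minimal sketch of the code-first option with the v1 Python SDK. It assumes `azureml-core` and the pipeline packages are installed, a local `config.json` for the workspace, an existing compute target named `cpu-compute`, and a `train.py` script; all of these names are placeholders, not part of the original article.

```python
from azureml.core import Experiment, Workspace
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

# Connect to the workspace described by a local config.json file (assumed to exist).
ws = Workspace.from_config()

# Wrap an existing training script as a single pipeline step.
train_step = PythonScriptStep(
    name="train",
    script_name="train.py",
    source_directory=".",
    compute_target="cpu-compute",  # placeholder compute target name
)

# Assemble the pipeline and submit it as an experiment run.
pipeline = Pipeline(workspace=ws, steps=[train_step])
run = Experiment(ws, "rebuilt-studio-classic-experiment").submit(pipeline)
run.wait_for_completion(show_output=True)
```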
## Prerequisites
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/resource-curated-environments.md
ENV LD_LIBRARY_PATH $AZUREML_CONDA_ENVIRONMENT_PATH/lib:$LD_LIBRARY_PATH
### TensorFlow **Name**: AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11-gpu
-**Description**: An environment for deep learning with Tensorflow containing the AzureML Python SDK and additional python packages.
+**Description**: An environment for deep learning with TensorFlow containing the AzureML Python SDK and additional Python packages.
The following Dockerfile can be customized for your personal workflows.
marketplace Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/anomaly-detection.md
To help ensure that your customers are billed correctly, use the **Anomaly detec
## View and manage metered usage anomalies -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home). 1. On the Home page, select the **Insights** tile.
To help ensure that your customers are billed correctly, use the **Anomaly detec
[![Illustrates the Mark as an anomaly dialog box.](./media/anomaly-detection/mark-as-anomaly-workspaces.png)](./media/anomaly-detection/mark-as-anomaly-workspaces.png#lightbox)<br> ***Figure 6: Mark as anomaly dialog box***
-#### [Current view](#tab/current-view)
-
-1. Sign-in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. In the left-navigation menu, select **Commercial Marketplace** > **Analyze** > **Usage**.
-1. Select the **Metered usage anomalies** tab.
-
- [![Illustrates the Metered usage anomalies tab on the Usage page.](./media/anomaly-detection/metered-usage-anomalies.png)](./media/anomaly-detection/metered-usage-anomalies.png#lightbox)<br>
- ***Figure 1: Metered usage anomalies tab***
-
-1. For any usage anomalies detected against metered billing, as a publisher you will be asked to investigate and confirm if the anomaly is true or not. Select **Mark as anomaly** to confirm the diagnosis.
-
- [![Illustrates the Mark as anomaly dialog box.](./media/anomaly-detection/mark-as-anomaly.png)](./media/anomaly-detection/mark-as-anomaly.png#lightbox)<br>
- ***Figure 2: Mark as an anomaly dialog box***
-
-1. If you believe that the overage usage anomaly we detected is not genuine, you can provide that feedback by selecting **Not an anomaly** for the Partner Center flagged anomaly on the particular overage usage.
-
- [![Illustrates the Why is it not an anomaly dialog box.](./media/anomaly-detection/why-is-it-not-an-anomaly.png)](./media/anomaly-detection/why-is-it-not-an-anomaly.png#lightbox)
- ***Figure 3: Why is it not an anomaly? dialog box***
-
-1. You can scroll down the page to see an inventory list of unacknowledged anomalies. The list provides an inventory of anomalies that you have not acknowledged. You can choose to mark any of the Partner Center flagged anomalies as genuine or false.
-
- [![Illustrates the Partner Center unacknowledged anomalies list on the Usage page.](./media/anomaly-detection/unacknowledged-anomalies.png)](./media/anomaly-detection/unacknowledged-anomalies.png#lightbox)<br>
- ***Figure 4: Partner Center unacknowledged anomalies list***
-
- By default, flagged anomalies that have an estimated financial impact greater than 100 USD are shown in Partner Center. However, you can select **All** from the **Estimated financial impact of anomaly** list to see all flagged anomalies.
-
- :::image type="content" source="./media/anomaly-detection/all-anomalies.png" alt-text="Screenshot of all metered usage anomalies for the selected offer.":::
-
-1. You would also see an anomaly action log that shows the actions you took on the overage usages. In the action log, you will be able to see which overage usage events were marked as genuine or false.
-
- [![Illustrates the Anomaly action log on the Usage page.](./media/anomaly-detection/anomaly-action-log.png)](./media/anomaly-detection/anomaly-action-log.png#lightbox)<br>
- ***Figure 5: Anomaly action log***
-
-1. Partner Center analytics will not support restatement of overage usage events in the export reports. Partner Center lets you enter the corrected overage usage for an anomaly and the details are passed on to Microsoft teams for investigation. Based on the investigation, Microsoft will issue credit refunds to the overcharged customer, as appropriate. When you select any of the flagged anomalies, you can select **Mark as anomaly** to mark the usage overage anomaly as genuine.
-
- [![Illustrates the Mark as an anomaly dialog box.](./media/anomaly-detection/new-reported-usage.png)](./media/anomaly-detection/new-reported-usage.png#lightbox)<br>
- ***Figure: 6: Mark as anomaly dialog box***
--- The first time an overage usage is flagged as irregular in Partner Center, you will get a window of 30 days from that instance to mark the anomaly as genuine or false. After the 30-day period, as the publisher you would not be able to act on the anomalies. > [!IMPORTANT]
After you mark an overage usage as an anomaly or acknowledge a model that flagge
> [!IMPORTANT] > You can re-submit overage usages in the event of overcharge situations. -- ## See also - [Metered billing for SaaS using the commercial marketplace metering service](./partner-center-portal/saas-metered-billing.md) - [Managed application metered billing](marketplace-metering-service-apis.md)
marketplace Azure App Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-offer-setup.md
If you haven't already done so, read [Plan an Azure application offer for the
## Create a new offer -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home). 1. On the Home page, select the **Marketplace offers** tile.
If you havenΓÇÖt already done so, read [Plan an Azure application offer for the
1. To generate the offer and continue, select **Create**.
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-
-1. In the left-navigation menu, select **Commercial Marketplace** > **Overview**.
-
-1. On the Overview page, select **+ New offer** > **Azure Application**.
-
- ![Illustrates the left-navigation menu.](./media/create-new-azure-app-offer/new-offer-azure-app.png)
-
-1. In the **New offer** dialog box, enter an **Offer ID**. This is a unique identifier for each offer in your account. This ID is visible in the URL of the commercial marketplace listing and Azure Resource Manager templates, if applicable. For example, if you enter test-offer-1 in this box, the offer web address will be `https://azuremarketplace.microsoft.com/marketplace/../test-offer-1`.
-
- * Each offer in your account must have a unique offer ID.
- * Use only lowercase letters and numbers. It can include hyphens and underscores, but no spaces, and is limited to 50 characters.
- * The Offer ID can't be changed after you select **Create**.
-
-1. Enter an **Offer alias**. This is the name used for the offer in Partner Center.
-
- * This name is only visible in Partner Center and it's different from the offer name and other values shown to customers.
- * The Offer alias can't be changed after you select **Create**.
-
-1. To generate the offer and continue, select **Create**.
--- ## Configure your Azure application offer setup details On the **Offer setup** tab, under **Setup details**, you'll choose whether to configure a test drive. You'll also connect your customer relationship management (CRM) system with your commercial marketplace offer.
marketplace Azure App Test Publish https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-test-publish.md
This article explains how to use Partner Center to submit your Azure Application
## Submit the offer for publishing
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home). 1. On the Home page, select the **Marketplace offers** tile.
This article explains how to use Partner Center to submit your Azure Application
1. After all the pages are complete, in the **Notes for certification** box, provide testing instructions to the certification team to ensure that your app is tested correctly. Provide any supplementary notes helpful for understanding your app. 1. To start the publishing process for your offer, select **Publish**. The **Offer overview** page appears and shows the offer's **Publish status**.
-#### [Current view](#tab/current-view)
-
-1. Sign in to the commercial marketplace in [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace/overview).
-1. On the **Overview** page, select the offer you want to publish.
-1. In the upper-right corner of the portal, select **Review and publish**.
-1. Make sure that the **Status** column for each page says **Complete**. The three possible statuses are as follows:
- - **Not started** ΓÇô The page is incomplete.
- - **Incomplete** ΓÇô The page is missing required information or has errors that need to be fixed. You'll need to go back to the page and update it.
- - **Complete** ΓÇô The page is complete. All required data has been provided and there are no errors.
-1. If any of the pages have a status other than **Complete**, select the page name, correct the issue, save the page, and then select **Review and publish** again to return to this page.
-1. After all the pages are complete, in the **Notes for certification** box, provide testing instructions to the certification team to ensure that your app is tested correctly. Provide any supplementary notes helpful for understanding your app.
-1. To start the publishing process for your offer, select **Publish**. The **Offer overview** page appears and shows the offer's **Publish status**.
--- Your offer's publish status will change as it moves through the publication process. For detailed information on this process, see [Validation and publishing steps](review-publish-offer.md#validation-and-publishing-steps). ## Preview and test the offer
marketplace Azure Consumption Commitment Enrollment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-consumption-commitment-enrollment.md
An offer must meet the following requirements to be enrolled in the MACC program
[!INCLUDE [Workspaces view note](./includes/preview-interface.md)]
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home). 1. On the Home page, select the **Marketplace offers** tile.
An offer must meet the following requirements to be enrolled in the MACC program
> [!NOTE] > MACC program status for offers published to Azure Marketplace is updated weekly on Mondays. This means that if you publish an offer that meets the eligibility requirements for the MACC program, the status in Partner Center will not show the Enrolled status until the following Monday.
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. In the left-navigation menu, select **Commercial Marketplace** > **Overview**.
-1. In the **Offers** section, select the offer you want to see.
-1. On the **Offer overview** page, in the **Marketplace programs** section the **Microsoft Azure consumption commitment** status will show either _Enrolled_ or _Not Enrolled_.
-
- :::image type="content" source="media/azure-benefit/enrolled.png" alt-text="Screenshot of the Offer overview page in Partner Center that shows the Microsoft Azure Consumption Commitment status.":::
-
- ***Figure 1: Offer that is enrolled in the MACC program***
-
-> [!NOTE]
-> MACC program status for offers published to Azure Marketplace is updated weekly on Mondays. This means that if you publish an offer that meets the eligibility requirements for the MACC program, the status in Partner Center will not show the Enrolled status until the following Monday.
--- ## Next steps - To learn more about how the MACC program benefits customers and how they can find solutions that are enabled for MACC, see [Azure Consumption Commitment benefit](/marketplace/azure-consumption-commitment-benefit).
marketplace Azure Container Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-container-offer-setup.md
Review [Plan an Azure Container offer](marketplace-containers.md). It will expla
## Create a new offer -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home). 1. On the Home page, select the **Marketplace offers** tile.
Review [Plan an Azure Container offer](marketplace-containers.md). It will expla
> [!IMPORTANT] > After an offer is published, any edits you make to it in Partner Center appear on Azure Marketplace only after you republish the offer. Be sure to always republish an offer after changing it.
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-2. In the left-nav menu, select **Commercial Marketplace** > **Overview**.
-3. On the Overview page, select **+ New offer** > **Azure Container**.
-
- :::image type="content" source="media/azure-container/new-offer-azure-container.png" alt-text="The left pane menu options and the 'New offer' button.":::
-
-> [!IMPORTANT]
-> After an offer is published, any edits you make to it in Partner Center appear on Azure Marketplace only after you republish the offer. Be sure to always republish an offer after changing it.
--- ## New offer Enter an **Offer ID**. This is a unique identifier for each offer in your account.
marketplace Azure Vm Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-offer-setup.md
If you haven't done so yet, review [Plan a virtual machine offer](marketplace-vi
## Create a new offer -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002). 1. On the Home page, select the **Marketplace offers** tile.
Enter an **Offer alias**. The offer alias is the name that's used for the offer
Select **Create** to generate the offer and continue. Partner Center opens the **Offer setup** page.
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-2. On the left pane, select **Commercial Marketplace** > **Overview**.
-3. On the **Overview** page, select **+ New offer** > **Azure Virtual Machine**.
-
- ![Screenshot showing the left pane menu options and the "New offer" button.](./media/create-vm/new-offer-azure-virtual-machine.png)
-
-> [!NOTE]
-> After an offer is published, any edits you make to it in Partner Center appear on Azure Marketplace only after you republish the offer. Be sure to always republish an offer after making changes to it.
-
-Enter an **Offer ID**. This is a unique identifier for each offer in your account.
--- This ID is visible to customers in the web address for the Azure Marketplace offer and in Azure PowerShell and the Azure CLI, if applicable.-- Use only lowercase letters and numbers. The ID can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if you enter **test-offer-1**, the offer web address will be `https://azuremarketplace.microsoft.com/marketplace/../test-offer-1`.-- The Offer ID can't be changed after you select **Create**.-
-Enter an **Offer alias**. The offer alias is the name that's used for the offer in Partner Center.
--- This name is not used on Azure Marketplace. It is different from the offer name and other values that are shown to customers.-
-Select **Create** to generate the offer and continue. Partner Center opens the **Offer setup** page.
--- ## Test drive (optional) [!INCLUDE [Test drives section](includes/test-drives.md)]
marketplace Co Sell Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-configure.md
The Co-sell option is available for the following offer types.
## Go to the Co-sell with Microsoft tab -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home). 1. On the Home page, select the **Marketplace offers** tile.
The Co-sell option is available for the following offer types.
[ ![Illustrates the Co-sell with Microsoft page.](./media/co-sell/co-sell-with-microsoft-tab-workspaces.png) ](./media/co-sell/co-sell-with-microsoft-tab-workspaces.png#lightbox)
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. In the left-navigation menu, select **Commercial Marketplace** > **Overview**.
- > [!TIP]
- > If you don't see **Commercial Marketplace** in the left-navigation, [create a commercial marketplace account in Partner Center](create-account.md) and make sure your account is enrolled in the commercial marketplace program.
-1. On the **Overview** tab, select the offer you want to co-sell.
- > [!NOTE]
- > You can configure co-sell for a new offer that's not yet published or with an offer that's already published.
-
-1. In the menu on the left, select **Co-sell with Microsoft**.
-
- [![Illustrates the Co-sell with Microsoft link in the left navigation.](./media/co-sell/co-sell-with-microsoft-tab.png)](./media/co-sell/co-sell-with-microsoft-tab.png#lightbox)
--- ## Co-sell listings Co-sell listings help Microsoft sales teams market your offer to a wider audience. You must provide the following information to achieve co-sell ready status:
marketplace Co Sell Solution Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-solution-migration.md
If you have a solution in OCP GTM that you want to migrate to Partner Center the
#### Step 1: Similar offer does not exist in commercial marketplace please follow these steps
-#### [Workspaces view](#tab/workspaces-view)
If you do not have an offer already in the commercial marketplace to merge a solution in OCP GTM with, you will first need to create AND PUBLISH an offer in the commercial marketplace (this will retain its co-sell status, incentives, and referral pipeline).
If you do not have an offer already in the commercial marketplace to merge a sol
1. **Continue to Scenario 2 below to complete the merge process.**
-#### [Current view](#tab/current-view)
-
-If you do not have an offer already in the commercial marketplace to merge a solution in OCP GTM with you will first need to create AND PUBLISH an offer in the commercial marketplace (this will retain its co-sell status, incentives, and referral pipeline.)
-
-1. Create a draft offer in commercial marketplace
-
- 1. Select **+ New Offer**
-
- :::image type="content" source="media/co-sell-migrate/new-offer.png" alt-text="New Offer display":::
-
- 2. Complete the required information in each tab.
- • The **Learn more** links and tooltips will guide you through the requirements and details.
- • Optionally, complete the **Resell through CSPs** page (in the left-nav menu below) to resell through the Cloud Solution Provider (CSP) program.
-
- :::image type="content" source="media/co-sell-migrate/offer-setup-nav.png" alt-text="Displays the Offer Setup page with overview options highlighted.":::
- 3. Select **Save Draft**.
- - For detailed instructions on the information you need to provide before your offer can be published, read the appropriate [publishing guide](./publisher-guide-by-offer-type.md).
- - Review the eligibility requirements in the corresponding article for your offer type to finalize the selection and configuration of your offer.
- - Review the publishing patterns for each online store for examples on how your solution maps to an offer type and configuration.
- - [Offer listing best practices - Microsoft commercial marketplace | Microsoft Docs](./gtm-offer-listing-best-practices.md)
-
- > [!TIP]
- > We recommend that you *do not fill out* the data in the **Co-sell with Microsoft** tab. To save you time we will take care of populating this data for you with your existing collateral in OCP GTM during the merge process.
-
- After the merge is complete you can return to the Co-sell with Microsoft tab and make updates if needed. For more information, see [Configure co-sell for a commercial marketplace offer](./co-sell-configure.md).
-1. When complete, select **Review and publish**.
-
- :::image type="content" source="media/co-sell-migrate/co-sell-with-ms.png" alt-text="Co-Sell with Microsoft page is displayed with options highlighted":::
-1. After reviewing all submitted information, select **Publish** to submit your draft offer for certification review. [Learn more about the certification phase](./review-publish-offer.md).:::image type="content" source="media/co-sell-migrate/review-and-publish.png" alt-text="Displays the Review and Publish page.":::
-1. Track the status of your submission on the Overview tab.
-
- :::image type="content" source="media/co-sell-migrate/offer-overview-tab.png" alt-text="Dispalys overview tab":::
-1. We will notify you when our certification review is complete. If we provide actionable feedback, address it, then select **Publish** to initiate a recertification.
-
-1. Once your offer passes certification, preview the offer with the link provided and make any final adjustments you may want. When you're ready, select **Go live** (see button above) to publish your offer to relevant commercial marketplace storefront(s).
-
-1. **Continue to Scenario 2 below to complete the merge process.**
--- #### Scenario 2: Similar offer exists in commercial marketplace please follow these steps Select this option if the solution is already published and live in the commercial marketplace and the solution in OCP GTM and the offer in the commercial marketplace should be merged into a single offer. This will avoid creating duplicate offers.
marketplace Co Sell Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-status.md
You can verify the Co-sell status for an offer on the **Offer overview** page of
## Verify Co-sell status -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home). 1. On the Home page, select the **Marketplace offers** tile.
You can verify the Co-sell status for an offer on the **Offer overview** page of
[![Illustrates the co-sell status in the Marketplace Programs of the Overview page in Partner Center.](./media/co-sell/co-sell-status.png)](./media/co-sell/co-sell-status.png#lightbox)
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. In the left-navigation menu, select **Commercial Marketplace** > **Overview**.
-1. In the **Offer alias** column, select the offer you want. The co-sell status is shown in the Marketplace Programs section of the page.
-
- [![Illustrates the co-sell status in the Marketplace Programs of the Overview page in Partner Center.](./media/co-sell/co-sell-status.png)](./media/co-sell/co-sell-status.png#lightbox)
--- The following table shows all possible co-sell statuses. To learn about the requirements for each co-sell status, see [Co-sell requirements](co-sell-requirements.md). | Status | Comment |
marketplace Create Consulting Service Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-consulting-service-offer.md
To publish a consulting service offer, you must meet certain eligibility require
## Create a consulting service offer -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home). 1. On the Home page, select the **Marketplace offers** tile.
To publish a consulting service offer, you must meet certain eligibility require
1. Enter an **Offer alias**. This is the name used for the offer in Partner Center. It isn't visible in the online stores and is different from the offer name shown to customers. 1. To generate the offer and continue, select **Create**.
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. In the left-navigation menu, select **Commercial Marketplace** > **Overview**.
-1. On the Overview tab, select **+ New offer** > **Consulting service**.
-
- ![Illustrates the left-navigation menu.](./media/new-offer-consulting-service.png)
-
-1. In the **New offer** dialog box, enter an **Offer ID**. This ID is visible in the URL of the commercial marketplace listing. For example, if you enter test-offer-1 in this box, the offer web address will be `https://azuremarketplace.microsoft.com/marketplace/../test-offer-1`.
-
- * Each offer in your account must have a unique offer ID.
- * Use only lowercase letters and numbers. The offer ID can include hyphens and underscores, but no spaces, and is limited to 50 characters.
- * The offer ID can't be changed after you select **Create**.
-
-1. Enter an **Offer alias**. This is the name used for the offer in Partner Center. It isn't visible in the online stores and is different from the offer name shown to customers.
-1. To generate the offer and continue, select **Create**.
--- ## Configure lead management Connect your customer relationship management (CRM) system with your commercial marketplace offer so you can receive customer contact information when a customer expresses interest in your consulting service. You can modify this connection at any time during or after you create the offer. For detailed guidance, see [Customer leads from your commercial marketplace offer](./partner-center-portal/commercial-marketplace-get-customer-leads.md).
marketplace Create Managed Service Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-managed-service-offer.md
To publish a Managed Service offer, you must have earned a Gold or Silver Micros
## Create a new offer -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002). 1. On the Home page, select the **Marketplace offers** tile.
To publish a Managed Service offer, you must have earned a Gold or Silver Micros
1. Enter an **Offer alias**. This is the name used for the offer in Partner Center. It isn't visible in the online stores and is different from the offer name shown to customers. 1. To generate the offer and continue, select **Create**.
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. In the left-navigation menu, select **Commercial Marketplace** > **Overview**.
-1. On the Overview tab, select **+ New offer** > **Managed Service**.
-
- :::image type="content" source="./media/new-offer-managed-service.png" alt-text="Illustrates the left-navigation menu.":::
-
-1. In the **New offer** dialog box, enter an **Offer ID**. This is a unique identifier for each offer in your account. This ID is visible in the URL of the commercial marketplace listing and Azure Resource Manager templates, if applicable. For example, if you enter test-offer-1 in this box, the offer web address will be `https://azuremarketplace.microsoft.com/marketplace/../test-offer-1`.
-
- - Each offer in your account must have a unique offer ID.
- - Use only lowercase letters and numbers. It can include hyphens and underscores, but no spaces, and is limited to 50 characters.
- - The Offer ID can't be changed after you select **Create**.
-
-1. Enter an **Offer alias**. This is the name used for the offer in Partner Center. It isn't visible in the online stores and is different from the offer name shown to customers.
-1. To generate the offer and continue, select **Create**.
--- ## Setup details This section does not apply for this offer type.
marketplace Create New Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-new-saas-offer.md
If you haven't already done so, read [Plan a SaaS offer](plan-saas-offer.md).
## Create a SaaS offer -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home). 1. On the Home page, select the **Marketplace offers** tile.
If you haven't already done so, read [Plan a SaaS offer](plan-saas-offer.md).
+ The offer alias can't be changed after you select **Create**. 1. To generate the offer and continue, select **Create**.
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. In the left-navigation menu, select **Commercial Marketplace** > **Overview**.
-1. On the **Overview** tab, select **+ New offer** > **Software as a Service**.
-
- :::image type="content" source="./media/new-offer-saas.png" alt-text="Illustrates the left-navigation menu and the New offer list.":::
-
-1. In the **New offer** dialog box, enter an **Offer ID**. This ID is visible in the URL of the commercial marketplace listing and Azure Resource Manager templates, if applicable. For example, if you enter **test-offer-1** in this box, the offer web address will be `https://azuremarketplace.microsoft.com/marketplace/../test-offer-1`.
- + Each offer in your account must have a unique offer ID.
- + Use only lowercase letters and numbers. It can include hyphens and underscores, but no spaces, and is limited to 50 characters.
- + The offer ID can't be changed after you select **Create**.
-
-1. Enter an **Offer alias**. This is the name used for the offer in Partner Center.
-
- + This name isn't visible in the commercial marketplace and it's different from the offer name and other values shown to customers.
- + The offer alias can't be changed after you select **Create**.
-1. To generate the offer and continue, select **Create**.
--- ## Configure your SaaS offer setup details On the **Offer setup** tab, under **Setup details**, you'll choose whether to sell your offer through Microsoft or manage your transactions independently. Offers sold through Microsoft are referred to as _transactable offers_, which means that Microsoft facilitates the exchange of money for a software license on the publisher's behalf. For more information on these options, see [Listing options](plan-saas-offer.md#listing-options) and [Determine your publishing option](determine-your-listing-type.md).
marketplace Customer Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/customer-dashboard.md
The [Customers dashboard](https://go.microsoft.com/fwlink/?linkid=2166011) displ
## Access the Customers dashboard -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home). 1. On the Home page, select the **Insights** tile.
The [Customers dashboard](https://go.microsoft.com/fwlink/?linkid=2166011) displ
1. In the left menu, select **Customers**.
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. In the left-nav, select **Commercial Marketplace** > **Analyze** > **Customers**.
--- ## Elements of the Customers dashboard The following sections describe how to use the Customers dashboard and how to read the data. ### Month range
-#### [Workspaces view](#tab/workspaces-view)
- You can find a month range selection at the top-right corner of each page. Customize the output of the **Customers** page graphs by selecting a month range based on the past 6, or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range (computation period) is six months. [ ![Illustrates the month filters on the Customers page.](./media/customer-dashboard/customers-workspace-filters.png) ](./media/customer-dashboard/customers-workspace-filters.png#lightbox)
-#### [Current view](#tab/current-view)
-
-You can find a month range selection at the top-right corner of each page. Customize the output of the **Customers** page graphs by selecting a month range based on the past 6, or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range (computation period) is six months.
---- > [!NOTE] > All metrics in the visualization widgets and export reports honor the computation period selected by the user.
marketplace Downloads Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/downloads-dashboard.md
You will receive a pop-up notification containing a link to the **Downloads** da
## Access the Downloads dashboard -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home). 1. On the Home page, select the **Insights** tile.
You will receive a pop-up notification containing a link to the **Downloads** da
1. In the left menu, select **Downloads**.
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. In the left-nav, select **Commercial Marketplace** > **Analyze** > **Downloads**.
--- ## Lifetime export of commercial marketplace Analytics reports On the Downloads page, end user can do the following:
Support for Lifetime Export Capability of Analytics reports:
A user can schedule asynchronous downloads of reports from the Downloads dashboard.
-#### [Workspaces view](#tab/workspaces-view)
- [![scheduling asynchronous downloads of reports from the Downloads page](media/downloads-dashboard/download-reports-workspaces.png)](media/downloads-dashboard/download-reports.png#lightbox)
-#### [Current view](#tab/current-view)
-
-[![scheduling asynchronous downloads of reports from the Downloads section](media/downloads-dashboard/download-reports.png)](media/downloads-dashboard/download-reports.png#lightbox)
--- ## Next steps - For an overview of analytics reports available in the Partner Center commercial marketplace, see [Analytics for the commercial marketplace in Partner Center](analytics.md).
marketplace Dynamics 365 Business Central Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/dynamics-365-business-central-offer-setup.md
Review [Plan a Dynamics 365 offer](marketplace-dynamics-365.md). It explains the
## Create a new offer -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002). 1. On the Home page, select the **Marketplace offers** tile.
Review [Plan a Dynamics 365 offer](marketplace-dynamics-365.md). It explains the
> [!IMPORTANT] > After an offer is published, any edits you make to it in Partner Center appear on Microsoft AppSource only after you republish the offer. Be sure to always republish an offer after changing it.
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. In the left-nav menu, select **Commercial Marketplace** > **Overview**.
-1. On the Overview page, select **+ New offer** > **Dynamics 365 Business central**.
-
- :::image type="content" source="media/dynamics-365/new-offer-dynamics-365-business-central.png" alt-text="The left pane menu options and the 'New offer' button.":::
-
-> [!IMPORTANT]
-> After an offer is published, any edits you make to it in Partner Center appear on Microsoft AppSource only after you republish the offer. Be sure to always republish an offer after changing it.
--- ## New offer In the dialog box that appears, enter an **Offer ID**. This is a unique identifier for each offer in your account.
marketplace Dynamics 365 Customer Engage Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/dynamics-365-customer-engage-offer-setup.md
Review [Plan a Dynamics 365 offer](marketplace-dynamics-365.md). It will explain
## Create a new offer -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home). 1. On the Home page, select the **Marketplace offers** tile.
Review [Plan a Dynamics 365 offer](marketplace-dynamics-365.md). It will explain
> [!IMPORTANT] > After an offer is published, any edits you make to it in Partner Center appear on Microsoft AppSource only after you republish the offer. Be sure to always republish an offer after changing it.
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-2. In the left-nav menu, select **Commercial Marketplace** > **Overview**.
-3. On the Overview page, select **+ New offer** > **Dynamics 365 apps on Dataverse and Power Apps**.
-
- :::image type="content" source="media/dynamics-365/new-offer-dynamics-365-customer-engagement.png" alt-text="Shows the left pane menu options and the 'New offer' button with Customer Engagement select.":::
-
-> [!IMPORTANT]
-> After an offer is published, any edits you make to it in Partner Center appear on Microsoft AppSource only after you republish the offer. Be sure to always republish an offer after changing it.
--- ## New offer Enter an **Offer ID**. This is a unique identifier for each offer in your account.
marketplace Dynamics 365 Customer Engage Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/dynamics-365-customer-engage-plans.md
Last updated 12/03/2021
If you enabled app license management for your offer, the **Plans overview** tab appears as shown in the following screenshot. Otherwise, go to [Set up Dynamics 365 apps on Dataverse and Power Apps offer technical configuration](dynamics-365-customer-engage-technical-configuration.md).
-#### [Workspaces view](#tab/workspaces-view)
- [ ![Screenshot of the Plan overview tab for a Dynamics 365 apps on Dataverse and Power Apps offer that's been enabled for third-party app licensing.](./media/third-party-license/plan-tab-d365-workspaces.png) ](./media/third-party-license/plan-tab-d365-workspaces.png#lightbox)
-#### [Current view](#tab/current-view)
---- You need to define at least one plan if your offer has app license management enabled. You can create a variety of plans with different options for the same offer. These plans (sometimes referred to as SKUs) can differ in terms of monetization or tiers of service. Later, you will map the Service ID of each plan in your solution package. This enables the Dynamics platform to run a runtime license check against these plans. ## Create a plan
You need to define at least one plan, if your offer has app license management e
On the **Plan listing** tab, you can define the plan name and description as you want them to appear in the commercial marketplace. This information will be shown on the Microsoft AppSource listing page.
-#### [Workspaces view](#tab/workspaces-view)
- 1. In the **Plan name** box, the name you provided earlier for this plan appears here. You can change it at any time. This name will appear in the commercial marketplace as the title of your offer's software plan. 1. In the **Plan description** box, explain what makes this software plan unique and any differences from other plans within your offer. This description may contain up to 500 characters. 1. Select **Save draft**, and then in the breadcrumb at the top of the page, select **Plans**.
On the **Plan listing** tab, you can define the plan name and description as you
1. To create another plan for this offer, at the top of the **Plan overview** page, select **+ Create new plan**. Then repeat the steps in the [Create a plan](#create-a-plan) section. Otherwise, if you're done creating plans, go to the next section: Copy the Service IDs.
-#### [Current view](#tab/current-view)
-
-1. In the **Plan name** box, the name you provided earlier for this plan appears here. You can change it at any time. This name will appear in the commercial marketplace as the title of your offer's software plan.
-1. In the **Plan description** box, explain what makes this software plan unique and any differences from other plans within your offer. This description may contain up to 500 characters.
-1. Select **Save draft**, and then in the top-left, select **Plan overview**.
-
- :::image type="content" source="./media/third-party-license/bronze-plan.png" alt-text="Screenshot shows the Plan overview link on the Plan listing page of an offer in Partner Center.":::
-
-1. To create another plan for this offer, at the top of the **Plan overview** page, select **+ Create new plan**. Then repeat the steps in the [Create a plan](#create-a-plan) section. Otherwise, if you're done creating plans, go to the next section: Copy the Service IDs.
--- ## Copy the Service IDs You need to copy the Service ID of each plan you created so you can map them to your solution package in the next step.
-#### [Workspaces view](#tab/workspaces-view)
- For each plan you created, copy the Service ID to a safe place. You'll add them to your solution package in the next step. The service ID is listed on the **Plan overview** page in the form of `ISV name.offer name.plan ID`. For example, Fabrikam.F365.bronze. [ ![Screenshot of the Plan overview page. The service ID for the plan is highlighted.](./media/third-party-license/service-id-workspaces.png) ](./media/third-party-license/service-id-workspaces.png#lightbox)
-#### [Current view](#tab/current-view)
--- For each plan you created, copy the Service ID to a safe place. You'll add them to your solution package in the next step. The service ID is listed on the **Plan overview** page in the form of `ISV name.offer name.plan ID`. For example, Fabrikam.F365.bronze.-
- :::image type="content" source="./media/third-party-license/service-id.png" alt-text="Screenshot of the Plan overview page. The service ID for the plan is highlighted.":::
--- ## Add Service IDs to your solution package 1. Add the Service IDs you copied in the previous step to your solution package. To learn how, see [Adding license metadata to your solution](/powerapps/developer/data-platform/appendix-add-license-information-to-your-solution) and [Create an AppSource package for your app](/powerapps/developer/data-platform/create-package-app-appsource).
marketplace Dynamics 365 Operations Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/dynamics-365-operations-offer-setup.md
Review [Plan a Dynamics 365 offer](marketplace-dynamics-365.md). It will explain
## Create a new offer -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002). 1. On the Home page, select the **Marketplace offers** tile.
Review [Plan a Dynamics 365 offer](marketplace-dynamics-365.md). It will explain
> [!IMPORTANT] > After an offer is published, any edits you make to it in Partner Center appear on Microsoft AppSource only after you republish the offer. Be sure to always republish an offer after changing it.
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. In the left-nav menu, select **Commercial Marketplace** > **Overview**.
-1. On the Overview page, select **+ New offer** > **Dynamics 365 Operations Apps**.
-
- :::image type="content" source="media/dynamics-365/new-offer-dynamics-365-operations.png" alt-text="The left pane menu options and the 'New offer' button.":::
-
-> [!IMPORTANT]
-> After an offer is published, any edits you make to it in Partner Center appear on Microsoft AppSource only after you republish the offer. Be sure to always republish an offer after changing it.
--- ## New offer Enter an **Offer ID**. This is a unique identifier for each offer in your account.
marketplace Dynamics 365 Operations Validation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/dynamics-365-operations-validation.md
To schedule a final review call, contact [appsourceCRM@microsoft.com](mailto:app
### Option 2: Upload a demo video and LCS screenshots
-#### [Workspaces view](#tab/workspaces-view)
- 1. Record a video and upload the address to the hosting site of your choice. Follow these guidelines: - Viewable by the Microsoft certification team.
To schedule a final review call, contact [appsourceCRM@microsoft.com](mailto:app
[ ![Illustrates a zip file uploaded to the Supplemental content page.](./media//dynamics-365-operations/supplemental-content-workspaces.png) ](./media//dynamics-365-operations/supplemental-content-workspaces.png#lightbox)
-#### [Current view](#tab/current-view)
-
-1. Record a video and upload the address to the hosting site of your choice. Follow these guidelines:
-
- - Viewable by the Microsoft certification team.
- - Less than 20 minutes long.
- - Includes up to three core functionality highlights of your solution in the Dynamics 365 environment.
-
- > [!NOTE]
- > It is acceptable to use an existing marketing video if it meets the guidelines.
-
-2. Take the following screenshots of the [LCS](https://lcs.dynamics.com/) environment that match the offer or solution you want to publish. They must be clear enough for the certification team to read the text. Save the screenshots as JPG files.
-
- 1. Go to **LCS** > **Business Process Modeler** > **Project library**. Take screenshots of all the Process steps. Include the **Diagrams** and **Reviewed** columns, as shown here:
-
- :::image type="content" source="media/dynamics-365-operations/project-library.png" alt-text="Shows the project library window.":::
-
- 2. Go to **LCS** > **Solution Management** > **Test Solution Package**. Take screenshots that include the package overview and contents shown in these examples:
-
- | Field | Image |
- | | |
- | Package overview | [![Screenshot that shows the "Package overview" window.](media/dynamics-365-operations/package-overview-45.png)](media/dynamics-365-operations/package-overview.png#lightbox) |
- | <ul><li>Solution approvers</li></ul> | [![Package overview screen](media/dynamics-365-operations/solution-approvers-45.png)](media/dynamics-365-operations/solution-approvers.png#lightbox) |
- | Package contents<ul><li>Model</li><li>Software deployable package</li></ul> | [![Package contents screen one](media/dynamics-365-operations/package-contents-1-45.png)](media/dynamics-365-operations/package-contents-1.png#lightbox) |
- | <ul><li>GER configuration</li><li>Database backup</li></ul><br>Artifacts are not required in the **GER configuration** section. | [![Package contents screen two](media/dynamics-365-operations/package-contents-2-45.png)](media/dynamics-365-operations/package-contents-2.png#lightbox) |
- | <ul><li>Power BI report model</li><li>BPM artifact</li></ul><br>Artifacts are not required in the **Power BI** section. | [![Package contents screen three](media/dynamics-365-operations/package-contents-3-45.png)](media/dynamics-365-operations/package-contents-3.png#lightbox) |
- | <ul><li>Process data package</li><li>Solution license agreement and privacy policy</li></ul><br>The **GER configuration** and **Power BI report model** sections are optional to include for operations offers. | [![Package contents screen four](media/dynamics-365-operations/package-contents-4-45.png)](media/dynamics-365-operations/package-contents-4.png#lightbox) |
-
- To learn more about each section of the LCS portal, see the [LCS User Guide](/dynamics365/fin-ops-core/dev-itpro/lifecycle-services/lcs-user-guide).
-
-3. Upload to Partner Center.
-
- 1. Create a text document that includes the demo video address and screenshots, or save the screenshots as separate JPG files.
- 2. Add the text and images to a .zip file in [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165290) on the offer's **Supplemental content** tab.
-
- [![Shows the project library window](media/dynamics-365-operations/supplemental-content.png)](media/dynamics-365-operations/supplemental-content.png#lightbox)
--- ## Next steps - To start creating an offer, see [Planning a Microsoft Dynamics 365 offer](marketplace-dynamics-365.md)
marketplace Dynamics 365 Review Publish https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/dynamics-365-review-publish.md
If any of the pages have a status other than **Complete**, you need to correct t
After all pages are complete and you have entered applicable testing notes, select **Publish** to submit your offer. We will email you when a preview version of your offer is available to approve. At that time complete the following steps:
-#### [Workspaces view](#tab/workspaces-view)
- 1. Return to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002). 1. On the Home page, select the **Marketplace offers** tile.
After all pages are complete and you have entered applicable testing notes, sele
1. Select **Review and publish**. 1. Select **Go live** to make your offer publicly available.
-#### [Current view](#tab/current-view)
-
-1. Return to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002).
-1. Select the **Overview** tab in the left-nav menu bar.
-1. Select the offer.
-1. Select **Review and publish**.
-1. Select **Go live** to make your offer publicly available.
--- After you select **Review and publish**, we will perform certification and other verification processes before your offer is published to AppSource. We will notify you when your offer is available in preview so you can go live. If there is an issue, we will notify you with the details and provide guidance on how to fix it. ## Next steps
marketplace Insights Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/insights-dashboard.md
The Marketplace Insights dashboard provides clickstream data, which shouldn't be
## Access the Marketplace insights dashboard -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home). 1. On the Home page, select the **Insights** tile.
The Marketplace Insights dashboard provides clickstream data, which shouldn't be
1. In the left menu, select **Marketplace insights**.
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. In the left-nav, select **Commercial Marketplace** > **Analyze** > **Marketplace insights**.
--- ## Elements of the Marketplace Insights dashboard The Marketplace Insights dashboard displays web telemetry details for Azure Marketplace and AppSource in two separate tabs. The following sections describe how to use the Marketplace Insights dashboard and how to read the data. ### Month range
-#### [Workspaces view](#tab/workspaces-view)
- You can find a month range selection at the top-right corner of each page. Customize the output of the **Marketplace Insights** page graphs by selecting a month range based on the past 6 or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range (computation period) is six months. [ ![Illustrates the month filters on the Marketplace Insights dashboard.](./media/insights-dashboard/marketplace-insights-filters.png) ](./media/insights-dashboard/marketplace-insights-filters.png#lightbox)
-#### [Current view](#tab/current-view)
-
-You can find a month range selection at the top-right corner of each page. Customize the output of the **Marketplace Insights** page graphs by selecting a month range based on the past 6, or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range (computation period) is six months.
---- > [!NOTE] > All metrics in the visualization widgets and export reports honor the computation period selected by the user.
marketplace Iot Edge Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/iot-edge-offer-setup.md
Review [Plan an IoT Edge Module offer](marketplace-iot-edge.md). It will explain
## Create a new offer
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002). 1. On the Home page, select the **Marketplace offers** tile.
Review [Plan an IoT Edge Module offer](marketplace-iot-edge.md). It will explain
> [!IMPORTANT] > After an offer is published, any edits you make to it in Partner Center appear on Azure Marketplace only after you republish the offer. Be sure to always republish an offer after changing it.
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-2. In the left-nav menu, select **Commercial Marketplace** > **Overview**.
-3. On the Overview page, select **+ New offer** > **IoT Edge module**.
-
- :::image type="content" source="media/iot-edge/new-offer-iot-edge.png" alt-text="The left pane menu options and the 'New offer' button.":::
-
-> [!IMPORTANT]
-> After an offer is published, any edits you make to it in Partner Center appear on Azure Marketplace only after you republish the offer. Be sure to always republish an offer after changing it.
--- ## New offer Enter an **Offer ID**. This is a unique identifier for each offer in your account.
marketplace License Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/license-dashboard.md
This article provides information about the License dashboard in the commercial
## Check license usage -
-#### [Workspaces view](#tab/workspaces-view)
- To check license usage of ISV apps in Partner Center, do the following: 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
To check license usage of ISV apps in Partner Center, do the following:
[ ![Screenshot of the License dashboard in Partner Center.](./media/license-dashboard/license-dashboard-workspaces.png) ](./media/license-dashboard/license-dashboard-workspaces.png#lightbox)
-#### [Current view](#tab/current-view)
-
-To check license usage of ISV apps in Partner Center, do the following:
-
-1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165507).
-1. In the left-navigation menu, select **Commercial Marketplace** > **Analyze** > **License**.
----

## Elements of the License dashboard

The following sections describe how to use the License dashboard and how to read the data.
The following sections describe how to use the License dashboard and how to read
You can find a month range selection at the top-right corner of the page. Customize the output of the widgets on the page by selecting a month range based on the past 6 or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range (computation period) is six months.
-#### [Workspaces view](#tab/workspaces-view)
- [ ![Screenshot of the month range selections on the License dashboard in Partner Center.](./media/license-dashboard/license-workspace-filters.png) ](./media/license-dashboard/license-workspace-filters.png#lightbox)
-#### [Current view](#tab/current-view)
----

## Customers widget

The _Customers widget_ shows the current number of customers. The trend chart shows the month-over-month number of customers.
marketplace Manage Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/manage-account.md
Once you've [created a Partner Center account](./create-account.md), use the [co
## Access your account settings -
-#### [Workspaces view](#tab/workspaces-view)
- If you have not already done so, you (or your organization's administrator) should access the [account settings](https://partner.microsoft.com/dashboard/account/v3/organization/legalinfo#mpn) for your Partner Center account.

1. Sign in to the [commercial marketplace dashboard](https://partner.microsoft.com/dashboard/home) in Partner Center with the account you want to access. If you're part of multiple accounts and have signed in with a different one, you can [switch accounts](switch-accounts.md).
If you have not already done so, you (or your organization's administrator) shou
[ ![Screenshot of the developer tab on the legal page in Account settings.](./media/manage-accounts/developer-tab-workspaces.png) ](./media/manage-accounts/developer-tab-workspaces.png#lightbox)
-#### [Current view](#tab/current-view)
-
-If you have not already done so, you (or your organization's administrator) should access the [account settings](https://go.microsoft.com/fwlink/?linkid=2165291) for your Partner Center account.
-
-1. Sign in to the [commercial marketplace dashboard](https://go.microsoft.com/fwlink/?linkid=2165290) in Partner Center with the account you want to access. If youΓÇÖre part of multiple accounts and have signed in with a different, you can [switch accounts](switch-accounts.md).
-1. In the top-right, select **Settings** (gear icon), and then select **Account settings**.
-
- :::image type="content" source="media/manage-accounts/settings-account.png" alt-text="Screenshot showing the Account Settings option in Partner Center.":::
-
-1. Under **Account settings**, select **Legal**, then the **Developer** tab to view details related to your commercial marketplace account.
-
- :::image type="content" source="media/manage-accounts/developer-tab.png" alt-text="Screenshot showing the Developer tab." lightbox="media/manage-accounts/developer-tab.png":::
---

### Account settings page

When you select **Settings** and expand **Account settings**, the default view is **Legal info**. This page can have up to three tabs, depending on the programs you have enrolled in: _Partner_, _Reseller_, and _Developer_.
A payout profile is the bank account to which proceeds are sent from your sales.
### To set up your payout profile
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to the [commercial marketplace dashboard](https://partner.microsoft.com/dashboard/home) in Partner Center with the account you want to access. 1. In the top-right, select **Settings** (gear icon), and then select **Account settings**.
A payout profile is the bank account to which proceeds are sent from your sales.
1. For more information about setting up your payout profile, see [Set up your payout account and tax forms](/partner-center/set-up-your-payout-account).
-#### [Current view](#tab/current-view)
-
-1. Go to the [commercial marketplace overview](https://partner.microsoft.com/dashboard/commercial-marketplace/overview) page in Partner Center.
-1. In the **Profile** section, next to **Payout Profile**, select **Update**.
-
- > [!NOTE]
- > In you don't see the **Payout and tax** section in the left menu, contact your global admin or account admin for permissions.
-
-1. For more information about setting up your payout profile, see [Set up your payout account and tax forms](/partner-center/set-up-your-payout-account).
---

> [!IMPORTANT]
> Changing your payout account can delay your payments by up to one payment cycle. This delay occurs because we need to verify the account change, just as we do when first setting up the payout account. You'll still get paid for the full amount after your account has been verified; any payments due for the current payment cycle will be added to the next one.
marketplace Anomaly Detection Service For Metered Billing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/partner-center-portal/anomaly-detection-service-for-metered-billing.md
If one of the following cases applies, you can adjust the usage amount in Partne
To submit a support ticket related to metered billing anomalies:
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home) with your work account. 1. On the Home page, select the **Help + support** tile.
To submit a support ticket related to metered billing anomalies:
For more publisher support options, see [Support for the commercial marketplace program in Partner Center](../support.md).
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home) with your work account.
-1. In the menu on the upper-right of the page, select the **Support** icon. The **Help and support** pane appears on the right side of the page.
-1. For help with the commercial marketplace, select **Commercial Marketplace**.
- ![Illustrates the support pane.](../media/support/commercial-marketplace-support-pane.png)
-1. In the **Problem summary** box, enter **commercial marketplace > metered billing**.
-1. In the **Problem type** box, select one of the following:
- - **Commercial Marketplace > Metered Billing > Wrong usage sent for Azure Applications offer**
- - **Commercial Marketplace > Metered Billing > Wrong usage sent for SaaS offer**
-1. Under **Next step**, select **Review solutions**.
-1. Review the recommended documents, if any, or select **Provide issue details** to submit a support ticket.
-
-For more publisher support options, see [Support for the commercial marketplace program in Partner Center](../support.md).
---

## Next steps

- Learn about the [Marketplace metering service API](../marketplace-metering-service-apis.md).
marketplace Revenue Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/revenue-dashboard.md
The [Revenue dashboard](https://partner.microsoft.com/dashboard/commercial-marke
## Access the Revenue dashboard -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home). 1. On the Home page, select the **Insights** tile.
The [Revenue dashboard](https://partner.microsoft.com/dashboard/commercial-marke
[ ![Illustrates the Revenue dashboard.](./media/revenue-dashboard/revenue-dashboard.png) ](./media/revenue-dashboard/revenue-dashboard.png#lightbox)
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-
-1. In the [Commercial Marketplace dashboard](https://partner.microsoft.com/dashboard/commercial-marketplace/overview) in Partner Center, expand the **[Analyze](https://partner.microsoft.com/dashboard/commercial-marketplace/analytics/summary)** section and select **Revenue**.
-
- :::image type="content" source="./media/revenue-dashboard/revenue-dashboard-nav.png" alt-text="Illustrates the Revenue dashboard link in the left nav of the Partner Center Home page.":::
---

## Elements of the Revenue dashboard

The following sections describe how to use the Revenue dashboard and how to read the data.
marketplace Summary Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/summary-dashboard.md
The [Summary dashboard](https://go.microsoft.com/fwlink/?linkid=2165765) present
## Access the Summary dashboard -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home). 1. On the Home page, select the **Insights** tile.
The [Summary dashboard](https://go.microsoft.com/fwlink/?linkid=2165765) present
1. In the left menu, select **Summary**.
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. In the left-nav, select **Commercial Marketplace** > **Analyze** > **Summary**.
---

## Elements of the Summary dashboard

The following sections describe how to use the summary dashboard and how to read the data.

### Month range
-#### [Workspaces view](#tab/workspaces-view)
- You can find a month range selection at the top-right corner of each page. Customize the output of the **Summary** page graphs by selecting a month range based on the past specified number of months, or by selecting a custom month range with a maximum duration of 12 months. The default month range (computation period) is six months. [ ![Illustrates the monthly range options on the summary dashboard.](./media/summary-dashboard/summary-dashboard-filters.png) ](./media/summary-dashboard/summary-dashboard-filters.png#lightbox)
-#### [Current view](#tab/current-view)
-
-You can find a month range selection at the top-right corner of each page. Customize the output of the **Summary** page graphs by selecting a month range based on the past 3, 6, or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range (computation period) is six months.
----

> [!NOTE]
> All metrics in the visualization widgets and export reports honor the computation period selected by the user.
The Orders widget on the **Summary** dashboard displays the current orders for a
[![Illustrates the Orders widget on the summary dashboard.](./media/summary-dashboard/orders-widget.png)](./media/summary-dashboard/orders-widget.png#lightbox)

You can also go to the Orders report by selecting the **Orders Dashboard** link in the lower-left corner of the widget.

### Customers widget
marketplace Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/support.md
Microsoft provides support for a wide variety of products and services. Finding
## Get help or open a support ticket
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home) with your work account. If you have not yet done so, you will need to [create a Partner Center account](create-account.md). 1. On the Home page, select the **Help + support** tile.
If you cannot find your answer in the self help, select **Provide issue details*
>[!Note] >If you have not signed in to Partner Center, you may be required to sign in before you can create a ticket.
-#### [Current view](#tab/current-view)
-
-1. Sign in with your work account. If you have not yet done so, you will need to [create a Partner Center account](create-account.md).
-
-1. In the menu on the upper-right of the page, select the **Support** icon. The **Help and support** pane appears on the right side of the page.
-
-1. For help with the commercial marketplace, select **Commercial Marketplace**.
-
- ![Support drop-down menu](./media/support/commercial-marketplace-support-pane.png)
-
-1. In the **Problem summary** box, enter a brief description of the issue.
-
-1. In the **Problem type** box, do one of the following:
-
- - **Option 1**: Enter keywords such as: Marketplace, Azure app, SaaS offer, account management, lead management, deployment issue, payout, or co-sell offer migration. Then select a problem type from the recommended list that appears.
-
- - **Option 2**: Select **Browse topics** from the **Category** list and then select **Commercial Marketplace**. Then select the appropriate **Topic** and **Subtopic**.
-
-1. After you have found the topic of your choice, select **Review Solutions**.
-
- ![Next step](./media/support/next-step.png)
-
-The following options are shown:
--- To select a different topic, click **Select a different issue**.-- To help solve the issue, review the recommended steps and documents, if available.-
- ![Recommended solutions](./media/support/recommended-solutions.png)
-
-If you cannot find your answer in the self help, select **Provide issue details**. Complete all required fields to speed up the resolution process, then select **Submit**.
-
->[!Note]
->If you have not signed in to Partner Center, you may be required to sign in before you can create a ticket.
---

## Track your existing support requests
-#### [Workspaces view](#tab/workspaces-view)
- 1. To review your open and closed tickets, sign in to [Partner Center](https://partner.microsoft.com/dashboard/home) with your work account. 1. On the Home page, select the **Help + support** tile. [ ![Illustrates the Partner Center Home page with the Help + support tile highlighted.](./media/workspaces/partner-center-help-support-tile.png) ](./media/workspaces/partner-center-help-support-tile.png#lightbox)
-#### [Current view](#tab/current-view)
-
-To review your open and closed tickets, in the left-navigation menu, select **Commercial Marketplace** > **Support**.
---

## Record issue details with a HAR file

To help support agents troubleshoot your issue, consider attaching an HTTP Archive format (HAR) file to your support ticket. HAR files are logs of network requests in a web browser.
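Because a HAR file is plain JSON, you can list the captured requests and flag values that might be sensitive before attaching the file to a ticket. The following is a minimal sketch that uses only the Python standard library; the file name is illustrative.

```python
import json

# Path is illustrative; point this at the HAR file you exported from your browser's dev tools.
HAR_PATH = "partner-center-session.har"

with open(HAR_PATH, encoding="utf-8") as f:
    har = json.load(f)

# A HAR file stores each captured request under log -> entries.
for entry in har["log"]["entries"]:
    request = entry["request"]
    response = entry["response"]
    print(f'{response["status"]} {request["method"]} {request["url"]}')

    # Flag headers that commonly carry secrets so you can scrub them before sharing.
    for header in request.get("headers", []):
        if header["name"].lower() in ("authorization", "cookie"):
            print(f'  contains sensitive header: {header["name"]}')
```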
marketplace Switch Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/switch-accounts.md
You can be part of more than one account. This article describes how to see if y
## View and switch accounts -
-#### [Workspaces view](#tab/workspaces-view)
You can check to see if you are part of multiple accounts by the presence of the _account picker_. In the top-right, select your account icon, as seen highlighted in the following screenshot.
In the following example, the signed-in user is part of the four highlighted acc
[ ![Screenshot of accounts that can be selected with the account picker.](./media/manage-accounts/account-picker-two-workspaces.png) ](./media/manage-accounts/account-picker-two-workspaces.png#lightbox)
-#### [Current view](#tab/current-view)
-
-You can check to see if you are part of multiple accounts by the presence of the *account picker* in the left navigation menu, as seen highlighted in the following screenshot.
-
-[ ![Screenshot of the account picker in the left-nav of Partner Center.](./media/manage-accounts/account-picker.png) ](./media/manage-accounts/account-picker.png#lightbox)
-
-If you don't see the *account picker*, you are part of one account only. You can find the details of this account on the **Account settings** > **Organization profile** > **Legal** > **Developer** tab in Partner Center.
-
-When you select this picker, all the accounts that you are a part of appear as a list. You can then select any of them to switch to that account. After you switch, everything in Partner Center appears in the context of that account.
-
-> [!NOTE]
-> Partner Center uses [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) for multi-user account access and management. Your organization's Azure AD is automatically associated with your Partner Center account as part of the enrollment process.
-
-In the following example, the signed-in user is part of the three highlighted accounts. The user can switch between them by clicking on an account.
-
-[ ![Screenshot of accounts that can be selected with the account picker.](./media/manage-accounts/account-picker-two.png) ](./media/manage-accounts/account-picker-two.png#lightbox)
---

## Next steps

- [Add and manage users for the commercial marketplace](add-manage-users.md)
marketplace Test Drive Hosted Detailed Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/test-drive-hosted-detailed-config.md
This article describes how to configure a hosted test drive for Dynamics 365 app
## Configure for Dynamics 365 apps on Dataverse and Power Apps -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace/overview). 2. If you can't access the above link, you need to submit a request [here](https://appsource.microsoft.com/partners/list-an-app) to publish your application. Once we review the request, you will be granted access to start the publish process. 3. Find an existing **Dynamics 365 apps on Dataverse and Power Apps** offer or create a new **Dynamics 365 apps on Dataverse and Power Apps** offer.
This article describes how to configure a hosted test drive for Dynamics 365 app
- **User manual** – A PDF user manual that helps test drive users understand how to use your app (required).
- **Test drive demo video** – A video that showcases your app (optional).
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace/overview).
-2. If you can't access the above link, you need to submit a request [here](https://appsource.microsoft.com/partners/list-an-app) to publish your application. Once we review the request, you will be granted access to start the publish process.
-3. Find an existing **Dynamics 365 apps on Dataverse and Power Apps** offer or create a new **Dynamics 365 apps on Dataverse and Power Apps** offer.
-4. Select the **Enable a test drive** check box and select a **Type of test drive** (see bullet below), then select **Save**.
-
- [![Selecting the 'Enable a test drive' check box.](media/test-drive/enable-test-drive-check-box.png)](media/test-drive/enable-test-drive-check-box.png#lightbox)
-
- - **Type of test drive** ΓÇô Choose **Dynamics 365 apps on Dataverse and Power Apps**. This indicates that Microsoft will host and maintain the service that performs the test drive user provisioning and deprovisioning.
-
-5. Grant Microsoft AppSource permission to provision and deprovision test drive users in your tenant using [these instructions](./test-drive-azure-subscription-setup.md). In this step, you will generate the **Azure AD App ID** and **Azure AD App Key** values mentioned below.
-6. Complete these fields on the **Test drive technical configuration** page.
-
- [![The test drive technical configuration page.](media/test-drive/technical-config-details.png)](media/test-drive/technical-config-details.png#lightbox)
-
- - **Max concurrent test drives** ΓÇô The number of concurrent users that can have an active test drive running at the same time. Each user will consume a Dynamics license while their test drive is active, so ensure you have at least this many Dynamics licenses available for test drive users. We recommended 3 to 5.
- - **Test drive duration** ΓÇô The number of hours the user's test drive will be active. After the time has expired, the user will be deprovisioned from your tenant. We recommended 2-24 hours depending on the complexity of your app. The user can always request another test drive if they run out of time and want to access the test drive again.
- - **Instance URL**
- - *Customer Engagement* ΓÇô The URL the test drive user will be sent to when they start the test drive. This is typically the URL of your Dynamics 365 instance on which your app and sample data is installed. Example value: `https://testdrive.crm.dynamics.com`.
- - *Canvas App (Power Apps)*
- 1. Open the **PowerApps portal** page and sign in.
- 2. Select **Apps** and then the ellipses at the app.
- 4. Select **Details**.
- 5. Copy the **Web link** from the **Details** tab:
-
- [ ![Shows the TestDrive Canvas App window.](./media/test-drive/testdrive-canvas-app.png) ](./media/test-drive/testdrive-canvas-app.png#lightbox)
-
- - **Instance web API URL**
- - *Customer Engagement* ΓÇô The Web API URL for your Dynamics 365 Instance. Retrieve this value by signing into your Microsoft Dynamics 365 instance, selecting **Setting** > **Customization** > **Developer Resources** > **Instance Web API**, and copying the address (URL). Example value:
-
- :::image type="content" source="./media/test-drive/sample-web-api-url.png" alt-text="An example of Instance Web API.":::
-
- - *Canvas App (Power Apps)* ΓÇô If you are not using CE/Dataverse as a backend to your Canvas App, use `https://localhost` as a placeholder.
-
- - **Role name**
- - *Customer Engagement* ΓÇô The name of the custom Dynamics 365 security role you created for test drive or you can use an existing role. A new role should have minimum required privileges added to the role to sign into a Customer Engagement instance. Refer to [Minimum privileges required to sign into Microsoft Dynamics 365](https://community.dynamics.com/crm/b/crminogic/archive/2016/11/24/minimum-privileges-required-to-login-microsoft-dynamics-365). This is the role that will be assigned to users during their test drive. Example value: `testdriverole`.
- - *Canvas App (Power Apps)* ΓÇô Use "NA" when not using CE/Dataverse as the backend data source.
-
- > [!IMPORTANT]
- > Ensure the security group check is not added. This allows the user to be synced to the Customer Engagement instance.
-
- - **Azure Active Directory tenant ID** ΓÇô The ID of the Azure tenant for your Dynamics 365 instance. To retrieve this value, sign in to Azure portal and navigate to **Azure Active Directory** > **Properties** and copy the directory ID. Example value: 172f988bf-86f1-41af-91ab-2d7cd01112341.
- - **Azure Active Directory tenant name** ΓÇô The name of the Azure Tenant for your Dynamics 365 Instance. Use the format `<tenantname>.onmicrosoft.com`. Example Value: `testdrive.onmicrosoft.com`.
- - **Azure Active Directory application ID** ΓÇô The ID of the Azure Active Directory (AD) app you created in Step 5. Example value: `53852862-a2ae-4e43-9461-faa49650a096`.
- - **Azure Active Directory application client secret** ΓÇô Secret for the Azure AD app created in Step 5. Example value: `IJUgaIOfq9b9LbUjeQmzNBW4VGn6grr1l/n3aMrnfdk=`.
-
-7. Provide the marketplace listing details. Select **Language** to see further required fields.
-
- [ ![Illustrates Marketplace listing details page.](./media/test-drive/marketplace-listing-details-workspaces.png) ](./media/test-drive/marketplace-listing-details-workspaces.png#lightbox)
-
- - **Description** ΓÇô An overview of your test drive. This text will be shown to the user while the test drive is being provisioned. This field supports HTML if you want to provide formatted content (required).
- - **User manual** ΓÇô A PDF user manual that helps test drive users understand how to use your app (required).
- - **Test drive demo video** ΓÇô A video that showcases your app (optional).
---

## Dynamics 365 Operations Apps -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace/overview). 2. If you can't access the above link, you need to submit a request [here](https://appsource.microsoft.com/partners/list-an-app) to publish your application. Once we review the request, you will be granted access to start the publish process. 3. Find an existing **Dynamics 365 Operations Apps** offer or create a new **Dynamics 365 Operations Apps** offer.
This article describes how to configure a hosted test drive for Dynamics 365 app
- **User manual** – A PDF user manual that helps test drive users understand how to use your app (required).
- **Test drive demo video** – A video that showcases your app (optional).
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165507).
-2. If you can't access the above link, you need to submit a request [here](https://appsource.microsoft.com/partners/list-an-app) to publish your application. Once we review the request, you will be granted access to start the publish process.
-3. Find an existing **Dynamics 365 Operations Apps** offer or create a new **Dynamics 365 Operations Apps** offer.
-4. Select the **Enable a test drive** check box and select a **Type of test drive** (see bullet below), then select **Save draft**.
-
- [![Select the 'Enable a test drive' check box.](media/test-drive/enable-test-drive-check-box-operations.png)](media/test-drive/enable-test-drive-check-box-operations.png#lightbox)
-
- - **Type of test drive** ΓÇô Choose **Dynamics 365 Operations Apps** option. This means Microsoft will host and maintain the service that performs the test drive user provisioning and deprovisioning.
-
-5. Grant Microsoft AppSource permission to provision and deprovision test drive users in your tenant using [these instructions](https://github.com/Microsoft/AppSource/blob/master/Microsoft%20Hosted%20Test%20Drive/Setup-your-Azure-subscription-for-Dynamics365-Microsoft-Hosted-Test-Drives.md). In this step, you will generate the **Azure AD App ID** and **Azure AD App Key** values mentioned below.
-6. Complete these fields on the **Test drive technical configuration** page.
-
- [![The Marketplace technical configuration page.](media/test-drive/technical-config-details.png)](media/test-drive/technical-config-details.png#lightbox)
-
- - **Max concurrent test drives** ΓÇô The number of concurrent users that can have an active test drive running at the same time. Each user will consume a Dynamics license while their test drive is active, so ensure you have at least this many Dynamics licenses available for test drive users. We recommended 3 to 5.
- - **Test drive duration** ΓÇô The number of hours the user's test drive will be active. After the time has expired, the user will be deprovisioned from your tenant. We recommended 2-24 hours depending on the complexity of your app. The user can always request another test drive if they run out of time and want to access the test drive again.
- - **Instance URL** ΓÇô The URL the test drive user will be sent to when they start the test drive. This is typically the URL of your Dynamics 365 instance on which your app and sample data is installed. Example value: `https://testdrive.crm.dynamics.com`.
- - **Azure Active Directory tenant ID** ΓÇô The ID of the Azure tenant for your Dynamics 365 instance. To retrieve this value, sign in to Azure portal and navigate to **Azure Active Directory** > **Properties** and copy the directory ID. Example value: 172f988bf-86f1-41af-91ab-2d7cd01112341.
- - **Azure Active Directory tenant name** ΓÇô The name of the Azure Tenant for your Dynamics 365 Instance. Use the format `<tenantname>.onmicrosoft.com`. Example Value: `testdrive.onmicrosoft.com`.
- - **Azure Active Directory application ID** ΓÇô The ID of the Azure Active Directory (AD) app you created in Step 5. Example value: `53852862-a2ae-4e43-9461-faa49650a096`.
- - **Azure Active Directory application client secret** ΓÇô Secret for the Azure AD app created in Step 5. Example value: `IJUgaIOfq9b9LbUjeQmzNBW4VGn6grr1l/n3aMrnfdk=`.
- - **Trial Legal Entity** ΓÇô Provide a Legal Entity to assign a trial user. You can create a new one at [Create or modify a legal entity](/dynamicsax-2012/appuser-itpro/create-or-modify-a-legal-entity).
- - **Role name** ΓÇô The AOT name (Application Object Tree) of the custom Dynamics 365 security role you created for test drive. This is the role that will be assigned to users during their test drive.
-
- :::image type="content" source="./media/test-drive/security-config.png" alt-text="The security configuration page.":::
-
-7. Provide the marketplace listing details. Select **Language** to see further required fields.
-
- [![The Marketplace listing details page.](media/test-drive/marketplace-listing-details.png)](media/test-drive/marketplace-listing-details.png#lightbox)
-
- - **Description** ΓÇô An overview of your test drive. This text will be shown to the user while the test drive is being provisioned. This field supports HTML if you want to provide formatted content.
- - **User manual** ΓÇô A PDF user manual that helps test drive users understand how to use your app.
- - **Test drive demo video** ΓÇô A video that showcases your app (optional).
--- <!-- ## Next steps
marketplace Test Publish Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/test-publish-saas-offer.md
This article explains how to use Partner Center to submit your SaaS offer for pu
## Submit your offer for publishing -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002). 1. On the Home page, select the **Marketplace offers** tile.
This article explains how to use Partner Center to submit your SaaS offer for pu
Your offer's publish status will change as it moves through the publication process. For detailed information on this process, see [Validation and publishing steps](review-publish-offer.md#validation-and-publishing-steps).
-#### [Current view](#tab/current-view)
-
-1. Sign in to the commercial marketplace dashboard in [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace/overview).
-1. On the **Overview** page, select the offer you want to publish.
-1. In the upper-right corner of the portal, select **Review and publish**.
-1. Make sure that the **Status** column for each page says **Complete**. The three possible statuses are as follows:
-
- - **Not started** ΓÇô The page is incomplete.
- - **Incomplete** ΓÇô The page is missing required information or has errors that need to be fixed. You'll need to go back to the page and update it.
- - **Complete** ΓÇô The page is complete. All required data has been provided and there are no errors.
-
-1. If any of the pages have a status other than **Complete**, select the page name, correct the issue, save the page, and then select **Review and publish** again to return to this page.
-1. After all the pages are complete, in the **Notes for certification** box, provide testing instructions to the certification team to ensure that your app is tested correctly. Provide any supplementary notes helpful for understanding your app.
-1. To start the publishing process for your offer, select **Publish**. The **Offer overview** page appears and shows the offer's **Publish status**.
-
-Your offer's publish status will change as it moves through the publication process. For detailed information on this process, see [Validation and publishing steps](review-publish-offer.md#validation-and-publishing-steps).
---

## Preview and test your offer

When the offer is ready for your sign-off, we'll send you an email to request that you review and approve your offer preview. You can also refresh the **Offer overview** page in your browser to see if your offer has reached the Publisher sign-off phase. If it has, the **Go live** button and preview links will be available. There will be a link for either Microsoft AppSource preview, Azure Marketplace preview, or both, depending on the options you chose when creating your offer. If you chose to sell your offer through Microsoft, anyone who has been added to the preview audience can test the acquisition and deployment of your offer to ensure it meets your requirements during this stage.
marketplace Usage Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/usage-dashboard.md
The [Usage dashboard](https://go.microsoft.com/fwlink/?linkid=2166106) displays
## Access the Usage dashboard -
-#### [Workspaces view](#tab/workspaces-view)
- 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home). 1. On the Home page, select the **Insights** tile.
The [Usage dashboard](https://go.microsoft.com/fwlink/?linkid=2166106) displays
1. In the left menu, select **Usage**.
-#### [Current view](#tab/current-view)
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. In the left-nav, select **Commercial Marketplace** > **Analyze** > **Usage**.
---

## Elements of the Usage dashboard

The following sections describe how to use the Usage dashboard and how to read the data.

### Month range
-#### [Workspaces view](#tab/workspaces-view)
- You can find a month range selection at the top-right corner of each page. Customize the output of the **Usage** page graphs by selecting a month range based on the past 6 or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range (computation period) is six months. [ ![Illustrates the Month filters on the Usage dashboard.](./media/usage-dashboard/usage-dashboard-filters.png) ](./media/usage-dashboard/usage-dashboard-filters.png#lightbox)
-#### [Current view](#tab/current-view)
-
-You can find a month range selection at the top-right corner of each page. Customize the output of the **Usage** page graphs by selecting a month range based on the past 6 or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range (computation period) is six months.
----

### Usage trend

In this section, you will find total usage hours and trend for all your offers that are consumed by your customers during the selected computation period. Metrics and growth trends are represented by a line chart. Show the value for each month by hovering over the line on the chart. The percentage value below the usage metrics in the widget represents the amount of growth or decline during the selected computation period.
migrate Migrate Appliance Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-appliance-architecture.md
Title: Azure Migrate appliance architecture
-description: Provides an overview of the Azure Migrate appliance used in server discovery, assessment and migration.
+description: Provides an overview of the Azure Migrate appliance used in server discovery, assessment, and migration.
ms.
The Azure Migrate appliance is used in the following scenarios.
## Deployment methods
-The appliance can be deployed using a couple of methods:
+The appliance can be deployed using the following methods:
- The appliance can be deployed using a template for servers running in VMware or Hyper-V environment ([OVA template for VMware](how-to-set-up-appliance-vmware.md) or [VHD for Hyper-V](how-to-set-up-appliance-hyper-v.md)). - If you don't want to use a template, you can deploy the appliance for VMware or Hyper-V environment using a [PowerShell installer script](deploy-appliance-script.md).
The appliance can be deployed using a couple of methods:
The appliance has the following

-- **Appliance configuration manager**: This is a web application which can be configured with source details to start the discovery and assessment of servers.
-- **Discovery agent**: The agent collects server configuration metadata which can be used to create as on-premises assessments.
-- **Assessment agent**: The agent collects server performance metadata which can be used to create performance-based assessments.
+- **Appliance configuration manager**: This is a web application, which can be configured with source details to start the discovery and assessment of servers.
+- **Discovery agent**: The agent collects server configuration metadata, which can be used to create on-premises assessments.
+- **Assessment agent**: The agent collects server performance metadata, which can be used to create performance-based assessments.
- **Auto update service**: The service keeps all the agents running on the appliance up-to-date. It automatically runs once every 24 hours. - **DRA agent**: Orchestrates server replication, and coordinates communication between replicated servers and Azure. Used only when replicating servers to Azure using agentless migration. - **Gateway**: Sends replicated data to Azure. Used only when replicating servers to Azure using agentless migration.
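If discovery or assessment fails, it can help to first confirm that the appliance can reach the ports listed in the table that follows. Below is a minimal sketch that uses only the Python standard library; the host names are placeholders for your own vCenter Server, Hyper-V hosts, and Windows or Linux servers.

```python
import socket

# Placeholder targets; replace with your own vCenter Server, Hyper-V hosts, or servers.
TARGETS = [
    ("vcenter.contoso.local", 443),    # vCenter Server (HTTPS)
    ("hyperv01.contoso.local", 5985),  # Hyper-V host or Windows server (WinRM over HTTP)
    ("linux01.contoso.local", 22),     # Linux server (SSH)
]

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in TARGETS:
    state = "reachable" if port_open(host, port) else "NOT reachable"
    print(f"{host}:{port} is {state}")
```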
The appliance communicates with the discovery sources using the following proces
**Start discovery** | The appliance communicates with the vCenter server on TCP port 443 by default. If the vCenter server listens on a different port, you can configure it in the appliance configuration manager. | The appliance communicates with the Hyper-V hosts on WinRM port 5985 (HTTP). | The appliance communicates with Windows servers over WinRM port 5985 (HTTP) with Linux servers over port 22 (TCP).
**Gather configuration and performance metadata** | The appliance collects the metadata of servers running on vCenter Server(s) using vSphere APIs by connecting on port 443 (default port) or any other port each vCenter Server listens on. | The appliance collects the metadata of servers running on Hyper-V hosts using a Common Information Model (CIM) session with hosts on port 5985.| The appliance collects metadata from Windows servers using Common Information Model (CIM) session with servers on port 5985 and from Linux servers using SSH connectivity on port 22.
**Send discovery data** | The appliance sends the collected data to Azure Migrate: Discovery and assessment and Azure Migrate: Server Migration over SSL port 443.<br/><br/> The appliance can connect to Azure over the internet or via ExpressRoute private peering or Microsoft peering circuits. | The appliance sends the collected data to Azure Migrate: Discovery and assessment over SSL port 443.<br/><br/> The appliance can connect to Azure over the internet or via ExpressRoute private peering or Microsoft peering circuits. | The appliance sends the collected data to Azure Migrate: Discovery and assessment over SSL port 443.<br/><br/> The appliance can connect to Azure over the internet or via ExpressRoute private peering or Microsoft peering circuits.
-**Data collection frequency** | Configuration metadata is collected and sent every 30 minutes. <br/><br/> Performance metadata is collected every 20 seconds and is aggregated to send a data point to Azure every 10 minutes. <br/><br/> Software inventory data is sent to Azure once every 12 hours. <br/><br/> Agentless dependency data is collected every 5 mins, aggregated on appliance and sent to Azure every 6 hours. <br/><br/> The SQL Server configuration data is updated once every 24 hours and the performance data is captured every 30 seconds. <br/><br/> The web apps configuration data is updated once every 24 hours. Performance data is not captured for web apps.| Configuration metadata is collected and sent every 30 mins. <br/><br/> Performance metadata is collected every 30 seconds and is aggregated to send a data point to Azure every 10 minutes.| Configuration metadata is collected and sent every 30 mins. <br/><br/> Performance metadata is collected every 5 minutes and is aggregated to send a data point to Azure every 10 minutes.
+**Data collection frequency** | Configuration metadata is collected and sent every 15 minutes. <br/><br/> Performance metadata is collected every 50 minutes to send a data point to Azure. <br/><br/> Software inventory data is sent to Azure once every 24 hours. <br/><br/> Agentless dependency data is collected every 5 minutes, aggregated on appliance and sent to Azure every 6 hours. <br/><br/> The SQL Server configuration data is updated once every 24 hours and the performance data is captured every 30 seconds. <br/><br/> The web apps configuration data is updated once every 24 hours. Performance data is not captured for web apps.| Configuration metadata is collected and sent every 30 minutes. <br/><br/> Performance metadata is collected every 30 seconds and is aggregated to send a data point to Azure every 15 minutes.| Configuration metadata is collected and sent every 3 hours. <br/><br/> Performance metadata is collected every 5 minutes to send a data point to Azure.
**Assess and migrate** | You can create assessments from the metadata collected by the appliance using Azure Migrate: Discovery and assessment tool.<br/><br/>In addition, you can also start migrating servers running in your VMware environment using Azure Migrate: Server Migration tool to orchestrate agentless server replication.| You can create assessments from the metadata collected by the appliance using Azure Migrate: Discovery and assessment tool. | You can create assessments from the metadata collected by the appliance using Azure Migrate: Discovery and assessment tool.

## Next steps
purview Catalog Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-managed-vnet.md
+
+ Title: Managed Virtual Network and managed private endpoints
+description: This article describes Managed Virtual Network and managed private endpoints in Azure Purview.
+++++ Last updated : 12/12/2021+
+# Customer intent: As a Purview admin, I want to set up Managed Virtual Network and managed private endpoints for my Purview account.
++
+# Use a Managed VNet with your Azure Purview account
+
+> [!IMPORTANT]
+> Currently, Managed Virtual Network and managed private endpoints are available for Azure Purview accounts that are deployed in the following regions:
+> - Canada Central
+> - East US 2
+> - West Europe
+
+## Conceptual overview
+
+This article describes how to configure Managed Virtual Network and managed private endpoints for Azure Purview.
+
+### Supported regions
+
+Currently, Managed Virtual Network and managed private endpoints are available for Azure Purview accounts that are deployed in the following regions:
+- Canada Central
+- East US 2
+- West Europe
+
+### Supported data sources
+
+Currently, the following data sources are supported to have a managed private endpoint and can be scanned using Managed VNet Runtime in Azure Purview:
+
+- Azure Blob Storage
+- Azure Data Lake Storage Gen 2
+- Azure SQL Database
+- Azure Cosmos DB
+- Azure Synapse Analytics
+- Azure Files
+- Azure Database for MySQL
+- Azure Database for PostgreSQL
+
+Additionally, you can deploy managed private endpoints for your Azure Key Vault resources if you need to run scans using authentication options other than managed identities, such as SQL authentication or account key.
+
+### Managed Virtual Network
+
+A Managed Virtual Network in Azure Purview is a virtual network that Azure deploys and manages in the same region as your Purview account. It allows you to scan Azure data sources inside a managed network without having to deploy and manage any self-hosted integration runtime virtual machines in Azure.
++
+You can deploy an Azure Managed Integration Runtime within an Azure Purview Managed Virtual Network. From there, the Managed VNet Runtime will leverage private endpoints to securely connect to and scan supported data sources.
+
+Creating a Managed VNet Runtime within a Managed Virtual Network ensures that the data integration process is isolated and secure.
+
+Benefits of using Managed Virtual Network:
+
+- With a Managed Virtual Network, you can offload the burden of managing the Virtual Network to Azure Purview. You don't need to create and manage VNets or subnets for Azure Integration Runtime to use for scanning Azure data sources.
+- You don't need deep Azure networking knowledge to do data integration securely. A Managed Virtual Network greatly simplifies this work for data engineers.
+- A Managed Virtual Network, together with managed private endpoints, protects against data exfiltration.
+
+> [!IMPORTANT]
+> Currently, Managed Virtual Network is supported only in the same region as the Azure Purview account.
+
+> [!Note]
+> You cannot switch a global Azure integration runtime or self-hosted integration runtime to a Managed VNet Runtime and vice versa.
+
+A Managed VNet is created for your Azure Purview account when you create a Managed VNet Runtime for the first time in your Purview account. You can't view or manage the Managed VNets.
+
+### Managed private endpoints
+
+Managed private endpoints are private endpoints created in the Azure Purview Managed Virtual Network establishing a private link to Purview and Azure resources. Azure Purview manages these private endpoints on your behalf.
++
+Azure Purview supports private links. Private link enables you to access Azure (PaaS) services (such as Azure Storage, Azure Cosmos DB, Azure Synapse Analytics).
+
+When you use a private link, traffic between your data sources and Managed Virtual Network traverses entirely over the Microsoft backbone network. Private Link protects against data exfiltration risks. You establish a private link to a resource by creating a private endpoint.
+
+Private endpoint uses a private IP address in the Managed Virtual Network to effectively bring the service into it. Private endpoints are mapped to a specific resource in Azure and not the entire service. Customers can limit connectivity to a specific resource approved by their organization. Learn more about [private links and private endpoints](../private-link/index.yml).
+
+> [!NOTE]
+> To reduce administrative overhead, it's recommended that you create managed private endpoints to scan all supported Azure data sources.
+
+> [!WARNING]
+> If an Azure PaaS data store (Blob, Azure Data Lake Storage Gen2, Azure Synapse Analytics) has a private endpoint already created against it, and even if it allows access from all networks, Purview would only be able to access it using a managed private endpoint. If a private endpoint does not already exist, you must create one in such scenarios.
+
+A private endpoint connection is created in a "Pending" state when you create a managed private endpoint in Azure Purview. An approval workflow is initiated. The private link resource owner is responsible to approve or reject the connection.
++
+If the owner approves the connection, the private link is established. Otherwise, the private link won't be established. In either case, the Managed private endpoint will be updated with the status of the connection.
++
+Only a Managed private endpoint in an approved state can send traffic to a given private link resource.
+
+### Interactive authoring
+
+Interactive authoring capabilities are used for functionalities like testing a connection, browsing folder and table lists, getting a schema, and previewing data. You can enable interactive authoring when creating or editing an Azure integration runtime that is in a Purview Managed Virtual Network. The backend service pre-allocates compute for interactive authoring; otherwise, compute is allocated every time an interactive operation is performed, which takes more time. The Time To Live (TTL) for interactive authoring is 60 minutes, which means it is automatically disabled 60 minutes after the last interactive authoring operation.
++
+## Deployment Steps
+
+### Prerequisites
+
+Before deploying a Managed VNet and Managed VNet Runtime for an Azure Purview account, ensure you meet the following prerequisites:
+
+1. An Azure Purview account deployed in one of the [supported regions](#supported-regions).
+2. From Azure Purview roles, you must be a Data Curator at the root collection level in your Azure Purview account.
+3. From Azure RBAC roles, you must be a Contributor on the Purview account and on the data source to approve private links.
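To confirm the Azure RBAC prerequisite, you can list the role assignments that apply at the Purview account scope. The following is a minimal sketch using the Azure SDK for Python (azure-identity and azure-mgmt-authorization); the subscription ID, resource group, and account name are placeholders, and the attribute names assume a recent version of the SDK.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

# Placeholder values; replace with your own subscription, resource group, and Purview account.
subscription_id = "00000000-0000-0000-0000-000000000000"
scope = (
    f"/subscriptions/{subscription_id}"
    "/resourceGroups/purview-rg"
    "/providers/Microsoft.Purview/accounts/contoso-purview"
)

credential = DefaultAzureCredential()
client = AuthorizationManagementClient(credential, subscription_id)

# List every role assignment that applies at the Purview account scope,
# including assignments inherited from the resource group and subscription.
for assignment in client.role_assignments.list_for_scope(scope):
    print(assignment.principal_id, assignment.role_definition_id)
```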
+
+### Deploy Managed VNet Runtimes
+
+> [!NOTE]
+> The following guide shows how to register and scan an Azure Data Lake Storage Gen 2 using Managed VNet Runtime.
+
+1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Purview accounts** page and select your _Purview account_.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-azure-portal.png" alt-text="Screenshot that shows the Purview account":::
+
+2. **Open Purview Studio** and navigate to the **Data Map --> Integration runtimes**.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-vnet.png" alt-text="Screenshot that shows Purview Data Map menus":::
+
+3. From the **Integration runtimes** page, select the **+ New** icon to create a new runtime. Select **Azure**, and then select **Continue**.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-ir-create.png" alt-text="Screenshot that shows how to create new Azure runtime":::
+
+4. Provide a name for your Managed VNet Runtime, select the region and configure interactive authoring. Select **Create**.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-ir-region.png" alt-text="Screenshot that shows to create a Managed VNet Runtime":::
+
+5. Deploying the Managed VNet Runtime for the first time triggers multiple workflows in Purview Studio for creating managed private endpoints for Azure Purview and its Managed Storage Account. Click on each workflow to approve the private endpoint for the corresponding Azure resource.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-ir-workflows.png" alt-text="Screenshot that shows deployment of a Managed VNet Runtime":::
+
+6. In the Azure portal, from your Purview account resource blade, approve the managed private endpoint. From the managed storage account blade, approve the managed private endpoints for blob and queue.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-purview.png" alt-text="Screenshot that shows how to approve a managed private endpoint for Purview":::
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-purview-approved.png" alt-text="Screenshot that shows how to approve a managed private endpoint for Purview - approved":::
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-managed-storage.png" alt-text="Screenshot that shows how to approve a managed private endpoint for managed storage account":::
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-managed-storage-approved.png" alt-text="Screenshot that shows how to approve a managed private endpoint for managed storage account - approved":::
+
+7. From **Management**, select **Managed private endpoints** to validate that all managed private endpoints are successfully deployed and approved. All private endpoints must be approved.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-list.png" alt-text="Screenshot that shows managed private endpoints in Purview":::
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-list-approved.png" alt-text="Screenshot that shows managed private endpoints in Purview - approved ":::
+
+### Deploy managed private endpoints for data sources
+
+To scan any data source using the Managed VNet Runtime, you need to deploy and approve a managed private endpoint for the data source before you create a new scan. To deploy and approve a managed private endpoint for a data source, follow these steps, selecting the data source of your choice from the list:
+
+1. Navigate to **Management**, and select **Managed private endpoints**.
+
+2. Select **+ New**.
+
+3. From the list of supported data sources, select the type that corresponds to the data source you are planning to scan using Managed VNet Runtime.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-data-source.png" alt-text="Screenshot that shows how to create a managed private endpoint for data sources":::
+
+4. Provide a name for the managed private endpoint, select the Azure subscription and the data source from the drop-down lists, and then select **Create**.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-data-source-pe.png" alt-text="Screenshot that shows how to select data source for setting managed private endpoint":::
+
+5. From the list of managed private endpoints, select the newly created managed private endpoint for your data source, and then select **Manage approvals in the Azure portal** to approve the private endpoint in the Azure portal.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-data-source-approval.png" alt-text="Screenshot that shows the approval for managed private endpoint for data sources":::
+
+6. The link redirects you to the Azure portal. Under private endpoint connections, select the newly created private endpoint and select **Approve**.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-data-source-pe-azure.png" alt-text="Screenshot that shows how to approve a private endpoint for data sources in Azure portal":::
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-data-source-pe-azure-approved.png" alt-text="Screenshot that shows approved private endpoint for data sources in Azure portal":::
+
+7. In Azure Purview Studio, the managed private endpoint should now show as approved as well.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-list-2.png" alt-text="Screenshot that shows managed private endpoints including data sources' in purview studio":::
+
+### Register and scan a data source using Managed VNet Runtime
+
+#### Register data source
+It is important to register the data source in Azure Purview before setting up a scan for it. Follow these steps to register the data source if you haven't already registered it.
+
+1. Go to your Azure Purview account.
+1. Select **Data Map** on the left menu.
+1. Select **Register**.
+2. On **Register sources**, select your data source
+3. Select **Continue**.
+4. On the **Register sources** screen, do the following:
+
+ 1. In the **Name** box, enter a name that the data source will be listed with in the catalog.
+ 2. In the **Subscription** dropdown list box, select a subscription.
+ 3. In the **Select a collection** box, select a collection.
+ 4. Select **Register** to register the data sources.
+
+For more information, see [Manage data sources in Azure Purview](manage-data-sources.md).
+
+#### Scan data source
+
+You can use any of the following options to scan data sources using Purview Managed VNet Runtime:
++
+- [Using Managed Identity](#scan-using-managed-identity) (Recommended) - As soon as the Azure Purview account is created, a system-assigned managed identity (SAMI) is created automatically in the Azure AD tenant. Depending on the type of resource, specific RBAC role assignments are required for the Purview SAMI to perform the scans.
+
+- [Using other authentication options](#scan-using-other-authentication-options):
+
+ - Account Key or SQL Authentication- Secrets can be created inside an Azure Key Vault to store credentials in order to enable access for Azure Purview to scan data sources securely using the secrets. A secret can be a storage account key, SQL login password, or a password.
+
+ - Service Principal - In this method, you can create a new or use an existing service principal in your Azure Active Directory tenant.
+
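For the Account Key or SQL Authentication option above, the secret that the Purview credential references must already exist in your key vault. The following is a minimal sketch using the Azure SDK for Python (azure-identity and azure-keyvault-secrets); the vault URL, secret name, and secret value are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder values; replace with your own key vault URL and secret name.
VAULT_URL = "https://contoso-purview-kv.vault.azure.net"
SECRET_NAME = "sql-scan-password"

client = SecretClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())

# Store the SQL login password (or storage account key) that the Purview credential will reference.
secret = client.set_secret(SECRET_NAME, "<the SQL login password or account key>")
print(f"Stored secret '{secret.name}', version {secret.properties.version}")
```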
+##### Scan using Managed Identity
+
+To scan a data source using a Managed VNet Runtime and the Purview managed identity, perform these steps:
+
+1. Select the **Data Map** tab on the left pane in the Purview Studio.
+
+1. Select the data source that you registered.
+
+1. Select **View details** > **+ New scan**, or use the **Scan** quick-action icon on the source tile.
+
+1. Provide a **Name** for the scan.
+
+1. Under **Connect via integration runtime**, select the newly created Managed VNet Runtime.
+
+1. For **Credential**, select the managed identity, choose the appropriate collection for the scan, and select **Test connection**. On a successful connection, select **Continue**.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-scan.png" alt-text="Screenshot that shows how to create a new scan using Managed VNet":::
+
+1. Follow the steps to select the appropriate scan rule and scope for your scan.
+
+1. Choose your scan trigger. You can set up a schedule or run the scan once.
+
+1. Review your scan and select **Save and run**.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-scan-run.png" alt-text="Screenshot that shows how to review and run a scan using Managed VNet Runtime":::
+
+##### Scan using other authentication options
+
+You can also use the other supported authentication options to scan data sources with the Managed VNet Runtime. Doing so requires setting up a private connection to the Azure Key Vault where the secret is stored.
+
+To set up a scan using Account Key or SQL Authentication, follow these steps:
+
+1. [Grant Azure Purview access to your Azure Key Vault](manage-credentials.md#grant-azure-purview-access-to-your-azure-key-vault).
+
+2. [Create a new credential in Azure Purview](manage-credentials.md#create-a-new-credential).
+
+3. Navigate to **Management**, and select **Managed private endpoints**.
+
+4. Select **+ New**.
+
+5. From the list of supported data sources, select **Key Vault**.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault.png" alt-text="Screenshot that shows how to create a managed private endpoint for Azure Key Vault":::
+
+6. Provide a name for the managed private endpoint, and select the Azure subscription and the Azure Key Vault from the drop-down lists. Select **Create**.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault-create.png" alt-text="Screenshot that shows how to create a managed private endpoint for Azure Key Vault in Purview Studio":::
+
+7. From the list of managed private endpoints, select the newly created managed private endpoint for your Azure Key Vault, and then select **Manage approvals in the Azure portal** to approve the private endpoint in the Azure portal.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault-approve.png" alt-text="Screenshot that shows how to approve a managed private endpoint for Azure Key Vault":::
+
+8. The link redirects you to the Azure portal. Under private endpoint connections, select the newly created private endpoint for your Azure Key Vault, and then select **Approve**.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault-az-approve.png" alt-text="Screenshot that shows how to approve a private endpoint for an Azure Key Vault in Azure portal":::
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault-az-approved.png" alt-text="Screenshot that shows approved private endpoint for Azure Key Vault in Azure portal":::
+
+9. In Azure Purview Studio, the managed private endpoint should also be shown as approved.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-list-3.png" alt-text="Screenshot that shows managed private endpoints, including Azure Key Vault, in Purview Studio":::
+
+10. Select the **Data Map** tab on the left pane in the Purview Studio.
+
+11. Select the data source that you registered.
+
+12. Select **View details** > **+ New scan**, or use the **Scan** quick-action icon on the source tile.
+
+13. Provide a **Name** for the scan.
+
+14. Under **Connect via integration runtime**, select the newly created Managed VNet Runtime.
+
+15. For **Credential**, select the credential you registered earlier, choose the appropriate collection for the scan, and select **Test connection**. When the connection succeeds, select **Continue**.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-scan.png" alt-text="Screenshot that shows how to create a new scan using Managed VNet and a SPN":::
+
+16. Follow the steps to select the appropriate scan rule and scope for your scan.
+
+17. Choose your scan trigger. You can set up a schedule or run the scan once.
+
+18. Review your scan and select **Save and run**.
+
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-scan-spn-run.png" alt-text="Screenshot that shows how to review and run a scan using a service principal":::
+
+## Next steps
+
+- [Manage data sources in Azure Purview](manage-data-sources.md)
purview Catalog Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link.md
Previously updated : 10/19/2021 Last updated : 01/10/2022 # Customer intent: As a Purview admin, I want to set up private endpoints for my Purview account, for secure access.
Use the following recommended checklist to perform deployment of Azure Purview a
||| |**Scenario 1** - [Connect to your Azure Purview and scan data sources privately and securely](./catalog-private-link-end-to-end.md) |You need to restrict access to your Azure Purview account only via a private endpoint, including access to Azure Purview Studio, Atlas APIs and scan data sources in on-premises and Azure behind a virtual network using self-hosted integration runtime ensuring end to end network isolation. (Deploy _account_, _portal_ and _ingestion_ private endpoints.) | |**Scenario 2** - [Connect privately and securely to your Purview account](./catalog-private-link-account-portal.md) | You need to enable access to your Azure Purview account, including access to _Azure Purview Studio_ and Atlas API through private endpoints. (Deploy _account_ and _portal_ private endpoints). |
+|**Scenario 3** - [Scan data source securely using Managed Virtual Network](./catalog-managed-vnet.md) | You need to scan Azure data sources securely, without having to manage a virtual network or a self-hosted integration runtime VM. (Deploy managed private endpoint for Purview, managed storage account and Azure data sources). |
+ ## Support matrix for Scanning data sources through _ingestion_ private endpoint
scheduler Migrate From Scheduler To Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/scheduler/migrate-from-scheduler-to-logic-apps.md
To learn more about exception handling, see [Handle errors and exceptions - RunA
**Q**: Do I have to back up or perform any other tasks before migrating my Scheduler jobs to Logic Apps? <br> **A**: As a best practice, always back up your work. Check that the logic apps you created are running as expected before deleting or disabling your Scheduler jobs.
+
+**Q**: What will happen to my scheduled Azure Web Jobs from Azure Scheduler? <br>
+**A**: WebJobs scheduled by using the method described in [Scheduling Web Jobs](https://github.com/projectkudu/kudu/wiki/WebJobs#scheduling-a-triggered-webjob) don't use Azure Scheduler internally: "For the schedule to work it requires the website to be configured as Always On and is not an Azure Scheduler but an internal implementation of a scheduler."
+ The only WebJobs that are affected are those that specifically use Azure Scheduler to run the WebJob by means of the WebJobs API. You can trigger these WebJobs from a Logic App by using the HTTP action.
**Q**: Is there a tool that can help me migrate my jobs from Scheduler to Logic Apps? <br> **A**: Each Scheduler job is unique, so a one-size-fits-all tool doesn't exist. However, based on your needs, you can [edit this script to migrate Azure Scheduler jobs to Azure Logic Apps](https://github.com/Azure/logicapps/tree/master/scripts/scheduler-migration).
If your Azure subscription has a paid support plan, you can create a technical s
## Next steps
-* [Create regularly running tasks and workflows with Azure Logic Apps](../connectors/connectors-native-recurrence.md)
+* [Create regularly running tasks and workflows with Azure Logic Apps](../connectors/connectors-native-recurrence.md)
search Search Howto Reindex https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-reindex.md
Title: Rebuild a search index
+ Title: Drop and rebuild an index
description: Add new elements, update existing elements or documents, or delete obsolete documents in a full rebuild or partial indexing to refresh an Azure Cognitive Search index.
Previously updated : 10/06/2021 Last updated : 01/10/2022
-# Rebuild an index in Azure Cognitive Search
+# Drop and rebuild an index in Azure Cognitive Search
-This article explains how to rebuild an Azure Cognitive Search index, the circumstances under which rebuilds are required, and recommendations for mitigating the impact of rebuilds on ongoing query requests.
+This article explains how to drop and rebuild an Azure Cognitive Search index, the circumstances under which rebuilds are required, and recommendations for mitigating the impact of rebuilds on ongoing query requests.
-A *rebuild* refers to dropping and recreating the physical data structures associated with an index, including all field-based inverted indexes. In Azure Cognitive Search, you cannot drop and recreate individual fields. To rebuild an index, all field storage must be deleted, recreated based on an existing or revised index schema, and then repopulated with data pushed to the index or pulled from external sources.
+A search index is a collection of physical folders and field-based inverted indexes of your content, distributed in shards across the number of partitions allocated to your search index. In Azure Cognitive Search, you cannot drop and recreate individual fields. If you want to fully rebuild a field, all field storage must be deleted, recreated based on an existing or revised index schema, and then repopulated with data pushed to the index or pulled from external sources.
-It's common to rebuild indexes during development when you are iterating over index design, but you might also need to rebuild a production-level index to accommodate structural changes, such as adding complex types or adding fields to suggesters.
-
-## "Rebuild" versus "refresh"
-
-Rebuild should not be confused with refreshing the contents of an index with new, modified, or deleted documents. Refreshing a search corpus is almost a given in every search app, with some scenarios requiring up-to-the-minute updates (for example, when a search corpus needs to reflect inventory changes in an online sales app).
-
-As long as you are not changing the structure of the index, you can refresh an index using the same techniques that you used to load the index initially:
-
-* For push-mode indexing, call [Add, Update or Delete Documents](/rest/api/searchservice/addupdate-or-delete-documents) to push the changes to an index.
-
-* For indexers, you can [schedule indexer execution](search-howto-schedule-indexers.md) and use change-tracking or timestamps to identify the delta. If updates must be reflected faster than what a scheduler can manage, you can use push-mode indexing instead.
+It's common to drop and rebuild indexes during development when you are iterating over index design. Most developers work with a small representative sample of their data to facilitate this process.
## Rebuild conditions
-Drop and recreate an index if any of the following conditions are true.
+The following table enumerates the conditions under which a rebuild is required.
| Condition | Description | |--|-|
Many other modifications can be made without impacting existing physical structu
+ Add a new field
+ Set the **retrievable** attribute on an existing field
-+ Set a **searchAnalyzer** on an existing field
-+ Add a new analyzer definition in an index
++ Update **searchAnalyzer** on a field having an existing **indexAnalyzer**
++ Add a new analyzer definition in an index (which can be applied to new fields)
+ Add, update, or delete scoring profiles
+ Add, update, or delete CORS settings
+ Add, update, or delete synonymMaps
++ Add, update, or delete semantic configurations

When you add a new field, existing indexed documents are given a null value for the new field. On a future data refresh, values from external source data replace the nulls added by Azure Cognitive Search. For more information on updating index content, see [Add, Update or Delete Documents](/rest/api/searchservice/addupdate-or-delete-documents).
search Search Howto Run Reset Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-run-reset-indexers.md
Last updated 01/07/2022
# Run or reset indexers, skills, or documents
-Indexers can be invoked in three ways: on demand, on a schedule, or when the [indexer is created](/rest/api/searchservice/create-indexer), assuming it's not created in "disabled" mode.
+Indexers can be invoked in three ways: on demand, on a schedule, or when the [indexer is created](/rest/api/searchservice/create-indexer), assuming it's not created in "disabled" mode. This article explains how to run indexers on demand, with and without a reset.
+
+## Resetting indexers
After the initial run, an indexer keeps track of which search documents have been indexed through an internal *high-water mark*. The marker is never exposed, but internally the indexer knows where it last stopped, so that it can pick up where it left off on the next run.
sentinel Kusto Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/kusto-overview.md
+
+ Title: Kusto Query Language in Microsoft Sentinel
+description: This article describes how Kusto Query Language is used in Microsoft Sentinel, and provides some basic familiarity with the language.
+++ Last updated : 01/06/2022++
+# Kusto Query Language in Microsoft Sentinel
+
+Kusto Query Language is the language you will use to work with and manipulate data in Microsoft Sentinel. The logs you feed into your workspace aren't worth much if you can't analyze them and get the important information hidden in all that data. Kusto Query Language has not only the power and flexibility to get that information, but the simplicity to help you get started quickly. If you have a background in scripting or working with databases, a lot of the content of this article will feel very familiar. If not, don't worry, as the intuitive nature of the language will quickly enable you to start writing your own queries and driving value for your organization.
+
+This article introduces the basics of Kusto Query Language, covering some of the most used functions and operators, which should address 75 to 80 percent of the queries you will write day to day. When you need more depth, or want to run more advanced queries, you can take advantage of the new **Advanced KQL for Microsoft Sentinel** workbook (see this [introductory blog post](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/advanced-kql-framework-workbook-empowering-you-to-become-kql/ba-p/3033766)). See also the [official Kusto Query Language documentation](/azure/data-explorer/kusto/query/) as well as a variety of online courses (such as [Pluralsight's](https://www.pluralsight.com/courses/kusto-query-language-kql-from-scratch)).
+
+## Background - Why Kusto Query Language?
+
+Microsoft Sentinel is built on top of the Azure Monitor service and it uses Azure Monitor's [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) workspaces to store all of its data. This data includes any of the following:
+- data ingested from external sources into predefined tables using Microsoft Sentinel data connectors.
+- data ingested from external sources into user-defined custom tables, using custom-created data connectors as well as some types of out-of-the-box connectors.
+- data created by Microsoft Sentinel itself, resulting from the analyses it creates and performs - for example, alerts, incidents, and UEBA-related information.
+- data uploaded to Microsoft Sentinel to assist with detection and analysis - for example, threat intelligence feeds and watchlists.
+
+[Kusto Query Language](/data-explorer/kusto/query/) was developed as part of the [Azure Data Explorer](/data-explorer/) service, and it's therefore optimized for searching through big-data stores in a cloud environment. Inspired by famed undersea explorer Jacques Cousteau (and pronounced accordingly "koo-STOH"), it's designed to help you dive deep into your oceans of data and explore their hidden treasures.
+
+Kusto Query Language is also used in Azure Monitor (and therefore in Microsoft Sentinel), including some additional Azure Monitor features, to retrieve, visualize, analyze, and parse data in Log Analytics data stores. In Microsoft Sentinel, you're using tools based on Kusto Query Language whenever you're visualizing and analyzing data and hunting for threats, whether in existing rules and workbooks, or in building your own.
+
+Because Kusto Query Language is a part of nearly everything you do in Microsoft Sentinel, a clear understanding of how it works will help you get that much more out of your SIEM.
+
+## What is a query?
+
+A Kusto Query Language query is a read-only request to process data and return results; it doesn't write any data. Queries operate on data that's organized into a hierarchy of [databases](/data-explorer/kusto/query/schema-entities/databases), [tables](/data-explorer/kusto/query/schema-entities/tables), and [columns](/data-explorer/kusto/query/schema-entities/columns), similar to SQL.
+
+Requests are stated in plain language and use a data-flow model designed to make the syntax easy to read, write, and automate. We'll see this in detail.
+
+Kusto Query Language queries are made up of *statements* separated by semicolons. There are many kinds of statements, but only two widely used types, which we'll discuss here:
+
+- [**tabular expression statements**](/azure/data-explorer/kusto/query/tabularexpressionstatements) are what we typically mean when we talk about queries; these are the actual body of the query. The important thing to know about tabular expression statements is that they accept a tabular input (a table or another tabular expression) and produce a tabular output. At least one of these is required. Most of the rest of this article will discuss this kind of statement.
+
+- [***let* statements**](/azure/data-explorer/kusto/query/letstatement) allow you to create and define variables and constants outside the body of the query, for easier readability and versatility. These are optional and depend on your particular needs. We'll address this kind of statement at the end of the article.
+
+## Demo environment
+
+You can practice Kusto Query Language statements - including the ones in this article - in a [Log Analytics demo environment](https://aka.ms/lademo) in the Azure portal. There is no charge to use this practice environment, but you do need an Azure account to access it.
+
+Explore the demo environment. Like Log Analytics in your production environment, it can be used in a number of ways:
+
+- **Choose a table on which to build a query.** From the default **Tables** tab (shown in the red rectangle at the upper left), select a table from the list of tables grouped by topics (shown at the lower left). Expand the topics to see the individual tables, and you can further expand each table to see all its fields (columns). Double-clicking on a table or a field name will place it at the point of the cursor in the query window. Type the rest of your query following the table name, as directed below.
+
+- **Find an existing query to study or modify.** Select the **Queries** tab (shown in the red rectangle at the upper left) to see a list of queries available out-of-the-box. Or, select **Queries** from the button bar at the top right. You can explore the queries that come with Microsoft Sentinel out-of-the-box. Double-clicking a query will place the whole query in the query window at the point of the cursor.
+
+ :::image type="content" source="./media/kusto-overview/portal-placement.png" alt-text="Shows the Log Analytics demo environment.":::
+
+Like in this demo environment, you can query and filter data in the Microsoft Sentinel **Logs** page. You can select a table and drill down to see columns. You can modify the default columns shown using the **Column chooser**, and you can set the default time range for queries. If the time range is explicitly defined in the query, the time filter will be unavailable (grayed out).
++
+## Query structure
+
+A good place to start learning Kusto Query Language is to understand the overall query structure. The first thing you'll notice when looking at a Kusto query is the use of the pipe symbol (` | `). The structure of a Kusto query starts with getting your data from a data source and then passing it along a "pipeline"; each step provides some level of processing and then passes the data to the next step. At the end of the pipeline, you get your final result. In effect, this is our pipeline:
+
+`Get Data | Filter | Summarize | Sort | Select`
+
+This concept of passing data down the pipeline makes for a very intuitive structure, as it is easy to create a mental picture of your data at each step.
+
+To illustrate this, let's take a look at the following query, which looks at Azure Active Directory (Azure AD) sign-in logs. As you read through each line, you can see the keywords that indicate what's happening to the data. We've included the relevant stage in the pipeline as a comment in each line.
+
+> [!NOTE]
+> You can add comments to any line in a query by preceding them with a double slash (` // `).
+
+```kusto
+SigninLogs // Get data
+| evaluate bag_unpack(LocationDetails) // Ignore this line for now; we'll come back to it at the end.
+| where RiskLevelDuringSignIn == 'none' // Filter
+ and TimeGenerated >= ago(7d) // Filter
+| summarize Count = count() by city // Summarize
+| sort by Count desc // Sort
+| take 5 // Select
+```
+
+Because the output of every step serves as the input for the following step, the order of the steps can determine the query's results and affect its performance. It's crucial that you order the steps according to what you want to get out of the query.
+
+> [!TIP]
+> - A good rule of thumb is to filter your data early, so you are only passing relevant data down the pipeline. This will greatly increase performance and ensure that you aren't accidentally including irrelevant data in summarization steps.
+> - This article will point out some other best practices to keep in mind. For a more complete list, see [query best practices](/azure/data-explorer/kusto/query/best-practices).
+
+Hopefully, you now have an appreciation for the overall structure of a query in Kusto Query Language. Now let's look at the actual query operators themselves, which are used to create a query.
+
+### Data types
+
+Before we get into the query operators, let's first take a quick look at [data types](/azure/data-explorer/kusto/query/scalar-data-types/). As in most languages, the data type determines what calculations and manipulations can be run against a value. For example, if you have a value that is of type *string*, you won't be able to perform arithmetic calculations against it.
+
+In Kusto Query Language, most of the data types follow standard conventions and have names you've probably seen before. The following table shows the full list:
+
+#### Data type table
+
+| Type | Additional name(s) | Equivalent .NET type |
+| - | | |
+| `bool` | `Boolean` | `System.Boolean` |
+| `datetime` | `Date` | `System.DateTime` |
+| `dynamic` | | `System.Object` |
+| `guid` | `uuid`, `uniqueid` | `System.Guid` |
+| `int` | | `System.Int32` |
+| `long` | | `System.Int64` |
+| `real` | `Double` | `System.Double` |
+| `string` | | `System.String` |
+| `timespan` | `Time` | `System.TimeSpan` |
+| `decimal` | | `System.Data.SqlTypes.SqlDecimal` |
+| | | |
+
+While most of the data types are standard, you might be less familiar with types like *dynamic*, *timespan*, and *guid*.
+
+***Dynamic*** has a structure very similar to JSON, but with one key difference: It can store Kusto Query Language-specific data types that traditional JSON cannot, such as a nested *dynamic* value, or *timespan*. Here's an example of a dynamic type:
+
+```json
+{
+    "countryOrRegion": "US",
+    "geoCoordinates": {
+        "longitude": -122.12094116210936,
+        "latitude": 47.68050003051758
+    },
+    "state": "Washington",
+    "city": "Redmond"
+}
+```
+
+***Timespan*** is a data type that refers to a measure of time such as hours, days, or seconds. Do not confuse *timespan* with *datetime*, which evaluates to an actual date and time, not a measure of time. The following table shows a list of *timespan* suffixes.
+
+#### *Timespan* suffixes
+
+| Suffix | Description |
+| -- | -- |
+| `d` | days |
+| `h` | hours |
+| `m` | minutes |
+| `s` | seconds |
+| `ms` | milliseconds |
+| `microsecond` | microseconds |
+| `tick` | 100 nanoseconds |
+| | |
+
+***Guid*** is a data type representing a 128-bit, globally unique identifier, which follows the standard format of [8]-[4]-[4]-[4]-[12], where each [number] represents the number of characters, and each character can be 0-9 or a-f.
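+
+As a quick sketch of these less familiar types, the `print` operator can emit one literal of each (`now()`, `ago()`, and `new_guid()` are built-in Kusto functions):
+
+```kusto
+print OneDay = 1d,              // timespan literal: one day
+      NinetyMinutes = 1h + 30m, // timespans support arithmetic
+      CurrentTime = now(),      // datetime: the current UTC time
+      LastWeek = ago(7d),       // datetime: now() minus a timespan
+      RandomId = new_guid()     // guid value
+```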
+
+> [!NOTE]
+> Kusto Query Language has both tabular and scalar operators. Throughout the rest of this article, if you simply see the word "operator," you can assume it means tabular operator, unless otherwise noted.
+
+## Getting, limiting, sorting, and filtering data
+
+The core vocabulary of Kusto Query Language - the foundation that will allow you to accomplish the overwhelming majority of your tasks - is a collection of operators for filtering, sorting, and selecting your data. The remaining tasks you will need to do will require you to stretch your knowledge of the language to meet your more advanced needs. Let's expand a bit on some of the commands we used [in our above example](#query-structure) and look at `take`, `sort`, and `where`.
+
+For each of these operators, we'll examine its use in our previous *SigninLogs* example, and learn either a useful tip or a best practice.
+
+### Getting data
+
+The first line of any basic query specifies which table you want to work with. In the case of Microsoft Sentinel, this will likely be the name of a log type in your workspace, such as *SigninLogs*, *SecurityAlert*, or *CommonSecurityLog*. For example:
+
+`SigninLogs`
+
+Note that in Kusto Query Language, log names are case sensitive, so `SigninLogs` and `signinLogs` will be interpreted differently. Take care when choosing names for your custom logs, so they are easily identifiable and not too similar to another log.
+
+### Limiting data: *take* / *limit*
+
+The [*take*](/azure/data-explorer/kusto/query/takeoperator) operator (and the identical [limit](/azure/data-explorer/kusto/query/limitoperator) operator) is used to limit your results by returning only a given number of rows. It's followed by an integer that specifies the number of rows to return. Typically, it's used at the end of a query after you have determined your sort order, and in such a case it will return the given number of rows at the top of the sorted order.
+
+Using `take` earlier in the query can be useful for testing a query, when you don't want to return large datasets. However, if you place the `take` operation before any `sort` operations, `take` will return rows selected at random - and possibly a different set of rows every time the query is run. Here's an example of using take:
+
+```kusto
+SigninLogs
+ | take 5
+```
+
+> [!TIP]
+> When working on a brand-new query where you may not know what the query will look like, it can be useful to put a `take` statement at the beginning to artificially limit your dataset for faster processing and experimentation. Once you are happy with the full query, you can remove the initial `take` step.
+
+### Sorting data: *sort* / *order*
+
+The [*sort*](/azure/data-explorer/kusto/query/sortoperator) operator (and the identical [order](/azure/data-explorer/kusto/query/orderoperator) operator) is used to sort your data by a specified column. In the following example, we ordered the results by *TimeGenerated* and set the order direction to descending with the *desc* parameter, placing the highest values first; for ascending order we would use *asc*.
+
+> [!NOTE]
+> The default direction for sorts is descending, so technically you only have to specify the direction if you want to sort in ascending order. However, specifying the sort direction in either case makes your query more readable.
+
+```kusto
+SigninLogs
+| sort by TimeGenerated desc
+| take 5
+```
+
+As we mentioned, we put the `sort` operator before the `take` operator. We need to sort first to make sure we get the appropriate five records.
++
+#### *Top*
+
+The [*top*](/azure/data-explorer/kusto/query/topoperator) operator allows us to combine the `sort` and `take` operations into a single operator:
+
+```kusto
+SigninLogs
+| top 5 by TimeGenerated desc
+```
+
+In cases where two or more records have the same value in the column you are sorting by, you can add more columns to sort by. Add extra sorting columns in a comma-separated list, located after the first sorting column, but before the sort order keyword. For example:
+
+```kusto
+SigninLogs
+| sort by TimeGenerated, Identity desc
+| take 5
+```
+
+Now, if *TimeGenerated* is the same between multiple records, it will then try to sort by the value in the *Identity* column.
+
+> [!NOTE]
+> **When to use `sort` and `take`, and when to use `top`**
+>
+> - If you're only sorting on one field, use `top`, as it provides better performance than the combination of `sort` and `take`.
+>
+> - If you need to sort on more than one field (like in the last example above), `top` can't do that, so you must use `sort` and `take`.
+
+### Filtering data: *where*
+
+The [*where*](/azure/data-explorer/kusto/query/whereoperator) operator is arguably the most important operator, because it's the key to making sure you are only working with the subset of data that is relevant to your scenario. You should do your best to filter your data as early in the query as possible because doing so will improve query performance by reducing the amount of data that needs to be processed in subsequent steps; it also ensures that you are only performing calculations on the desired data. See this example:
+
+```kusto
+SigninLogs
+| where TimeGenerated >= ago(7d)
+| sort by TimeGenerated, Identity desc
+| take 5
+```
+
+The `where` operator specifies a variable, a comparison (*scalar*) operator, and a value. In our case, we used `>=` to denote that the value in the *TimeGenerated* column needs to be greater than (that is, later than) or equal to seven days ago.
+
+There are two types of comparison operators in Kusto Query Language: string and numerical. The following table shows the full list of numerical operators, followed by a short example that uses the `in` operator:
+
+#### Numerical operators
+
+| Operator | Description |
+| -- | -- |
+| `+` | Addition |
+| `-` | Subtraction |
+| `*` | Multiplication |
+| `/` | Division |
+| `%` | Modulo |
+| `<` | Less than |
+| `>` | Greater than |
+| `==` | Equal to |
+| `!=` | Not equal to |
+| `<=` | Less than or equal to |
+| `>=` | Greater than or equal to |
+| `in` | Equal to one of the elements |
+| `!in` | Not equal to any of the elements |
+|
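+
+For example, `in` lets you match a column against a set of values in a single filter. Here's a minimal sketch, assuming the standard *SecurityEvent* table is available in your workspace:
+
+```kusto
+SecurityEvent
+| where TimeGenerated >= ago(1d)
+    and EventID in (4624, 4625)   // successful and failed logon events
+| take 10
+```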
+
+The list of string operators is much longer, because it includes permutations for case sensitivity, substring locations, prefixes, suffixes, and much more; a short sketch of a few of them follows the examples below. The `==` operator is both a numeric and a string operator, meaning it can be used for both numbers and text. For example, both of the following statements would be valid `where` statements:
+
+- `| where ResultType == 0`
+- `| where Category == 'SignInLogs'`
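+
+Here's that sketch of a few common string operators, assuming the standard *SigninLogs* columns (the domain name is hypothetical): `has` matches a whole term, `endswith` matches a suffix, and `=~` is a case-insensitive equality check.
+
+```kusto
+SigninLogs
+| where TimeGenerated >= ago(7d)
+    and AppDisplayName has 'portal'               // whole-term match, case-insensitive
+    and UserPrincipalName endswith 'contoso.com'  // suffix match (hypothetical domain)
+    and ConditionalAccessStatus =~ 'Success'      // case-insensitive equality
+| take 5
+```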
+
+> [!TIP]
+> **Best Practice:** In most cases, you will probably want to filter your data by more than one column, or filter the same column in more than one way. In these instances, there are two best practices you should keep in mind.
+>
+> You can combine multiple `where` statements into a single step by using the *and* keyword. For example:
+>
+> ```kusto
+> SigninLogs
+> | where Resource == ResourceGroup
+> and TimeGenerated >= ago(7d)
+> ```
+>
+> When you have multiple filters joined into a single `where` statement using the *and* keyword, like above, you will get better performance by putting filters that only reference a single column first. So, a better way to write the above query would be:
+>
+> ```kusto
+> SigninLogs
+> | where TimeGenerated >= ago(7d)
+> and Resource == ResourceGroup
+> ```
+>
+> In this example, the first filter mentions a single column (*TimeGenerated*), while the second references two columns (*Resource* and *ResourceGroup*).
+
+## Summarizing data
+
+[*Summarize*](/azure/data-explorer/kusto/query/summarizeoperator) is one of the most important tabular operators in Kusto Query Language, but it also is one of the more complex operators to learn if you are new to query languages in general. The job of `summarize` is to take in a table of data and output a *new table* that is aggregated by one or more columns.
+
+### Structure of the summarize statement
+
+The basic structure of a `summarize` statement is as follows:
+
+`| summarize <aggregation> by <column>`
+
+For example, the following would return the count of records for each *CounterName* value in the *Perf* table:
+
+```kusto
+Perf
+| summarize count() by CounterName
+```
++
+Because the output of `summarize` is a new table, any columns not explicitly specified in the `summarize` statement will **not** be passed down the pipeline. To illustrate this concept, consider this example:
+
+```kusto
+Perf
+| project ObjectName, CounterValue, CounterName
+| summarize count() by CounterName
+| sort by ObjectName asc
+```
+
+On the second line, we are specifying that we only care about the columns *ObjectName*, *CounterValue*, and *CounterName*. We then summarized to get the record count by *CounterName* and finally, we attempt to sort the data in ascending order based on the *ObjectName* column. Unfortunately, this query will fail with an error (indicating that the *ObjectName* is unknown) because when we summarized, we only included the *Count* and *CounterName* columns in our new table. To avoid this error, we can simply add *ObjectName* to the end of our `summarize` step, like this:
+
+```kusto
+Perf
+| project ObjectName, CounterValue , CounterName
+| summarize count() by CounterName, ObjectName
+| sort by ObjectName asc
+```
+
+The way to read the `summarize` line in your head would be: "summarize the count of records by *CounterName*, and group by *ObjectName*". You can continue adding columns, separated by commas, to the end of the `summarize` statement.
++
+Building on the previous example, if we want to aggregate multiple columns at the same time, we can achieve this by adding aggregations to the `summarize` operator, separated by commas. In the example below, we are getting not only a count of all the records but also a sum of the values in the *CounterValue* column across all records (that match any filters in the query):
+
+```kusto
+Perf
+| project ObjectName, CounterValue , CounterName
+| summarize count(), sum(CounterValue) by CounterName, ObjectName
+| sort by ObjectName asc
+```
++
+#### Renaming aggregated columns
+
+This seems like a good time to talk about column names for these aggregated columns. [At the start of this section](#summarizing-data), we said the `summarize` operator takes in a table of data and produces a new table, and only the columns you specify in the `summarize` statement will continue down the pipeline. Therefore, if you were to run the above example, the resulting columns for our aggregation would be *count_* and *sum_CounterValue*.
+
+The Kusto engine automatically creates a column name without us having to be explicit, but you'll often prefer your new column to have a friendlier name. You can easily rename your column in the `summarize` statement by specifying a new name, followed by ` = ` and the aggregation, like so:
+
+```kusto
+Perf
+| project ObjectName, CounterValue , CounterName
+| summarize Count = count(), CounterSum = sum(CounterValue) by CounterName, ObjectName
+| sort by ObjectName asc
+```
+
+Now, our summarized columns will be named *Count* and *CounterSum*.
++
+There is much more to the `summarize` operator than we can cover here, but you should invest the time to learn it because it is a key component to any data analysis you plan to perform on your Microsoft Sentinel data.
+
+### Aggregation reference
+
+There are many aggregation functions, but some of the most commonly used are `sum()`, `count()`, and `avg()`. Here's a partial list (see the [full list](/azure/data-explorer/kusto/query/aggregation-functions)), followed by a combined example:
+
+#### Aggregation functions
+
+| Function | Description |
+| | - |
+| `arg_max()` | Returns one or more expressions when argument is maximized |
+| `arg_min()` | Returns one or more expressions when argument is minimized |
+| `avg()` | Returns average value across the group |
+| `buildschema()` | Returns the minimal schema that admits all values of the dynamic input |
+| `count()` | Returns count of the group |
+| `countif()` | Returns count with the predicate of the group |
+| `dcount()` | Returns approximate distinct count of the group elements |
+| `make_bag()` | Returns a property bag of dynamic values within the group |
+| `make_list()` | Returns a list of all the values within the group |
+| `make_set()` | Returns a set of distinct values within the group |
+| `max()` | Returns the maximum value across the group |
+| `min()` | Returns the minimum value across the group |
+| `percentiles()` | Returns the percentile approximate of the group |
+| `stdev()` | Returns the standard deviation across the group |
+| `sum()` | Returns the sum of the elements within the group |
+| `take_any()` | Returns random non-empty value for the group |
+| `variance()` | Returns the variance across the group |
+|
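+
+Here's the combined example mentioned above: a sketch that counts sign-ins, counts distinct applications, and collects the set of IP addresses per user over the last week (it assumes the standard *SigninLogs* columns):
+
+```kusto
+SigninLogs
+| where TimeGenerated >= ago(7d)
+| summarize SigninCount = count(),
+            DistinctApps = dcount(AppDisplayName),
+            IPAddresses = make_set(IPAddress)
+    by UserPrincipalName
+| top 10 by SigninCount desc
+```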
++
+## Selecting: adding and removing columns
+
+As you start working more with queries, you may find that you have more information than you need on your subjects (that is, too many columns in your table). Or you might need more information than you have (that is, you need to add a new column that will contain the results of analysis of other columns). Let's look at a few of the key operators for column manipulation.
+
+### *Project* and *project-away*
+
+[*Project*](/azure/data-explorer/kusto/query/projectoperator) is roughly equivalent to many languages' *select* statements. It allows you to choose which columns to keep. The order of the columns returned will match the order of the columns you list in your `project` statement, as shown in this example:
+
+```kusto
+Perf
+| project ObjectName, CounterValue, CounterName
+```
++
+As you can imagine, when you are working with very wide datasets, you may have lots of columns you want to keep, and specifying them all by name would require a lot of typing. For those cases, you have [*project-away*](/azure/data-explorer/kusto/query/projectawayoperator), which lets you specify which columns to remove, rather than which ones to keep, like so:
+
+```kusto
+Perf
+| project-away MG, _ResourceId, Type
+```
+
+> [!TIP]
+> It can be useful to use `project` in two locations in your queries, at the beginning and again at the end. Using `project` early in your query can help improve performance by stripping away large chunks of data you don't need to pass down the pipeline. Using it again at the end lets you get rid of any columns that may have been created in previous steps and aren't needed in your final output.
+>
+### *Extend*
+
+[*Extend*](/azure/data-explorer/kusto/query/extendoperator) is used to create a new calculated column. This can be useful when you want to perform a calculation against existing columns and see the output for every row. Let's look at a simple example where we create a new column called *KBytes*, which we calculate by multiplying the MB value (in the existing *Quantity* column) by 1,024.
+
+```kusto
+Usage
+| where QuantityUnit == 'MBytes'
+| extend KBytes = Quantity * 1024
+| project ResourceUri, MBytes=Quantity, KBytes
+```
+
+On the final line, in our `project` statement, we renamed the *Quantity* column to *MBytes*, so we can easily tell which unit of measure is relevant to each column.
++
+It's worth noting that `extend` also works with already calculated columns. For example, we can add one more column called *Bytes* that is calculated from *KBytes*:
+
+```kusto
+Usage
+| where QuantityUnit == 'MBytes'
+| extend KBytes = Quantity * 1024
+| extend Bytes = KBytes * 1024
+| project ResourceUri, MBytes=Quantity, KBytes, Bytes
+```
++
+## Joining tables
+
+Much of your work in Microsoft Sentinel can be carried out by using a single log type, but there are times when you will want to correlate data together or perform a lookup against another set of data. Like most query languages, Kusto Query Language offers a few operators used to perform various types of joins. In this section, we will look at the most-used operators, `union` and `join`.
+
+### *Union*
+
+[*Union*](/azure/data-explorer/kusto/query/unionoperator?pivots=azuremonitor) simply takes two or more tables and returns all the rows. For example:
+
+```kusto
+OfficeActivity
+| union SecurityEvent
+```
+
+This would return all rows from both the *OfficeActivity* and *SecurityEvent* tables. `Union` offers a few parameters that can be used to adjust how the union behaves. Two of the most useful are *withsource* and *kind*:
+
+```kusto
+OfficeActivity
+| union withsource = SourceTable kind = inner SecurityEvent
+```
+
+The *withsource* parameter lets you specify the name of a new column whose value in a given row will be the name of the table from which the row came. In the example above, we named the column SourceTable, and depending on the row, the value will either be *OfficeActivity* or *SecurityEvent*.
+
+The other parameter we specified was *kind*, which has two options: *inner* or *outer*. In the example above, we specified *inner*, which means the only columns that will be kept during the union are those that exist in both tables. Alternatively, if we had specified *outer* (which is the default value), then all columns from both tables would be returned.
+
+### *Join*
+
+[*Join*](/azure/data-explorer/kusto/query/joinoperator?pivots=azuremonitor) works similarly to `union`, except instead of joining tables to make a new table, we are joining *rows* to make a new table. Like most database languages, there are multiple types of joins you can perform. The general syntax for a `join` is:
+
+```kusto
+T1
+| join kind = <join type>
+(
+ T2
+) on $left.<T1Column> == $right.<T2Column>
+```
+
+After the `join` operator, we specify the *kind* of join we want to perform followed by an open parenthesis. Within the parentheses is where you specify the table you want to join, as well as any other query statements on *that* table you wish to add. After the closing parenthesis, we use the *on* keyword followed by our left (*$left.\<columnName>* keyword) and right (*$right.\<columnName>*) columns separated with the == operator. Here's an example of an *inner join*:
+
+```kusto
+OfficeActivity
+| where TimeGenerated >= ago(1d)
+ and LogonUserSid != ''
+| join kind = inner (
+ SecurityEvent
+ | where TimeGenerated >= ago(1d)
+ and SubjectUserSid != ''
+) on $left.LogonUserSid == $right.SubjectUserSid
+```
+
+> [!NOTE]
+> If both tables have the same name for the columns on which you are performing a join, you don't need to use *$left* and *$right*; instead, you can just specify the column name. Using *$left* and *$right*, however, is more explicit and generally considered to be a good practice.
+
+For your reference, the following table shows a list of available types of joins; a sketch of a `leftanti` join follows the table.
+
+#### Types of Joins
+
+| Join Type | Description |
+| - | - |
+| `inner` | Returns a single row for each combination of matching rows from both tables. |
+| `innerunique` | Returns rows from the left table with distinct values in the linked field that have a match in the right table. <br>This is the default join type when the kind isn't specified. |
+| `leftsemi` | Returns all records from the left table that have a match in the right table. <br>Only columns from the left table will be returned. |
+| `rightsemi` | Returns all records from the right table that have a match in the left table. <br>Only columns from the right table will be returned. |
+| `leftanti`/<br>`leftantisemi` | Returns all records from the left table that don't have a match in the right table. <br>Only columns from the left table will be returned. |
+| `rightanti`/<br>`rightantisemi` | Returns all records from the right table that don't have a match in the left table. <br>Only columns from the right table will be returned. |
+| `leftouter` | Returns all records from the left table. For records that have no match in the right table, cell values will be null. |
+| `rightouter` | Returns all records from the right table. For records that have no match in the left table, cell values will be null. |
+| `fullouter` | Returns all records from both left and right tables, matching or not. <br>Unmatched values will be null. |
+|
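+
+Here's the sketch mentioned above: changing the *kind* in the earlier example to `leftanti` returns only the Office activity rows whose user SID has no matching security event (and only the columns from the left table):
+
+```kusto
+OfficeActivity
+| where TimeGenerated >= ago(1d)
+    and LogonUserSid != ''
+| join kind = leftanti (
+    SecurityEvent
+    | where TimeGenerated >= ago(1d)
+        and SubjectUserSid != ''
+) on $left.LogonUserSid == $right.SubjectUserSid
+```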
+
+> [!TIP]
+> **It's a best practice** to have your smallest table on the left. In some cases, following this rule can give you huge performance benefits, depending on the types of joins you are performing and the size of the tables.
+
+## Evaluate
+
+You may remember that back [in the first example](#query-structure), we saw the [*evaluate*](/azure/data-explorer/kusto/query/evaluateoperator) operator on one of the lines. The `evaluate` operator is less commonly used than the ones we have touched on previously. However, knowing how the `evaluate` operator works is well worth your time. Once more, here is that first query, where you will see `evaluate` on the second line.
+
+```kusto
+SigninLogs
+| evaluate bag_unpack(LocationDetails)
+| where RiskLevelDuringSignIn == 'none'
+ and TimeGenerated >= ago(7d)
+| summarize Count = count() by city
+| sort by Count desc
+| take 5
+```
+
+This operator allows you to invoke available plugins (basically built-in functions). Many of these plugins are focused around data science, such as [*autocluster*](/azure/data-explorer/kusto/query/autoclusterplugin), [*diffpatterns*](/azure/data-explorer/kusto/query/diffpatternsplugin), and [*sequence_detect*](/azure/data-explorer/kusto/query/sequence-detect-plugin), allowing you to perform advanced analysis and discover statistical anomalies and outliers.
+
+The plugin used in the above example was called [*bag_unpack*](/azure/data-explorer/kusto/query/bag-unpackplugin), and it makes it very easy to take a chunk of dynamic data and convert it to columns. Remember, [dynamic data](#data-type-table) is a data type that looks very similar to JSON, as shown in this example:
+
+```json
+{
+    "countryOrRegion": "US",
+    "geoCoordinates": {
+        "longitude": -122.12094116210936,
+        "latitude": 47.68050003051758
+    },
+    "state": "Washington",
+    "city": "Redmond"
+}
+```
+
+In this case, we wanted to summarize the data by city, but *city* is contained as a property within the *LocationDetails* column. To use the *city* property in our query, we had to first convert it to a column using *bag_unpack*.
+
+Going back to our original pipeline steps, we saw this:
+
+`Get Data | Filter | Summarize | Sort | Select`
+
+Now that we've considered the `evaluate` operator, we can see that it represents a new stage in the pipeline, which now looks like this:
+
+`Get Data | `***`Parse`***` | Filter | Summarize | Sort | Select`
+
+There are many other examples of operators and functions that can be used to parse data sources into a more readable and manipulatable format. You can learn about them - and the rest of the Kusto Query Language - in the [full documentation](#more-resources) and in the [workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/advanced-kql-framework-workbook-empowering-you-to-become-kql/ba-p/3033766).
+
+## Let statements
+
+Now that we've covered many of the major operators and data types, let's wrap up with the [*let* statement](/azure/data-explorer/kusto/query/letstatement), which is a great
+way to make your queries easier to read, edit, and maintain.
+
+*Let* allows you to create and set a variable, or to assign a name to an expression. This expression could be a single value, but it could also be a whole query. Here's a simple example:
+
+```kusto
+let aWeekAgo = ago(7d);
+SigninLogs
+| where TimeGenerated >= aWeekAgo
+```
+
+Here, we specified a name of *aWeekAgo* and set it to be equal to the output of a *timespan* function, which returns a *datetime* value. We then terminate the *let* statement with a semicolon. Now we have a new variable called *aWeekAgo* that can be used anywhere in our query.
+
+As we just mentioned, you can use a *let* statement to take a whole query and give the result a name. Since query results, being tabular expressions, can be used as the inputs of queries, you can treat this named result as a table for the purposes of running another query on it. Here's a slight modification to the previous example:
+
+```kusto
+let aWeekAgo = ago(7d);
+let getSignins = SigninLogs
+| where TimeGenerated >= aWeekAgo;
+getSignins
+```
+
+In this case, we created a second *let* statement, where we wrapped our whole query into a new variable called *getSignins*. Just like before, we terminate the second *let* statement with a semicolon. Then we call the variable on the final line, which will run the query. Notice that we were able to use *aWeekAgo* in the second *let* statement. This is because we specified it on the previous line; if we were to swap the *let* statements so that *getSignins* came first, we would get an error.
+
+Now we can use *getSignins* as the basis of another query (in the same window):
+
+```kusto
+let aWeekAgo = ago(7d);
+let getSignins = SigninLogs
+| where TimeGenerated >= aWeekAgo;
+getSignins
+| where Level >= 3
+| project IPAddress, UserDisplayName, Level
+```
+
+*Let* statements give you more power and flexibility in helping to organize your queries. *Let* can define scalar and tabular values as well as create user-defined functions. They truly come in handy when you are organizing more complex queries that may be doing multiple joins.
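+
+For example, here's a minimal sketch of a user-defined function created with *let*; the function name and parameter are illustrative only:
+
+```kusto
+let signinsSince = (startTime: datetime) {
+    SigninLogs
+    | where TimeGenerated >= startTime
+};
+signinsSince(ago(3d))
+| project TimeGenerated, UserPrincipalName, AppDisplayName
+| take 5
+```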
+
+## Next steps
+
+While this article has barely scratched the surface, you've now got the necessary foundation, and we've covered the parts you'll be using most often to get your work done in Microsoft Sentinel.
+
+### Advanced KQL for Microsoft Sentinel workbook
+
+Take advantage of a Kusto Query Language workbook right in Microsoft Sentinel itself - the **Advanced KQL for Microsoft Sentinel** workbook. It gives you step-by-step help and examples for many of the situations you're likely to encounter during your day-to-day security operations, and also points you to lots of ready-made, out-of-the-box examples of analytics rules, workbooks, hunting rules, and more elements that use Kusto queries. Launch this workbook from the **Workbooks** blade in Microsoft Sentinel.
+
+[Advanced KQL Framework Workbook - Empowering you to become KQL-savvy](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/advanced-kql-framework-workbook-empowering-you-to-become-kql/ba-p/3033766) is an excellent blog post that shows you how to use this workbook.
+
+### More resources
+
+See [this collection of learning, training, and skilling resources](kusto-resources.md) for broadening and deepening your knowledge of Kusto Query Language.
sentinel Kusto Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/kusto-resources.md
+
+ Title: Useful resources for working with Kusto Query Language in Microsoft Sentinel
+description: This document provides you with a list of useful resources for working with Kusto Query Language in Microsoft Sentinel.
++ Last updated : 01/10/2022++++
+# Useful resources for working with Kusto Query Language in Microsoft Sentinel
++
+Microsoft Sentinel uses Azure Monitor's Log Analytics environment and the Kusto Query Language (KQL) to build the queries that undergird much of Sentinel's functionality, from analytics rules to workbooks to hunting. This article lists resources that can help you skill up in working with Kusto Query Language, which will give you more tools to work with Microsoft Sentinel, whether as a security engineer or analyst.
+
+## Microsoft Docs and Learn
+
+### Microsoft Sentinel documentation
+- [Kusto Query Language in Microsoft Sentinel](kusto-overview.md)
+
+### Azure Monitor documentation
+- [Tutorial: Use Kusto queries](/azure/data-explorer/kusto/query/tutorial?pivots=azuremonitor)
+- [Get started with KQL queries](../azure-monitor/logs/get-started-queries.md)
+- [Query best practices](/azure/data-explorer/kusto/query/best-practices)
+
+### Reference guides
+- [KQL quick reference guide](/azure/data-explorer/kql-quick-reference)
+- [SQL to Kusto cheat sheet](/azure/data-explorer/kusto/query/sqlcheatsheet)
+- [Splunk to Kusto Query Language map](/azure/data-explorer/kusto/query/splunk-cheat-sheet)
+
+### Microsoft Sentinel Learn modules
+- [Write your first query with Kusto Query Language](/learn/modules/write-first-query-kusto-query-language/)
+- [Learning path SC-200: Create queries for Microsoft Sentinel using Kusto Query Language (KQL)](/learn/paths/sc-200-utilize-kql-for-azure-sentinel/)
+
+## Other resources
+
+### Microsoft TechCommunity blogs
+- [Advanced KQL Framework Workbook - Empowering you to become KQL-savvy](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/advanced-kql-framework-workbook-empowering-you-to-become-kql/ba-p/3033766) (includes webinar)
+- [Using KQL functions to speed up analysis in Azure Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/using-kql-functions-to-speed-up-analysis-in-azure-sentinel/ba-p/712381) (advanced level)
+- Ofer Shezaf's blog series on correlation rules using KQL operators:
+ - [Azure Sentinel correlation rules: Active Lists out, make_list() in: the AAD/AWS correlation example](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-correlation-rules-active-lists-out-make-list-in/ba-p/1029225)
+ - [Azure Sentinel correlation rules: the join KQL operator](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-correlation-rules-the-join-kql-operator/ba-p/1041500)
+ - [Implementing Lookups in Azure Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/implementing-lookups-in-azure-sentinel/ba-p/1091306)
+ - [Approximate, partial and combined lookups in Azure Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/approximate-partial-and-combined-lookups-in-azure-sentinel/ba-p/1393795)
+
+### Training and skilling resources
+- [Rod Trent's Must Learn KQL series](https://github.com/rod-trent/MustLearnKQL)
+- [Pluralsight training: Kusto Query Language from Scratch](https://www.pluralsight.com/courses/kusto-query-language-kql-from-scratch)
+- [Log Analytics demo environment](https://aka.ms/LADemo)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get certified!](/learn/paths/security-ops-sentinel/)
+
+> [!div class="nextstepaction"]
+> [Read customer use case stories](https://customers.microsoft.com/en-us/search?sq=%22Azure%20Sentinel%20%22&ff=&p=0&so=story_publish_date%20desc)
sentinel Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/resources.md
This article lists resources that can help you get more information about workin
Microsoft Sentinel uses Azure Monitor Log Analytics's Kusto Query Language (KQL) to build queries. For more information, see: -- [KQL concepts](/azure/data-explorer/kusto/concepts/)-- [KQL queries](/azure/data-explorer/kusto/query/)-- [KQL quick reference guide](/azure/data-explorer/kql-quick-reference).-- [Get started with KQL queries](../azure-monitor/logs/get-started-queries.md)
+- [Kusto Query Language in Microsoft Sentinel](kusto-overview.md)
+- [Useful resources for working with Kusto Query Language in Microsoft Sentinel](kusto-resources.md)
## Microsoft Sentinel templates for data to monitor
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
> > You can also contribute! Join us in the [Microsoft Sentinel Threat Hunters GitHub community](https://github.com/Azure/Azure-Sentinel/wiki).
+## January 2022
+
+### Kusto Query Language workbook and tutorial
+
+Kusto Query Language is used in Microsoft Sentinel to search, analyze, and visualize data, as the basis for detection rules, workbooks, hunting, and more.
+
+The new **Advanced KQL for Microsoft Sentinel** interactive workbook is designed to help you improve your Kusto Query Language proficiency by taking a use case-driven approach based on:
+
+- Grouping Kusto Query Language operators/commands by category for easy navigation.
+- Listing the possible tasks a user would perform with Kusto Query Language in Microsoft Sentinel. Each task includes the operators used, sample queries, and use cases.
+- Compiling a list of existing content found in Microsoft Sentinel (analytics rules, hunting queries, workbooks, and so on) to provide additional references specific to the operators you want to learn.
+- Allowing you to execute sample queries on the fly, within your own environment or in "LA Demo", a public [Log Analytics demo environment](https://aka.ms/lademo). Try the sample Kusto Query Language statements in real time without having to navigate away from the workbook.
+
+Accompanying the new workbook is an explanatory [blog post](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/advanced-kql-framework-workbook-empowering-you-to-become-kql/ba-p/3033766), as well as a new [introduction to Kusto Query Language](kusto-overview.md) and a [collection of learning and skilling resources](kusto-resources.md) in the Microsoft Sentinel documentation.
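To give a sense of the kind of sample statement you can run on the fly, here is a minimal Kusto Query Language sketch; it assumes the `SigninLogs` table is connected in your workspace, and the 24-hour window and top-10 cutoff are arbitrary choices for illustration.

```kusto
// Minimal sketch (assumes the SigninLogs table is connected in your workspace):
// count sign-ins per user over the last 24 hours and list the apps each user signed in to.
SigninLogs
| where TimeGenerated > ago(24h)
| summarize SigninCount = count(), Apps = make_set(AppDisplayName) by UserPrincipalName
| top 10 by SigninCount desc
```

Running a statement like this in the LA Demo environment first is a low-risk way to experiment before adapting it to your own data.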
+ ## December 2021 - [IoT OT Threat Monitoring with Defender for IoT solution](#iot-ot-threat-monitoring-with-defender-for-iot-solution-public-preview)
If you're looking for items older than six months, you'll find them in the [Arch
### IoT OT Threat Monitoring with Defender for IoT solution (Public preview)
-The new **IoT OT Threat Monitoring with Defender for IoT** solution available in the [Microsoft Sentinel content hub](sentinel-solutions-catalog.md#microsoft) provides further support for the Microsoft Sentinel integration with Microsoft Defender for IoT, bridging gaps between IT and OT security challenges, and empowering SOC teams with enhanced abilities to efficiently and effectively detect and respond to OT threats.
+The new **IoT OT Threat Monitoring with Defender for IoT** solution available in the [Microsoft Sentinel content hub](sentinel-solutions-catalog.md#microsoft) provides further support for the Microsoft Sentinel integration with Microsoft Defender for IoT, bridging gaps between IT and OT security challenges, and empowering SOC teams with enhanced abilities to efficiently and effectively detect and respond to OT threats.
For more information, see [Tutorial: Integrate Microsoft Sentinel and Microsoft Defender for IoT](iot-solution.md).
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-support-matrix.md
Windows 7 (x64) with SP1 onwards | From version [9.30](https://support.microsoft
**Operating system** | **Details** |
-Red Hat Enterprise Linux | 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6,[7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/), [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609/), [8.3](https://support.microsoft.com/help/4597409/), 8.4, 8.5
+Red Hat Enterprise Linux | 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/), [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609/), [8.3](https://support.microsoft.com/help/4597409/), 8.4 (4.18.0-305.30.1.el8_4.x86_64 or higher), 8.5 (4.18.0-348.5.1.el8_5.x86_64 or higher)
CentOS | 6.5, 6.6, 6.7, 6.8, 6.9, 6.10 </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, [7.8](https://support.microsoft.com/help/4564347/), [7.9 pre-GA version](https://support.microsoft.com/help/4578241/), 7.9 GA version is supported from 9.37 hot fix patch** </br> 8.0, 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4, 8.5
Ubuntu 14.04 LTS Server | Includes support for all 14.04.*x* versions; [Supported kernel versions](#supported-ubuntu-kernel-versions-for-azure-virtual-machines);
Ubuntu 16.04 LTS Server | Includes support for all 16.04.*x* versions; [Supported kernel version](#supported-ubuntu-kernel-versions-for-azure-virtual-machines)<br/><br/> Ubuntu servers using password-based authentication and sign-in, and the cloud-init package to configure cloud VMs, might have password-based sign-in disabled on failover (depending on the cloud-init configuration). Password-based sign-in can be re-enabled on the virtual machine by resetting the password from the Support > Troubleshooting > Settings menu of the failed-over VM in the Azure portal.
DRBD | Disks that are part of a DRBD setup are not supported. |
LRS | Supported |
GRS | Supported |
RA-GRS | Supported |
-ZRS | Supported | ZRS Managed disks are supported. If the source VM has one or more ZRS managed disks, Site Recovery ensures the target VM also has the same configuration of disks. If the source managed disks are of a different type, they cannot be converted to ZRS managed disks at target, and vice versa.
+ZRS | Not supported |
Cool and Hot Storage | Not supported | Virtual machine disks are not supported on cool and hot storage
Azure Storage firewalls for virtual networks | Supported | If you restrict virtual network access to storage accounts, enable [Allow trusted Microsoft services](../storage/common/storage-network-security.md#exceptions).
General purpose V2 storage accounts (Both Hot and Cool tier) | Supported | Transaction costs increase substantially compared to General purpose V1 storage accounts
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/vmware-physical-azure-support-matrix.md
Windows 7 with SP1 64-bit | Supported from [Update rollup 36](https://support.mi
**Operating system** | **Details** |
Linux | Only 64-bit systems are supported. 32-bit systems aren't supported.<br/><br/>Every Linux server should have the [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) installed. They are required to boot the server in Azure after test failover/failover. If in-built LIS components are missing, be sure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication so that the machines can boot in Azure. <br/><br/> Site Recovery orchestrates failover to run Linux servers in Azure. However, Linux vendors might limit support to only distribution versions that haven't reached end-of-life.<br/><br/> On Linux distributions, only the stock kernels that are part of the distribution minor version release/update are supported.<br/><br/> Upgrading protected machines across major Linux distribution versions isn't supported. To upgrade, disable replication, upgrade the operating system, and then enable replication again.<br/><br/> [Learn more](https://support.microsoft.com/help/2941892/support-for-linux-and-open-source-technology-in-azure) about support for Linux and open-source technology in Azure.
-Linux Red Hat Enterprise | 5.2 to 5.11</b><br/> 6.1 to 6.10</b> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9 Beta version](https://support.microsoft.com/help/4578241/), [7.9](https://support.microsoft.com/help/4590304/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4, 8.5 <br/> Few older kernels on servers running Red Hat Enterprise Linux 5.2-5.11 & 6.1-6.10 do not have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure.
+Linux Red Hat Enterprise | 5.2 to 5.11<br/> 6.1 to 6.10 </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9 Beta version](https://support.microsoft.com/help/4578241/), [7.9](https://support.microsoft.com/help/4590304/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4 (4.18.0-305.30.1.el8_4.x86_64 or higher), 8.5 (4.18.0-348.5.1.el8_5.x86_64 or higher) <br/> A few older kernels on servers running Red Hat Enterprise Linux 5.2-5.11 & 6.1-6.10 do not have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, be sure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication so that the machines can boot in Azure.
Linux: CentOS | 5.2 to 5.11<br/> 6.1 to 6.10<br/> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4, 8.5 <br/><br/> A few older kernels on servers running CentOS 5.2-5.11 & 6.1-6.10 do not have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, be sure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication so that the machines can boot in Azure.
Ubuntu | Ubuntu 14.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions)<br/>Ubuntu 16.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 18.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 20.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> (*includes support for all 14.04.*x*, 16.04.*x*, 18.04.*x*, 20.04.*x* versions)
Debian | Debian 7/Debian 8 (includes support for all 7. *x*, 8. *x* versions); Debian 9 (includes support for 9.1 to 9.13. Debian 9.0 is not supported.), Debian 10 [(Review supported kernel versions)](#debian-kernel-versions)
spring-cloud Concept Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/concept-security-controls.md
A security control is a quality or feature of an Azure service that contributes
| Server-side encryption at rest: Microsoft-managed keys | Yes | User uploaded source and artifacts, config server settings, app settings, and data in persistent storage are stored in Azure Storage, which automatically encrypts the content at rest.<br><br>Config server cache, runtime binaries built from uploaded source, and application logs during the application lifetime are saved to Azure managed disk, which automatically encrypts the content at rest.<br><br>Container images built from user uploaded source are saved in Azure Container Registry, which automatically encrypts the image content at rest. | [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md)<br><br>[Server-side encryption of Azure managed disks](../virtual-machines/disk-encryption.md)<br><br>[Container image storage in Azure Container Registry](../container-registry/container-registry-storage.md) |
| Encryption in transit | Yes | User app public endpoints use HTTPS for inbound traffic by default. | |
| API calls encrypted | Yes | Management calls to configure Azure Spring Cloud service occur via Azure Resource Manager calls over HTTPS. | [Azure Resource Manager](../azure-resource-manager/index.yml) |
+| Customer Lockbox | Yes | Use Customer Lockbox to review and approve or reject Microsoft's requests to access relevant customer data during support scenarios. | [Customer Lockbox for Microsoft Azure](../security/fundamentals/customer-lockbox-overview.md) |
**Network access security controls**
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
-# Known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage (preview)
+# Known issues with SSH File Transfer Protocol (SFTP) support in Azure Blob Storage (preview)
This article describes limitations and known issues of SFTP support in Azure Blob Storage.
This article describes limitations and known issues of SFTP support in Azure Blo
| ecdsa-sha2-nistp384| diffie-hellman-group16-sha512 | aes256-cbc | | ||| aes192-cbc ||
-SFTP support for Azure Blob Storage currently limits its cryptographic algorithm support in accordance to the Microsoft Security Development Lifecycle (SDL). We strongly recommend that customers utilize SDL approved algorithms to securely access their data. More details can be found [here](/security/sdl/cryptographic-recommendations)
+SFTP support in Azure Blob Storage currently limits its cryptographic algorithm support in accordance with the Microsoft Security Development Lifecycle (SDL). We strongly recommend that customers use SDL-approved algorithms to securely access their data. For more information, see the [Microsoft SDL cryptographic recommendations](/security/sdl/cryptographic-recommendations).
## Security
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
Title: Connect to Azure Blob Storage using SFTP (preview) | Microsoft Docs
-description: Learn how to enable SFTP support for your Azure Blob Storage account so that you can directly connect to your Azure Storage account by using an SFTP client.
+description: Learn how to enable SFTP support in your Azure Blob Storage account so that you can directly connect to your Azure Storage account by using an SFTP client.
Before you can enable SFTP support, you must register the SFTP feature with your
> [!div class="mx-imgBorder"] > ![Preview setting](./media/secure-file-transfer-protocol-support-how-to/preview-features-setting.png)
-4. In the **Preview features** page, select the **SFTP support for Azure Blob Storage** feature, and then select **Register**.
+4. In the **Preview features** page, select the **SFTP support in Azure Blob Storage** feature, and then select **Register**.
### Verify feature registration
Verify that the feature is registered before continuing with the other steps in
1. Open the **Preview features** page of your subscription.
-2. Locate the **SFTP support for Azure Blob Storage** feature and make sure that **Registered** appears in the **State** column.
+2. Locate the **SFTP support in Azure Blob Storage** feature and make sure that **Registered** appears in the **State** column.
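If you prefer to script the registration instead of using the portal steps above, the following Azure CLI sketch shows the general pattern. The AFEC feature name `AllowSFTP` is an assumption and should be confirmed against the names listed in the **Preview features** page before you rely on it.

```azurecli
# Sketch only: register the SFTP preview feature and check its state.
# The feature name "AllowSFTP" is an assumption; confirm it in the portal's Preview features list.
az feature register --namespace Microsoft.Storage --name AllowSFTP
az feature show --namespace Microsoft.Storage --name AllowSFTP --query properties.state
```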
## Enable SFTP support
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/secure-file-transfer-protocol-support.md
Title: SFTP support for Azure Blob Storage (preview) | Microsoft Docs
+ Title: SFTP support in Azure Blob Storage (preview) | Microsoft Docs
description: Blob storage now supports the SSH File Transfer Protocol (SFTP).
-# SSH File Transfer Protocol (SFTP) support for Azure Blob Storage (preview)
+# SSH File Transfer Protocol (SFTP) support in Azure Blob Storage (preview)
Blob storage now supports the SSH File Transfer Protocol (SFTP). This support provides the ability to securely connect to Blob Storage accounts via an SFTP endpoint, allowing you to use SFTP for file access, file transfer, and file management.
Azure allows secure data transfer to Blob Storage accounts using Azure Blob serv
Prior to the release of this feature, if you wanted to use SFTP to transfer data to Azure Blob Storage, you had to either purchase a third-party product or orchestrate your own solution: create a virtual machine (VM) in Azure to host an SFTP server, and then figure out a way to move data into the storage account.
-Now, with SFTP support for Azure Blob Storage, you can enable an SFTP endpoint for Blob Storage accounts with a single setting. Then you can set up local user identities for authentication to transfer data securely without the need to do any additional work.
+Now, with SFTP support in Azure Blob Storage, you can enable an SFTP endpoint for Blob Storage accounts with a single setting. Then you can set up local user identities for authentication to transfer data securely without the need to do any additional work.
-This article describes SFTP support for Azure Blob Storage. To learn how to enable SFTP for your storage account, see [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP) (preview)](secure-file-transfer-protocol-support-how-to.md).
+This article describes SFTP support in Azure Blob Storage. To learn how to enable SFTP for your storage account, see [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP) (preview)](secure-file-transfer-protocol-support-how-to.md).
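As a rough illustration of the end state, once the endpoint and a local user are set up, a standard OpenSSH client can connect directly. The storage account name `contoso` and local user `user1` below are placeholders; confirm the exact connection string format in the how-to article linked above.

```console
# Sketch with placeholder names: connect with the OpenSSH sftp client,
# using the <storage-account>.<local-user> username form against the Blob endpoint.
sftp contoso.user1@contoso.blob.core.windows.net
```

Authentication then proceeds with the password or SSH key pair configured for that local user.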
## SFTP and the hierarchical namespace
storage Storage Account Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-account-overview.md
Previously updated : 05/14/2021 Last updated : 01/10/2022
The following table describes the legacy storage account types. These account ty
| Type of legacy storage account | Supported storage services | Redundancy options | Deployment model | Usage |
|--|--|--|--|--|
-| Standard general-purpose v1 | Blob, Queue, and Table storage, Azure Files | LRS/GRS/RA-GRS | Resource Manager, Classic | General-purpose v1 accounts may not have the latest features or the lowest per-gigabyte pricing. Consider using for these scenarios:<br /><ul><li>Your applications require the Azure [classic deployment model](../../azure-portal/supportability/classic-deployment-model-quota-increase-requests.md).</li><li>Your applications are transaction-intensive or use significant geo-replication bandwidth, but don't require large capacity. In this case, general-purpose v1 may be the most economical choice.</li><li>You use a version of the Azure Storage REST API that is earlier than 2014-02-14 or a client library with a version lower than 4.x, and you can't upgrade your application.</li></ul> |
+| Standard general-purpose v1 | Blob, Queue, and Table storage, Azure Files | LRS/GRS/RA-GRS | Resource Manager, Classic | General-purpose v1 accounts may not have the latest features or the lowest per-gigabyte pricing. Consider using a general-purpose v1 account for these scenarios:<br /><ul><li>Your applications require the Azure [classic deployment model](../../azure-portal/supportability/classic-deployment-model-quota-increase-requests.md).</li><li>Your applications are transaction-intensive or use significant geo-replication bandwidth, but don't require large capacity. In this case, a general-purpose v1 account may be the most economical choice.</li><li>You use a version of the Azure Storage REST API that is earlier than 2014-02-14 or a client library with a version lower than 4.x, and you can't upgrade your application.</li><li>You are selecting a storage account to use as a cache for Azure Site Recovery. Because Site Recovery is transaction-intensive, a general-purpose v1 account may be more cost-effective. For more information, see [Support matrix for Azure VM disaster recovery between Azure regions](../../site-recovery/azure-to-azure-support-matrix.md#cache-storage).</li></ul> |
| Standard Blob storage | Blob storage (block blobs and append blobs only) | LRS/GRS/RA-GRS | Resource Manager | Microsoft recommends using standard general-purpose v2 accounts instead when possible. |

## Next steps
storage Storage Files Identity Ad Ds Configure Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-identity-ad-ds-configure-permissions.md
If you have directories or files in on-premises file servers with Windows DACLs
Use the following Windows command to grant full permissions to all directories and files under the file share, including the root directory. Remember to replace the placeholder values in the example with your own values.
```
-icacls <mounted-drive-letter>: /grant <user-email>:(f)
+icacls <mounted-drive-letter>: /grant <user-upn>:(f)
```
For more information on how to use icacls to set Windows ACLs and on the different types of supported permissions, see [the command-line reference for icacls](/windows-server/administration/windows-commands/icacls).
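For instance, a minimal sketch with placeholder values, assuming the share is mounted as drive Z: and the user's UPN is user1@contoso.com:

```
icacls Z: /grant user1@contoso.com:(f)
```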
storage Table Storage How To Use Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/tables/table-storage-how-to-use-powershell.md
New-AzStorageTable -Name $tableName -Context $ctx
## Retrieve a list of tables in the storage account
-Retrieve a list of tables in the storage account using [Get-AzStorageTable](/powershell/module/azure.storage/Get-AzureStorageTable).
+Retrieve a list of tables in the storage account using [Get-AzStorageTable](/powershell/module/az.storage/Get-AzStorageTable).
```powershell
Get-AzStorageTable -Context $ctx | select Name
Get-AzStorageTable -Context $ctx | select Name
## Retrieve a reference to a specific table
-To perform operations on a table, you need a reference to the specific table. Get a reference using [Get-AzStorageTable](/powershell/module/azure.storage/Get-AzureStorageTable).
+To perform operations on a table, you need a reference to the specific table. Get a reference using [Get-AzStorageTable](/powershell/module/az.storage/Get-AzStorageTable).
```powershell $storageTable = Get-AzStorageTable ΓÇôName $tableName ΓÇôContext $ctx
virtual-machines Field Programmable Gate Arrays Attestation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/field-programmable-gate-arrays-attestation.md
The FPGA Attestation service performs a series of validations on a design checkpoint file (called a "netlist") generated by the Xilinx toolset and produces a file that contains the validated image (called a "bitstream") that can be loaded onto the Xilinx U250 FPGA card in an NP series VM.

## News
-The current attestation service is using Vitis 2020.2 from Xilinx, on Jan 10th 2022, we'll be moving to Vitis 2021.1, the change should be transparent to most users. Once your designs are "attested" using Vitis 2021.1, you should be moving to XRT2021.1. Xilinx will publish new marketplace images based on XRT 2021.1.
+The current attestation service uses Vitis 2020.2 from Xilinx. On Jan 17th 2022, we'll move to Vitis 2021.1; the change should be transparent to most users. Once your designs are "attested" using Vitis 2021.1, you should move to XRT 2021.1. Xilinx will publish new marketplace images based on XRT 2021.1.
Note that designs already attested on Vitis 2020.2 will work on the current deployment marketplace images as well as on new images based on XRT 2021.1. As part of the move to 2021.1, Xilinx introduced a new DRC regarding BUFCE_LEAF that might cause some designs previously working on Vitis 2020.2 to fail attestation. For more details, see [Xilinx AR 75980 UltraScale/UltraScale+ BRAM: CLOCK_DOMAIN = Common Mode skew checks](https://support.xilinx.com/s/article/75980?language=en_US).
virtual-machines Hbv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/hbv3-series.md
Previously updated : 03/12/2021 Last updated : 01/10/2022
All HBv3-series VMs feature 200 Gb/sec HDR InfiniBand from NVIDIA Networking to
[Live Migration](maintenance-and-updates.md): Not Supported<br> [Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br>
-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Coming soon<br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported ([Learn more](https://techcommunity.microsoft.com/t5/azure-compute/accelerated-networking-on-hb-hc-hbv2-and-ndv2/ba-p/2067965) about performance and potential issues) <br>
[Ephemeral OS Disks](ephemeral-os-disks.md): Supported<br> <br>
virtual-machines Login Using Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/login-using-aad.md
- Title: Log in to a Linux VM with Azure Active Directory credentials
-description: Learn how to create and configure a Linux VM to sign in using Azure Active Directory authentication.
---- Previously updated : 11/22/2021--------
-# Deprecated: Login to a Linux virtual machine in Azure with Azure Active Directory using device code flow authentication
-
-> [!CAUTION]
-> **The public preview feature described in this article was deprecated August 15, 2021.**
->
-> This feature is being replaced with the ability to use Azure AD based SSH using openSSH certificate-based authentication. This feature is now generally available! For more information see the article, [Login to a Linux virtual machine in Azure with Azure Active Directory using SSH certificate-based authentication](../../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md). To migrate from the old version to this version, see [Migration from previous preview](../../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md#migration-from-previous-preview)
-
-To improve the security of Linux virtual machines (VMs) in Azure, you can integrate with Azure Active Directory (AD) authentication. When you use Azure AD authentication for Linux VMs, you centrally control and enforce policies that allow or deny access to the VMs. This article shows you how to create and configure a Linux VM to use Azure AD authentication.
-
-There are many benefits of using Azure AD authentication to log in to Linux VMs in Azure, including:
--- **Improved security:**
- - You can use your corporate AD credentials to log in to Azure Linux VMs. There is no need to create local administrator accounts and manage credential lifetime.
- - By reducing your reliance on local administrator accounts, you do not need to worry about credential loss/theft, users configuring weak credentials etc.
- - The password complexity and password lifetime policies configured for your Azure AD directory help secure Linux VMs as well.
- - To further secure login to Azure virtual machines, you can configure multi-factor authentication.
- - The ability to log in to Linux VMs with Azure Active Directory also works for customers that use [Federation Services](../../active-directory/hybrid/how-to-connect-fed-whatis.md).
--- **Seamless collaboration:** With Azure role-based access control (Azure RBAC), you can specify who can sign in to a given VM as a regular user or with administrator privileges. When users join or leave your team, you can update the Azure RBAC policy for the VM to grant access as appropriate. This experience is much simpler than having to scrub VMs to remove unnecessary SSH public keys. When employees leave your organization and their user account is disabled or removed from Azure AD, they no longer have access to your resources.-
-## Supported Azure regions and Linux distributions
-
-The following Linux distributions are currently supported during the preview of this feature:
-
-| Distribution | Version |
-| | |
-| CentOS | CentOS 6, CentOS 7 |
-| Debian | Debian 9 |
-| openSUSE | openSUSE Leap 42.3 |
-| RedHat Enterprise Linux | RHEL 6, RHEL 7 |
-| SUSE Linux Enterprise Server | SLES 12 |
-| Ubuntu Server | Ubuntu 14.04 LTS, Ubuntu Server 16.04, and Ubuntu Server 18.04 |
-
-> [!IMPORTANT]
-> The preview is not supported in Azure Government or sovereign clouds.
->
-> It's not supported to use this extension on Azure Kubernetes Service (AKS) clusters. For more information, see [Support policies for AKS](../../aks/support-policies.md).
-
-If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.0.31 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
-
-## Network requirements
-
-To enable Azure AD authentication for your Linux VMs in Azure, you need to ensure your VMs network configuration permits outbound access to the following endpoints over TCP port 443:
-
-* https:\//login.microsoftonline.com
-* https:\//login.windows.net
-* https:\//device.login.microsoftonline.com
-* https:\//pas.windows.net
-* https:\//management.azure.com
-* https:\//packages.microsoft.com
-
-> [!NOTE]
-> Currently, Azure network security groups can't be configured for VMs enabled with Azure AD authentication.
-
-## Create a Linux virtual machine
-
-Create a resource group with [az group create](/cli/azure/group#az_group_create), then create a VM with [az vm create](/cli/azure/vm#az_vm_create) using a supported distro and in a supported region. The following example deploys a VM named *myVM* that uses *Ubuntu 16.04 LTS* into a resource group named *myResourceGroup* in the *southcentralus* region. In the following examples, you can provide your own resource group and VM names as needed.
-
-```azurecli-interactive
-az group create --name myResourceGroup --location southcentralus
-
-az vm create \
- --resource-group myResourceGroup \
- --name myVM \
- --image UbuntuLTS \
- --admin-username azureuser \
- --generate-ssh-keys
-```
-
-It takes a few minutes to create the VM and supporting resources.
-
-## Install the Azure AD login VM extension
-
-> [!NOTE]
-> If deploying this extension to a previously created VM ensure the machine has at least 1GB of memory allocated else the extension will fail to install
-
-To log in to a Linux VM with Azure AD credentials, install the Azure Active Directory login VM extension. VM extensions are small applications that provide post-deployment configuration and automation tasks on Azure virtual machines. Use [az vm extension set](/cli/azure/vm/extension#az_vm_extension_set) to install the *AADLoginForLinux* extension on the VM named *myVM* in the *myResourceGroup* resource group:
-
-```azurecli-interactive
-az vm extension set \
- --publisher Microsoft.Azure.ActiveDirectory.LinuxSSH \
- --name AADLoginForLinux \
- --resource-group myResourceGroup \
- --vm-name myVM
-```
-
-The *provisioningState* of *Succeeded* is shown once the extension is successfully installed on the VM. The VM needs a running VM agent to install the extension. For more information, see [VM Agent Overview](../extensions/agent-windows.md).
-
-## Configure role assignments for the VM
-
-Azure role-based access control (Azure RBAC) policy determines who can log in to the VM. Two Azure roles are used to authorize VM login:
--- **Virtual Machine Administrator Login**: Users with this role assigned can log in to an Azure virtual machine with Windows Administrator or Linux root user privileges.-- **Virtual Machine User Login**: Users with this role assigned can log in to an Azure virtual machine with regular user privileges.-
-> [!NOTE]
-> To allow a user to log in to the VM over SSH, you must assign either the *Virtual Machine Administrator Login* or *Virtual Machine User Login* role. The Virtual Machine Administrator Login and Virtual Machine User Login roles use dataActions and thus cannot be assigned at management group scope. Currently these roles can only be assigned at the subscription, resource group or resource scope. An Azure user with the *Owner* or *Contributor* roles assigned for a VM do not automatically have privileges to log in to the VM over SSH.
-
-The following example uses [az role assignment create](/cli/azure/role/assignment#az_role_assignment_create) to assign the *Virtual Machine Administrator Login* role to the VM for your current Azure user. The username of your active Azure account is obtained with [az account show](/cli/azure/account#az_account_show), and the *scope* is set to the VM created in a previous step with [az vm show](/cli/azure/vm#az_vm_show). The scope could also be assigned at a resource group or subscription level, and normal Azure RBAC inheritance permissions apply. For more information, see [Azure RBAC](../../role-based-access-control/overview.md)
-
-```azurecli-interactive
-username=$(az account show --query user.name --output tsv)
-vm=$(az vm show --resource-group myResourceGroup --name myVM --query id -o tsv)
-
-az role assignment create \
- --role "Virtual Machine Administrator Login" \
- --assignee $username \
- --scope $vm
-```
-
-> [!NOTE]
-> If your AAD domain and logon username domain do not match, you must specify the object ID of your user account with the *--assignee-object-id*, not just the username for *--assignee*. You can obtain the object ID for your user account with [az ad user list](/cli/azure/ad/user#az_ad_user_list).
-
-For more information on how to use Azure RBAC to manage access to your Azure subscription resources, see using the [Azure CLI](../../role-based-access-control/role-assignments-cli.md), [Azure portal](../../role-based-access-control/role-assignments-portal.md), or [Azure PowerShell](../../role-based-access-control/role-assignments-powershell.md).
-
-## Using Conditional Access
-
-You can enforce Conditional Access policies such as multi-factor authentication or user sign-in risk check before authorizing access to Linux VMs in Azure that are enabled with Azure AD sign in. To apply Conditional Access policy, you must select "Microsoft Azure Linux Virtual Machine Sign-In" app from the cloud apps or actions assignment option and then use Sign-in risk as a condition and/or require multi-factor authentication as a grant access control.
-
-> [!WARNING]
-> Per-user Enabled/Enforced Azure AD Multi-Factor Authentication is not supported for VM sign-in.
-
-## Log in to the Linux virtual machine
-
-First, view the public IP address of your VM with [az vm show](/cli/azure/vm#az_vm_show):
-
-```azurecli-interactive
-az vm show --resource-group myResourceGroup --name myVM -d --query publicIps -o tsv
-```
-
-Log in to the Azure Linux virtual machine using your Azure AD credentials. The `-l` parameter lets you specify your own Azure AD account address. Replace the example account with your own. Account addresses should be entered in all lowercase. Replace the example IP address with the public IP address of your VM from the previous command.
-
-```console
-ssh -l azureuser@contoso.onmicrosoft.com 10.11.123.456
-```
-
-You are prompted to sign in to Azure AD with a one-time use code at [https://microsoft.com/devicelogin](https://microsoft.com/devicelogin). Copy and paste the one-time use code into the device login page.
-
-When prompted, enter your Azure AD login credentials at the login page.
-
-The following message is shown in the web browser when you have successfully authenticated: `You have signed in to the Microsoft Azure Linux Virtual Machine Sign-In application on your device.`
-
-Close the browser window, return to the SSH prompt, and press the **Enter** key.
-
-You are now signed in to the Azure Linux virtual machine with the role permissions as assigned, such as *VM User* or *VM Administrator*. If your user account is assigned the *Virtual Machine Administrator Login* role, you can use `sudo` to run commands that require root privileges.
-
-## Sudo and AAD login
-
-The first time that you run sudo, you will be asked to authenticate a second time. If you don't want to have to authenticate again to run sudo, you can edit your sudoers file `/etc/sudoers.d/aad_admins` and replace this line:
-
-```bash
-%aad_admins ALL=(ALL) ALL
-```
-
-With this line:
-
-```bash
-%aad_admins ALL=(ALL) NOPASSWD:ALL
-```
-
-## Troubleshoot sign-in issues
-
-Some common errors when you try to SSH with Azure AD credentials include no Azure roles assigned, and repeated prompts to sign in. Use the following sections to correct these issues.
-
-### Access denied: Azure role not assigned
-
-If you see the following error on your SSH prompt, verify that you have configured Azure RBAC policies for the VM that grants the user either the *Virtual Machine Administrator Login* or *Virtual Machine User Login* role:
-
-```output
-login as: azureuser@contoso.onmicrosoft.com
-Using keyboard-interactive authentication.
-To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code FJX327AXD to authenticate. Press ENTER when ready.
-Using keyboard-interactive authentication.
-Access denied: to sign-in you be assigned a role with action 'Microsoft.Compute/virtualMachines/login/action', for example 'Virtual Machine User Login'
-Access denied
-```
-> [!NOTE]
-> If you are running into issues with Azure role assignments, see [Troubleshoot Azure RBAC](../../role-based-access-control/troubleshooting.md#azure-role-assignments-limit).
-
-### Continued SSH sign-in prompts
-
-If you successfully complete the authentication step in a web browser, you may be immediately prompted to sign in again with a fresh code. This error is typically caused by a mismatch between the sign-in name you specified at the SSH prompt and the account you signed in to Azure AD with. To correct this issue:
--- Verify that the sign-in name you specified at the SSH prompt is correct. A typo in the sign-in name could cause a mismatch between the sign-in name you specified at the SSH prompt and the account you signed in to Azure AD with. For example, you typed *azuresuer\@contoso.onmicrosoft.com* instead of *azureuser\@contoso.onmicrosoft.com*.-- If you have multiple user accounts, make sure you don't provide a different user account in the browser window when signing in to Azure AD.-- Linux is a case-sensitive operating system. There is a difference between 'Azureuser@contoso.onmicrosoft.com' and 'azureuser@contoso.onmicrosoft.com', which can cause a mismatch. Make sure that you specify the UPN with the correct case-sensitivity at the SSH prompt.-
-### Other limitations
-
-Users that inherit access rights through nested groups or role assignments aren't currently supported. The user or group must be directly assigned the [required role assignments](#configure-role-assignments-for-the-vm). For example, the use of management groups or nested group role assignments won't grant the correct permissions to allow the user to sign in.
-
-## Preview feedback
-
-Share your feedback about this preview feature or report issues using it on the [Azure AD feedback forum](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789)
-
-## Next steps
-
-For more information on Azure Active Directory, see [What is Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md)
virtual-machines Ndm A100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ndm-a100-v4-series.md
These instances provide excellent performance for many AI, ML, and analytics too
[Live Migration](maintenance-and-updates.md): Not Supported<br> [Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> [VM Generation Support](generation-2.md): Generation 2<br>
-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Not Supported<br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br>
[Ephemeral OS Disks](ephemeral-os-disks.md): Supported<br> InfiniBand: Supported, GPUDirect RDMA, 8 x 200 Gigabit HDR<br> Nvidia NVLink Interconnect: Supported<br>
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/virtual-wan-faq.md
A scale unit is a unit defined to pick an aggregate throughput of a gateway in V
### What is the difference between an Azure virtual network gateway (VPN Gateway) and an Azure Virtual WAN VPN gateway?
-Virtual WAN provides large-scale site-to-site connectivity and is built for throughput, scalability, and ease of use. When you connect a site to a Virtual WAN VPN gateway, it is different from a regular virtual network gateway that uses a gateway type 'VPN'. Similarly, when you connect an ExpressRoute circuit to a Virtual WAN hub, it uses a different resource for the ExpressRoute gateway than the regular virtual network gateway that uses gateway type 'ExpressRoute'.
+Virtual WAN provides large-scale site-to-site connectivity and is built for throughput, scalability, and ease of use. When you connect a site to a Virtual WAN VPN gateway, it is different from a regular virtual network gateway that uses a gateway type 'Site-to-site VPN'. When you want to connect remote users to Virtual WAN, you use a gateway type 'Point-to-site VPN'. The Point-to-site and Site-to-site VPN Gateways are separate entities in the Virtual WAN hub and must be individually deployed. Similarly, when you connect an ExpressRoute circuit to a Virtual WAN hub, it uses a different resource for the ExpressRoute gateway than the regular virtual network gateway that uses gateway type 'ExpressRoute'.
Virtual WAN supports up to 20-Gbps aggregate throughput both for VPN and ExpressRoute. Virtual WAN also has automation for connectivity with an ecosystem of CPE branch device partners. CPE branch devices have built-in automation that autoprovisions and connects into Azure Virtual WAN. These devices are available from a growing ecosystem of SD-WAN and VPN partners. See the [Preferred Partner List](virtual-wan-locations-partners.md).
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
description: This page provides information on web application firewall CRS rule
Previously updated : 09/02/2021 Last updated : 01/10/2022
Application Gateway web application firewall (WAF) protects web applications fro
## Core rule sets
-The Application Gateway WAF comes pre-configured with CRS 3.0 by default. But you can choose to use CRS 3.2, 3.1, or 2.2.9 instead.
+The Application Gateway WAF comes pre-configured with CRS 3.1 by default. But you can choose to use CRS 3.2, 3.0, or 2.2.9 instead.
CRS 3.2 (public preview) offers a new engine and new rule sets defending against Java injection attacks, an initial set of file upload checks, fixed false positives, and more.
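If you manage the WAF configuration directly on the gateway rather than through a WAF policy, switching rule set versions can also be scripted. The following Azure CLI sketch uses placeholder gateway and resource group names; adjust the firewall mode and version to match your environment.

```azurecli
# Sketch with placeholder resource names: switch the gateway's WAF
# configuration to OWASP CRS 3.1 in detection mode.
az network application-gateway waf-config set \
    --gateway-name MyAppGateway \
    --resource-group MyResourceGroup \
    --enabled true \
    --firewall-mode Detection \
    --rule-set-type OWASP \
    --rule-set-version 3.1
```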